That is the way IM "convert" (and most other IM commands) are designed to work, and is appropriate in most situations. It is only with 'Really Massive Images' that you have problems with memory use.
Using a Q8 version of IM for handling large images is your first step. That simple method halves memory usage, but makes more complex processing (beyond simple crops) produce lower quality images.
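If you are not sure which quantum depth your installed IM was compiled with, the version banner tells you (the exact output varies a little between releases, but the "Q8"/"Q16" part is what to look for):
Code:
convert -version
#   Version: ImageMagick 6.x.x-x ... Q16 ...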
However IM can still process large images by offloading memory to a memory-mapped disk cache. This happens automatically when memory starts to become a problem, but you can also force it using the "-limit" option. It is however slower (very much slower).
See IM Examples, File Handling, Really Massive Image Handling
http://www.imagemagick.org/Usage/files/#massive
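For example, you can force IM to push its pixel cache out to disk by setting very low memory limits before the image is read. The limit values below are only illustrative; tune them for your own system:
Code:
convert -limit memory 32MiB -limit map 64MiB \
        input_image.png -crop 128x128 +repage tile_%03d.png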
One solution to reduce memory use is to do each tile crop and save the result, one tile (or perhaps one row) at a time.
Code:
convert input_image.png \
\( +clone -crop 128x128+0+0     +repage -write tile_0_0.png +delete \) \
\( +clone -crop 128x128+128+0   +repage -write tile_1_0.png +delete \) \
\( +clone -crop 128x128+256+0   +repage -write tile_2_0.png +delete \) \
\( +clone -crop 128x128+384+0   +repage -write tile_3_0.png +delete \) \
\( +clone -crop 128x128+512+0   +repage -write tile_4_0.png +delete \) \
... \
\( +clone -crop 128x128+0+128   +repage -write tile_0_1.png +delete \) \
\( +clone -crop 128x128+128+128 +repage -write tile_1_1.png +delete \) \
\( +clone -crop 128x128+256+128 +repage -write tile_2_1.png +delete \) \
... \
null:
The command can be programmatically generated, though it may hit command-line length limits.
Even with the "clone" it will have a rough memory cost of just: original_image + tile_image
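For example, a shell loop could build the per-tile argument list for you. This is only a sketch: the WIDTH, HEIGHT and TILE values, and the tile file names (named by pixel offset), are assumptions you will want to adjust:
Code:
#!/bin/sh
# sketch: generate one clone/crop/write operation per 128x128 tile
WIDTH=640 HEIGHT=256 TILE=128
args=""
y=0
while [ $y -lt $HEIGHT ]; do
  x=0
  while [ $x -lt $WIDTH ]; do
    args="$args ( +clone -crop ${TILE}x${TILE}+${x}+${y} +repage"
    args="$args -write tile_${x}_${y}.png +delete )"
    x=$((x+TILE))
  done
  y=$((y+TILE))
done
# parentheses expanded from a variable are passed to convert literally,
# so they do not need backslash escaping here
convert input_image.png $args null: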
ASIDE: IMv7 (in development) will have 'co-processing' capabilities. That is, you can run a "convert"-like command in the background, retrieve information about images, and send IM image processing commands from scripted loops. It would be ideal for doing image processing like the above.
Alternative... Streaming...
The better way is not to read the WHOLE image into memory at all, and this is where streaming image processing comes in.
Streaming processors only read one 'row of pixels' into memory at a time, and as such have a far lower memory footprint.
The "stream" command can for example extract one 'crop' area from the input image. However that means reading an image once for event tile you want to extract. This is probably not what you are want either. However it is likely to be faster than a memory mapped solution.
Stream processors exist in the image processing package "PbmPlus" or "NetPbm" but again it seems to be limited to just extracting one crop area from an image.
The ideal solution would be a "stream"-like image processor that, while reading each row from the input image, also holds open M output images for a crop of MxN tiles of WxH pixels. Then as each row is read, it writes each segment of W pixels to the appropriate one of the M output streams. After H rows, those M images are closed and M new images opened for the next set.
Something like this would be a great addition to any image processing library (PbmPlus or IM).
Unfortunately I know of no such program.
Even the lesser goal of a stream image processor that can separate a large image into a stream of 'rows of tiles' images would be a useful addition! Each smaller row could then be read in and processed individually at a much smaller memory cost.
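You can roughly approximate that 'row of tiles' idea today by combining "stream" and "convert": extract one full-width strip of the image at a time as raw pixels, then tile-crop just that strip. Again only a sketch; the image width, height, and pixel format are assumptions to adjust:
Code:
#!/bin/sh
# sketch: extract one 128-pixel-high strip at a time, then tile-crop it
WIDTH=640 HEIGHT=256 TILE=128
y=0
while [ $y -lt $HEIGHT ]; do
  stream -map rgb -storage-type char \
         -extract ${WIDTH}x${TILE}+0+${y} input_image.png strip.rgb
  convert -size ${WIDTH}x${TILE} -depth 8 rgb:strip.rgb \
          -crop ${TILE}x${TILE} +repage tile_row${y}_%d.png
  y=$((y+TILE))
done
rm -f strip.rgb
Only one strip is ever held in memory by "convert", though "stream" still re-reads the input file for each strip.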
Please report back here if you come across any other solutions; there are a lot of people interested in this type of thing. Especially me!
Of special interest is actually the reverse of tile cropping... That is, converting multiple tiles back into one huge image without using a lot of memory (e.g. a streaming "montage").