I'm fairly sure this is because of the approach I'm using - possibly I'm not feeding the files to the convert command in a way that lets it release the memory from one file before starting the next. I'm entirely willing to abandon the bash script approach if someone has a better idea, but I'm not skilled enough in any scripting language to know where I'm going wrong. Here's my bash script at the moment. The latest version reads a list of folder paths from a text file, loops through the .tif files in that folder, then goes on to the next folder (by reading the next line in the text file). There are no duplicate file names, so writing all the PDFs to a single output directory is no problem, though that isn't an essential requirement.
Environment: The machine running the conversion is CentOS 5.6 with ImageMagick 6.28, 64-bit, with 2 x quad-core processors and 8 GB of RAM.
Source tif files are on a mounted NTFS file system from an external USB drive.
Code:
#!/bin/bash
# Read the list of source directories, one per line.
while IFS= read -r d ; do
    # Null-delimited find output so file names with spaces survive the loop.
    find "$d" -type f -name "*.tif" -print0 | while IFS= read -r -d '' f ; do
        filename=$(basename "$f")
        echo "Now Processing File: ${filename}"
        # Strip the .tif extension to build the output name.
        filenamenoext=${filename%.tif}
        # Skip files that have already been converted.
        if [ ! -f "/home/myhomedir/${filenamenoext}.pdf" ] ; then
            convert "$f" "/home/myhomedir/${filenamenoext}.pdf"
        fi
    done
done < /tmp/dirlist.txt
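One thing I've read about but haven't tried yet is ImageMagick's -limit option, which is supposed to cap the memory and memory-mapped cache a single convert run can use. Here's a minimal sketch of what I mean - the file paths and the 1GiB/2GiB values are just made-up examples, not from my actual setup:

Code:
#!/bin/bash
# Sketch only: cap ImageMagick's pixel-cache resources for one conversion.
# The paths and limit values below are illustrative placeholders.
src="/mnt/usbdrive/example.tif"      # hypothetical input file
out="/home/myhomedir/example.pdf"    # hypothetical output file
convert -limit memory 1GiB -limit map 2GiB "$src" "$out"

If something like that is the right direction, I could fold the -limit flags into the convert line inside the loop above.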