Expand a stack of files to equal size
-
- Posts: 18
- Joined: 2015-09-28T00:39:38-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
Heh. I just read a passing note that -clone isn't really copying pixels en masse.
I'm undecided whether I love the optimization, or hate the unpredictability of which operations will really cause the CPU fan to spin up... anyway, I'm reevaluating options.
Thanks for all the food for thought and experimentation, I definitely got pointed in some promising directions.
-
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: Expand a stack of files to equal size
That is correct. "-clone" doesn't copy any pixel values. However, any operation that modifies pixels of a clone will first cause the pixels to be copied, before the operation itself is performed.
In your case, the operation is "-layers merge", and I expect this doesn't require any pixels to be copied to clones. But it does involve reading every pixel in all the input images to make a merged image, which is the most time-consuming part of the command, and it is almost totally a waste of time as we only want the size.
The fastest way is probably a custom program that reads the input files, calculates the maximal size (which is trivial in a C program), extends each image to that size, and writes them to disk.
However, you say you "want to align them for denoising and despeckling", so I'm not sure why you first want to make them the same size. I would expect the first stage would be to find the required offsets that align the images, so they can be merged or otherwise processed with those offsets.
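The max-size calculation really is as trivial as described, in any language. A pure-shell sketch of just that step (the WxH sample values below are made up):

```shell
# Given "WxH" lines on stdin, print the maximal width and height,
# computed independently (the union canvas the images get padded to).
max_dims() {
  awk -Fx '{ if ($1 > w) w = $1; if ($2 > h) h = $2 } END { print w "x" h }'
}

# Example with made-up scan sizes:
printf '2480x3508\n2479x3510\n2481x3500\n' | max_dims
# -> 2481x3510
```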
snibgo's IM pages: im.snibgo.com
-
- Posts: 18
- Joined: 2015-09-28T00:39:38-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
The alignment program insists that the input images are of equal size.
-
- Posts: 18
- Joined: 2015-09-28T00:39:38-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
align_image_stack, part of the Hugin suite. Hugin is a photo stitching toolset, but beyond stitching it has also been used for compositing HDR images and for quality improvement.
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Expand a stack of files to equal size
Did you try my command to get the max size and then extend each image with -extent? See viewtopic.php?f=1&t=33496#p153646
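A sketch of that two-step idea (not necessarily the exact command from the linked post; the white background and the `padded-` prefix are placeholder choices):

```shell
# Pad every *.tif in the current directory to the union canvas size.
# Step 1 gets the size via -layers trim-bounds; -ping avoids decoding
# pixel data. Step 2 pads each file with -extent.
pad_to_max() {
  dims=$(convert -ping ./*.tif -layers trim-bounds -delete 1--1 -format "%P" info:)
  for f in ./*.tif; do
    convert "$f" -background white -gravity northwest -extent "$dims" "padded-${f##*/}"
  done
}
```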
-
- Posts: 18
- Joined: 2015-09-28T00:39:38-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
@fmw42 I'm in the process of trying out various approaches (not all of them in IM).
It's a somewhat bumpy ride and will take time, but I'll come back with results once I have them.
Or with questions. Not very likely though - I read up on all the options that were mentioned here, and it looks like you provided me with enough information that I can now find my way through the docs. Many thanks for that!
-
- Posts: 18
- Joined: 2015-09-28T00:39:38-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
@fmw42 I just got around to doing some timings, with some pretty unexpected results.
Not a real benchmark because I didn't rerun many commands, so any difference below a factor of two is probably meaningless. Still, I got interesting results.
FYI files are now 00-raw-scans/NN/page-MMMM.tif, where NN ranges from 00 to 15 and MMMM ranges from 0000 to 0089. Files with the same page-MMMM.tif name are supposed to be composited in the end.
Here we do:
Code: Select all
# Printing the pixel size of each file
time \
for f in $(find 00-raw-scans -type f -name "page-*.tif" | sort); do
echo $f: $(convert $f -format %P info:)
done
# Gives: real 0m43,240s, user 0m23,273s, sys 0m20,096s
An earlier run gave me something around 50 seconds.
Could be the filesystem cache warming up, or a YouTube video that stopped playing, or just chance.
10% variation isn't too uncommon though if you do not do a carefully controlled benchmark, so this isn't too surprising.
Code: Select all
# Printing the pixel size of each file using -ping
time \
for f in $(find 00-raw-scans -type f -name "page-*.tif" | sort); do
echo $f: $(convert -ping $f -format %P info:)
done
# Gives: real 0m11,971s, user 0m6,313s, sys 0m5,839s
-ping is considerably faster, as expected.
I would have expected a better speed-up though. Seems like IM is doing a lot more than reading just the width and the height. Well, whatever IM thinks is "reading image characteristics"... okay, noted, I won't be able to improve anything on that, moving on.
Code: Select all
# Listing all scans for each page
time \
for f in $(find 00-raw-scans -type f -name "page-*.tif" -printf "%f\n" | sort | uniq); do
echo $f: $(find 00-raw-scans -type f -name $f | sort)
done
# Gives: real 0m0,509s, user 0m0,352s, sys 0m0,308s
# Again: real 0m0,718s, user 0m0,490s, sys 0m0,426s
Just a control measurement, to get an idea how much time the various find commands are actually taking up.
It's a bit sluggish for what it's doing, so I'll probably want to be smarter about getting the list of pages, but I guess that's not of much interest to the IM community.
Code: Select all
# Printing the bounding box size for each page
time \
for f in $(find 00-raw-scans -type f -name "page-*.tif" -printf "%f\n" | sort | uniq); do
echo $f: $(convert $(find 00-raw-scans -type f -name $f) -layers trim-bounds -delete 1--1 -format %P info:)
done
# Gives: real 0m39,348s, user 0m17,141s, sys 0m22,219s
# Again: real 0m39,298s, user 0m17,135s, sys 0m22,165s
I varied this a bit with bigger and smaller terminal windows. It used to be a thing that scrolling could place a serious burden on the CPU.
That doesn't seem to be the case anymore, either because I have enough CPU cores, or because the pixel shoving is offloaded to the GPU.
Code: Select all
# Printing the bounding box size for each page, let's try -ping this time
time \
for f in $(find 00-raw-scans -type f -name "page-*.tif" -printf "%f\n" | sort | uniq); do
echo $f: $(convert -ping $(find 00-raw-scans -type f -name $f) -layers trim-bounds -delete 1--1 -format %P info:)
done
# Gives: real 0m2,175s, user 0m1,230s, sys 0m0,980s
Seems like -layers trim-bounds isn't reading pixels either.
Now that's a nice find.
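On being smarter about getting the list of pages: one option (a sketch, assuming the same 00-raw-scans layout as above) is a single find pass grouped by basename, instead of re-running find once per page name:

```shell
# One find pass over 00-raw-scans; awk groups full paths under their
# basename, so each output line is "page-MMMM.tif: path path ...".
list_pages() {
  find 00-raw-scans -type f -name "page-*.tif" | sort |
    awk -F/ '{ files[$NF] = files[$NF] " " $0 }
             END { for (p in files) print p ":" files[p] }' | sort
}
```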
-
- Posts: 18
- Joined: 2015-09-28T00:39:38-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
Another surprising find:
Code: Select all
ls -l <file>
took noticeably longer than
Code: Select all
convert -ping <file> -format %P info:
That was a nice reminder to me that filesystems tend to be optimized more for reading file content than for reading metadata.
Lesson to take home: if image metadata is all you want, do not needlessly retrieve stuff like last-modified or last-accessed times if you want your scripts to be snappy.
-
- Posts: 64
- Joined: 2017-10-03T10:39:52-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
toolforger wrote: ↑2018-02-10T16:43:59-07:00
> I would suggest making a list of your files and process with my layers trim-bounds. Then write a script loop over the same files to extend them using -extent WxH where WxH is the dimensions found from the trim-bounds command.
That's certainly more convenient than having to find the max of all widths and heights from `identify` output.
This is not that inconvenient actually. I have a script that does it (though I'm going to look into the layers method). Just do this:
Code: Select all
filew=`identify -format "%w\n" "$dir"/$firstchar*$finalextension | sort -rg | head -n 1`
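Along the same lines, both dimensions can come from one identify pass (a sketch; -ping keeps it cheap, and the `*.tif` glob is a placeholder for your own file list):

```shell
# Max width and height over all *.tif in the current directory,
# from a single identify pass; awk keeps the running maxima.
max_wh() {
  identify -ping -format "%w %h\n" ./*.tif |
    awk '{ if ($1 > w) w = $1; if ($2 > h) h = $2 } END { print w "x" h }'
}
```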
EDIT: As long as you add the -ping, @fmw42's method is about half the time of mine, with the advantage of reading the data just once.
PS: If it doesn't exist already, a nice page of user-contributed solutions to common problems would be great. A lot of stuff is embedded in the Usage pages, but I often find that I'm looking to solve a particular problem and the method for solving it isn't obvious to me, so I don't know where on the Usage pages to look; I just end up doing a search, which often puts me on StackExchange. Such a thing could be a wiki, perhaps.
-
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: Expand a stack of files to equal size
muccigrosso wrote: Such a thing could be a wiki, perhaps.
Good idea. A place where users could post solutions to problems they've encountered. And users could ask for advice on solving problems, and other users could answer. It could be a forum, of course. Oh, hang on...
snibgo's IM pages: im.snibgo.com
-
- Posts: 64
- Joined: 2017-10-03T10:39:52-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
snibgo wrote: ↑2018-03-18T10:24:30-07:00
> muccigrosso wrote: Such a thing could be a wiki, perhaps.
> Good idea. A place where users could post solutions to problems they've encountered. And users could ask for advice on solving problems, and other users could answer. It could be a forum, of course. Oh, hang on...
No, not that actually.
A forum is where people ask questions, often ostensibly about one thing but really about something else they weren't aware of.
A wiki can head off questions to a forum; it doesn't replace it, nor does the forum replace the wiki.
My own experience is that it can be difficult to find particular answers to my how-to questions in both the Usage pages and this forum. There are a number of reasons for that (including the lack of an easy search function on the Usage pages, unless I'm missing it). But also because sometimes things turn up in unexpected places and it's hard to search for them.
For example, this thread is titled "Expand a stack of files to equal size", yet we find:
1. The questioner's stack is not what IM means by stack
2. The ultimate task is increasing the size of some images in a group
3. Key to the process is efficiently finding the largest image in that group
Things like this mean searching the forum for how-to things is often frustrating and tedious. (Again, it happens to me all the time.) Likewise the Usage pages are organized by topic, but it's not always obvious where to search for things (like how to efficiently find the maximum size of a group of images).
With a wiki (or GitHub even), it's easy enough for users to contribute to this kind of thing, even if it means just sending people to existing stuff (like some of the Usage pages).
Fora are great, but allowing users to contribute to a wiki or manual is great too. Such a thing could easily be managed on GitHub, for example.
-
- Posts: 12159
- Joined: 2010-01-23T23:01:33-07:00
- Authentication code: 1151
- Location: England, UK
Re: Expand a stack of files to equal size
I'm not arguing against a Wiki. If someone wants to create (and moderate) one, great.
No, the Usage pages don't have a search function. Personally, I use https://duckduckgo.com/ (other search engines are available) with a search term of "ImageMagick" plus suitable specific keywords. This usually finds the answer in the Usage pages or these forums or stackoverflow etc.
snibgo's IM pages: im.snibgo.com
- fmw42
- Posts: 25562
- Joined: 2007-07-02T17:14:51-07:00
- Authentication code: 1152
- Location: Sunnyvale, California, USA
Re: Expand a stack of files to equal size
Someone had volunteered to rewrite the documentation pages into some searchable form like that, but it has been a year or more and I have not heard anything further.
-
- Posts: 64
- Joined: 2017-10-03T10:39:52-07:00
- Authentication code: 1151
Re: Expand a stack of files to equal size
Have you thought about putting them on GitHub and just taking PRs? I love them, but they could use a little TLC in the form of CSS and in the places where they're missing info (usually noted).