Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-09T21:22:49-07:00
by snibgo
I said above that:
snibgo wrote:Cropping each of 1920 frames into vertical strips is easy. Appending 1920 images together is easy. Doing both operations in one command isn't easy.
Thinking further, even if it were easy, I don't think it would be wise. This "cumulative append" would require writing and reading the same data over and over. On average, each cumulative file would be 1920/2 pixels wide, we have 1920 of them, and each one has to be opened and closed 1920 times. That isn't much better than reading every frame 1920*1920 times.
I should also say: my examples above use ".miff". ".mpc" may be faster. On my laptop, for small files, ".miff" is faster. I suppose the overhead of the second file cancels other benefits. But trials should be done.
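One way to run such a trial, as a rough sketch (the frame path and the repeat count here are placeholders):
Code: Select all
#!/bin/bash
# Rough timing sketch: compare repeated single-row crops from the same
# frame saved as MIFF vs MPC. "frame.jpg" stands in for one extracted frame.
convert frame.jpg frame.miff
convert frame.jpg frame.mpc          # also writes frame.cache alongside

echo "MIFF:"
time for (( i=0; i<100; i++ )); do
    convert frame.miff -crop 1080x1+0+0 +repage null:
done

echo "MPC:"
time for (( i=0; i<100; i++ )); do
    convert frame.mpc -crop 1080x1+0+0 +repage null:
done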
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-09T21:49:48-07:00
by fmw42
My experience is that MIFF is faster unless you are going to read the same file many, many times. There seems to be a time penalty when writing to MPC, though I have tested with very large images.
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-11T11:43:30-07:00
by killmoms
Hmm. Now I have a new, mysterious problem. Here's the code I came up with (basically resembles your pseudo code from earlier, snibgo):
Code: Select all
#!/bin/bash
# ImageMagick Tests for mbarcode.sh
FILES=($(find "${1}" -type f | egrep -i "\.jpg$" | sort))
i=1
echo -e "Dumping MIFFs..."
for f in ${FILES[*]}; do
framedir="f_$(printf %04d $i)"
mkdir -p tempcrops/${framedir}
convert ${f} +gravity -crop 1080x1 tempcrops/${framedir}/${framedir}_c_%04d.miff
(( i++ ))
done
DIRS=($(find "tempcrops/" -type d | sort))
echo -e "Assembling barcodes..."
for (( c=0; c<14; c++ )); do
echo "$(printf %04d $((${c}+1))) of 1920"
for d in ${DIRS[*]}; do
COLUMNS=($(find tempcrops -name "*$(printf %04d ${c}).miff"))
convert ${COLUMNS[@]} +repage -append -rotate -90 PNG24:barcode_$(printf %06d ${c}).png
done
done
rm -rf tempcrops
The problem I have is that, while the frame dumps are full-color JPEGs that are perfectly fine, I'm now getting grayscale MIFF files, so when it goes to assemble my PNGs they're lacking color information. I don't know where I'm messing this up: why would a convert/crop operation on true-color (8bpc RGB) source images yield grayscale MIFFs when no color conversion is specified?!
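For reference, a quick way to check what's actually in one of those strips (the path is just an example from the script above):
Code: Select all
# Diagnostic sketch: report the colorspace, class and image type of one strip
identify -format "%[colorspace] %r %[type]\n" tempcrops/f_0001/f_0001_c_0000.miff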
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-11T12:21:44-07:00
by snibgo
Weird. I can't see what would make it gray. You may have said, but I can't find it: what version of IM, and on what platform?
Code: Select all
convert ${f} +gravity -crop 1080x1 tempcrops/${framedir}/${framedir}_c_%04d.miff
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-11T12:54:51-07:00
by killmoms
Code: Select all
Version: ImageMagick 6.8.9-1 Q16 x86_64 2014-07-11 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2014 ImageMagick Studio LLC
Features: DPC Modules
Delegates: bzlib djvu fftw fontconfig freetype gslib jng jpeg lcms ltdl lzma png ps tiff webp x xml zlib
On OS X 10.10 Yosemite DP3. That said, I'd tried to configure Q8 from MacPorts before, so that might've messed something up. I decided to port uninstall and port install again with no extra options defined. In any case, after I did that I started playing around with manual tests, and for whatever reason my MIFFs started coming out correctly. I'm not sure what changed, but I'll take it.
That said, I've modified my code a bit, because I did a quick test with multi-page MIFFs (rather than a ton of individual files, because HFS+ filesystem performance with generating tons and tons of little files is notoriously bad, plus fseventsd really doesn't like it) and it was FAST AS HECK. It seems the MIFF format is very kind to accessing specific pages. I mean, it was with only 46 frames, but it seemed so much faster than before that I'm currently doing a test on a full-size barcode (1920 frames).
Code: Select all
FILES=($(find "${1}" -type f | egrep -i "\.jpg$" | sort))
i=1
echo -e "Dumping MIFFs..."
for f in ${FILES[*]}; do
framedir="f_$(printf %04d $i)"
mkdir -p tempcrops
convert ${f} +gravity -crop 1080x1 +repage tempcrops/${framedir}.miff
(( i++ ))
done
FILES=($(find "tempcrops/" -type f | sort))
echo -e "Assembling barcodes..."
for (( c=0; c<1920; c++ )); do
echo "$(printf %04d $((${c}+1))) of 1920"
columns=()
for f in ${FILES[@]}; do
columns+=("${f}[${c}]")
done
montage ${columns[@]} -mode Concatenate -tile 1x miff:- | convert - -rotate -90 barcode_$(printf %06d ${c}).png
done
rm -rf tempcrops
We'll see if this maintains speed at "full scale" (doing full-width barcodes and the full number of them).
EDIT: Yep, this is significantly faster: about 12 seconds per barcode rather than the minute and a half it was taking before, and with a lot less I/O activity (which suggests that reading specific pages from multi-page MIFFs doesn't require reading the entire file). That's still 6.5 hours to do a full 1920 barcodes, but that's a heck of a lot better than 2.5 days. I might look into using netpbm and that Perl script to output the proper sort of MIFF straight from ffmpeg for animation mode, or otherwise optimizing bits where there's currently some waste. But I think this is enough of an improvement that I'm done for now.
Thanks to both of you for helping out! I really appreciate it.
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-11T16:53:58-07:00
by killmoms
Well, never mind. After running the script for a while, I noticed it'd slowed down considerably (19s per barcode by the time it was into the mid-40s, when I re-launched it after a tweak), and my "data read" stat in Activity Monitor had climbed into the "200GB" range. Clearly reading specific columns doesn't involve jumping to an offset first. I feel like MPC + stream might be the only solution to this problem. Which means I really need to figure out how to get the Q8 version of IM installed, because otherwise I'm really gonna be out of disk space QUICK.
EDIT: MPC + stream was a bust (it seems to output 0KB files every time I try it, though maybe I'm just not thinking through it correctly), but PNM + stream seems to be good. Now 67 barcodes into a 672 barcode set and each barcode is still taking about 11 seconds to make. We'll see if this holds up, but I did some quick tests by artificially setting the stream extract to pull from the last few rows of the images (AKA making the last few barcodes of the full set without making all the early ones first) and it was still going quickly, so hopefully this is the solution.
EDIT 2: Well, now I'm totally confused. After what seemed like a successful trial last night on a 1280-slice SD-resolution barcode animation (the aforementioned 672-barcode set), I tried again today with a full 1920-slice HD-resolution animation. Suddenly Activity Monitor was showing SSD read activity through the roof, right around 200MB/s on average. Does stream not actually just seek to an offset based on the requested extract and the input file size? Was I somehow just missing the disk activity because, with the smaller files (1MB instead of 6.2MB), stream was launching and closing fast enough that its I/O wasn't being picked up in Activity Monitor? Am I out of luck without the help of a much more savvy, lower-level language? Might post in the Developers forum and reference this thread to see if anyone there has a suggestion.
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T11:24:39-07:00
by fmw42
killmoms wrote:
FILES=($(find "${1}" -type f | egrep -i "\.jpg$" | sort))
i=1
echo -e "Dumping MIFFs..."
for f in ${FILES[*]}; do
framedir="f_$(printf %04d $i)"
mkdir -p tempcrops
convert ${f} +gravity -crop 1080x1 +repage tempcrops/${framedir}.miff
(( i++ ))
done
FILES=($(find "tempcrops/" -type f | sort))
echo -e "Assembling barcodes..."
for (( c=0; c<1920; c++ )); do
echo "$(printf %04d $((${c}+1))) of 1920"
columns=()
for f in ${FILES[@]}; do
columns+=("${f}[${c}]")
done
montage ${columns[@]} -mode Concatenate -tile 1x miff:- | convert - -rotate -90 barcode_$(printf %06d ${c}).png
done
rm -rf tempcrops
I do not see how stream would help this code. Your loop is opening each input image only once. But it is opening and writing to each of 1920 (growing) miff files for each convert. That may be why it slows down. But I do not see how you could make it any more efficient apart from writing out each row and then later assembling, even though you will have a tremendous number of row images.
Seems to me that you are going to lose one way or the other, either reading the input images over and over or writing to many outputs over and over.
However, MPC format might help you extract one row from each image for a given loop. That is, read each image for each row you need to extract. You pay a slight penalty to convert all your input images to MPC format, but then reading them to extract a row should be fast.
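A minimal sketch of what I mean, reusing the layout and 1080x1 geometry from your earlier script (paths and counts are illustrative):
Code: Select all
#!/bin/bash
# Sketch: convert each frame to MPC once, then crop one row from every
# MPC file per pass. Paths, counts and geometry mirror the earlier script.
FILES=($(find "${1}" -type f | egrep -i "\.jpg$" | sort))
mkdir -p tempcrops
i=1
for f in "${FILES[@]}"; do
    convert "${f}" tempcrops/f_$(printf %05d $i).mpc   # each .mpc also gets a .cache file
    (( i++ ))
done
MPCS=($(find tempcrops -type f -name "*.mpc" | sort))
for (( c=0; c<1920; c++ )); do
    convert "${MPCS[@]}" -crop 1080x1+0+${c} +repage -append -rotate -90 \
        barcode_$(printf %05d ${c}).png
done
rm -rf tempcrops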
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T11:31:22-07:00
by killmoms
Yeah, that's outdated code at this point. Currently my test script looks like this:
Code: Select all
#!/bin/bash
# ImageMagick Tests for mbarcode.sh
mkdir -p tempcrops
echo -e "Dumping PPMs..."
ffmpeg -an -sn -i ${1} -r 0.25 -vf "transpose=1" -pix_fmt rgb24 -f image2 -vcodec ppm tempcrops/f_%05d.ppm &> /dev/null
mkdir -p anim
FILES=($(find "tempcrops" -type f -name *.ppm | sort))
echo -e "Assembling barcodes..."
for (( c=0; c<1920; c++ )); do
echo "$(printf %04d $((${c}+1))) of 1920..."
for f in ${FILES[@]}; do
stream -map rgb -storage-type char -extract 1080x1+0+${c} ${f} -
done |\
convert -depth 8 -size 1080x${#FILES[@]} rgb:- -rotate -90 anim/barcode_$(printf %05d ${c}).png
done
rm -rf tempcrops
I'm no longer opening convert many times; instead I'm piping raw pixels in from stream. If this turns out to be slower than opening a single file, I could instead >> to a raw temp.dat file within the loop and then open that with convert (clearing it before each pass through the parent loop). Either way, the problem right now is reading the input without having to load the whole file. I tried MPC for the source frames, but trying to stream -extract from that always returned zero KB of output (though, weirdly, no error messages), so I'm not sure what was happening there.
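For completeness, the temp-file variant mentioned above would look roughly like this (a sketch only; it reuses the FILES array, the anim directory and the geometry from the script above):
Code: Select all
# Sketch of the "append to a raw temp file" variant: collect one row per
# frame into row.dat, then hand the raw RGB data to convert in one go.
for (( c=0; c<1920; c++ )); do
    : > tempcrops/row.dat                      # truncate before each pass
    for f in "${FILES[@]}"; do
        stream -map rgb -storage-type char -extract 1080x1+0+${c} "${f}" - >> tempcrops/row.dat
    done
    convert -depth 8 -size 1080x${#FILES[@]} rgb:tempcrops/row.dat -rotate -90 \
        anim/barcode_$(printf %05d ${c}).png
done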
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T11:50:11-07:00
by fmw42
I don't think MPC is a streamable format. You need PPM or MIFF.
With MPC, I was not trying to stream, but to allow the input image in MPC format to be opened and one row cropped from it multiple times without paying too much of a penalty for opening the image. So I was expecting you to go back to your earlier code and just loop over each image to get the relevant row, then put the rows together for a barcode. Then repeat for the next row, reading each MPC image again.
From
http://www.imagemagick.org/Usage/files/#mpc
"However the file does not need to be 'read in' or 'decoded' but can be directly 'paged' into computer memory, and used exactly as-is, without any processing overhead. Only lots of disk space and disk IO. In other words it only needs disk access time to read, without any file format processing. That is no decoding of the data needed."
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T11:57:10-07:00
by killmoms
Yes, and I am currently using PPM; stream still seems to need to read the entire input file off the disk in order to extract from a given position, though.
It's the "lots of disk space and I/O" that I'm trying to fight—unless MPC allows for "paging in" only the portion identified by crop (without having to read any of the rest of the file) on open, in which case, maybe that would work? If that's the case, though, I'd need some help getting ImageMagick Q8 going, since the Q16 version I've got now takes 16.6MB per 1920x1080 8bpc frame, vs. the 6.2MB each binary PPM file takes.
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T12:17:28-07:00
by fmw42
I would test the speed in Q16 with MPC and see if that even works, before trying to install Q8.
As I recall you are on a Mac and using MacPorts. If you need to install Q8, I do not know how to do that with MacPorts, but I can show you how to use all your delegates from MacPorts and install IM from source so that you can configure however you want. Let me know if you want to do this.
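The rough shape of it, just as a sketch (the tarball version and install prefix are placeholders, and the CPPFLAGS/LDFLAGS settings point configure at the MacPorts delegate libraries under /opt/local):
Code: Select all
# Sketch of a from-source Q8 build; version and prefix are placeholders.
tar xvzf ImageMagick-6.8.9-1.tar.gz
cd ImageMagick-6.8.9-1
./configure CPPFLAGS=-I/opt/local/include LDFLAGS=-L/opt/local/lib \
    --with-quantum-depth=8 --prefix=/usr/local
make
sudo make install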
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T13:30:47-07:00
by killmoms
No dice—using convert -crop (and outputting to miff on stdout) on a bunch of MPC files just results in reading them completely off the disk as well.
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T15:16:28-07:00
by fmw42
killmoms wrote:No dice—using convert -crop (and outputting to miff on stdout) on a bunch of MPC files just results in reading them completely off the disk as well.
It should be faster if you have already converted your images to MPC and have them saved. I hope you were not converting them in your loop to MPC each time you loop over them.
The whole point of MPC is to have little overhead reading them multiple times once you have them saved in MPC format.
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T15:28:40-07:00
by killmoms
Yes, I had pre-converted them. But no, it was not faster—it was slower, since it was having to read more data in. Unless using convert -crop to load them is incorrect, and there's some other operation I should be using that's more efficient. The thing is, I'm not loading one image over and over: I'm loading one, cropping out a single row of pixels, and then moving on to the next one. On the next loop through I'm coming back and starting over. That's why I was hoping for a format/method that didn't have to read the entire file off disk to pull a specific row of pixels, but could instead jump to an offset and read that row from disk directly.
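For what it's worth, a binary PPM is simple enough that the offset can be computed by hand, so a single row can be read without decoding the rest of the file. A rough sketch, assuming 8-bit P6 files with the usual three-line header (which is what ffmpeg writes); the path, width and row number are placeholders:
Code: Select all
# Sketch: read one row straight out of a binary (P6) PPM by byte offset.
f=tempcrops/f_00001.ppm
width=1080
row=42
header_len=$(head -n 3 "$f" | wc -c)           # "P6\n<w> <h>\n255\n"
offset=$(( header_len + row * width * 3 ))     # 3 bytes per pixel after the header
tail -c +$(( offset + 1 )) "$f" | head -c $(( width * 3 )) > row.rgb
convert -depth 8 -size ${width}x1 rgb:row.rgb row.png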
Re: Speed up “movie barcode” generation w/ IM
Posted: 2014-07-12T16:11:54-07:00
by fmw42
Was it faster than starting with PPM or PNG or whatever your original format was for the same code? It should be. That was the point of MPC.