
Optimizing Similar Progressive Images?

Posted: 2007-09-26T19:26:11-07:00
by XeonXT
I would like to optimize the streaming of pictures over an internet connection for maximum speed and lowest bandwidth usage. Many of these images will be similar. I cannot simply compress them all into an archive to save space, because they will be streamed. I was wondering...would it be possible to save bandwidth by sending only an image file that contains the difference between the current and last images, so that the receiving end could simply combine the last image and the difference to get the new one?

I'm thinking that maybe the compare command could be used to do this, but I'm not quite sure how. From the tests I have done, compare generated an image that was almost as large as the original...it included a faint background of the original image and highlighted the changed places in red, which really did me no good.
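For reference, the comparison I tried was along these lines (file names are placeholders):

Code:

:: compare's default output is a visualization: changed pixels highlighted
:: in red over a faded copy of the first image, not a recombinable delta
compare.exe dcmp1.jpg dcmp2.jpg difference.jpg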

So, to put it simply, is it possible to achieve this:

Difference between ImageA and ImageB -> ImageC
Use ImageA and ImageC to reconstruct -> ImageB

Generally, I want ImageC to be smaller than ImageB (else it would be pointless to do this).

Thanks for your help! Awesome program by the way!

Re: Optimizing Similar Progressive Images?

Posted: 2007-09-26T19:54:40-07:00
by magick
Most of what you describe is accomplished with the MPEG encoder. Try converting your images to MPEG, stream the MPEG over, and then have the receiver convert the images back to the original format. If you want a lossless method, you could try a difference operator; the resulting image should be highly compressible if the original images were similar.
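Roughly like this (file names are placeholders, and the MPEG route assumes ImageMagick was built with an MPEG delegate):

Code:

:: Lossy route: encode the whole frame sequence as one MPEG stream
convert.exe frame*.jpg stream.mpg

:: Lossless route: the difference image is mostly black when frames are
:: similar, so it compresses well; PNG avoids any further loss
convert.exe frameA.png frameB.png -compose difference -composite diff.png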

Re: Optimizing Similar Progressive Images?

Posted: 2007-09-26T21:09:14-07:00
by XeonXT
magick wrote:Most of what you describe is accomplished with the MPEG encoder. Try converting your images to MPEG, stream the MPEG over, and then have the receiver convert the images back to the original format. If you want a lossless method, you could try a difference operator; the resulting image should be highly compressible if the original images were similar.
I'm afraid I don't follow. I thought MPEG was a video format? Since I need to stream single images, not video, I'm not sure I really want to deal with that kind of conversion. Please understand that this optimization will probably only shave half a second at best off my operation...my programs are already extremely optimized, but I am striving for perfection.

This second method you speak of...can ImageMagick do this as well? Or are you talking about something I would need to integrate into my native code to find the difference?

Re: Optimizing Similar Progressive Images?

Posted: 2007-09-26T21:21:39-07:00
by XeonXT
OK, after doing some more research, I've found what you meant by the difference operator. I'm having a bit of trouble implementing it, though:

Code:

:: First get the difference. This works fine.
convert.exe dcmp2.jpg dcmp1.jpg -compose difference -composite difference.jpg

:: What operation combines the difference and the first image?
convert.exe dcmp1.jpg difference.jpg -compose screen -composite recon.jpg
So I don't really know what I should be using to reconstruct the image. Doing another difference didn't work...screen is about the closest I've gotten to accuracy, but it still isn't perfect.
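An aside the thread doesn't spell out: -compose difference keeps only the absolute value of each change, so the sign is lost and no single compose operator can exactly undo it (JPEG's lossiness doesn't help either). One sign-preserving variation, sketched here with placeholder file names and lossless PNG, is to split the delta into two clamped halves with Minus and rebuild with Plus and Minus:

Code:

:: neg = max(A - B, 0) and pos = max(B - A, 0); Minus subtracts the
:: second image from the first, clamping negative values to zero
convert.exe dcmp1.png dcmp2.png -compose Minus -composite neg.png
convert.exe dcmp2.png dcmp1.png -compose Minus -composite pos.png

:: Exact reconstruction: B = A + pos - neg
convert.exe dcmp1.png pos.png -compose Plus -composite tmp.png
convert.exe tmp.png neg.png -compose Minus -composite recon.png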

Thanks for your help!

Re: Optimizing Similar Progressive Images?

Posted: 2007-09-27T18:18:11-07:00
by anthony
Take a look at the GIF Animation Basics page in IM Examples.

There are two optimizations that can be done from one image to the next.

First, you can use the new -compose ChangeMask
(see http://www.imagemagick.org/Usage/compose/#changemask)
to make transparent any pixel that does not change the 'current' image by more than the current -fuzz setting.

That will make everything that is unchanging all the same color (transparent), allowing better compression.

Second, you can -trim the resulting image to produce an image with an 'offset', so as to reduce the size of the image overlay and thus the overall amount of data for the next image.
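Put together, generating one update frame might look something like this (a sketch; the fuzz value and file names are placeholders, and PNG is used so the transparency and the trim offset survive):

Code:

:: Make every pixel that matches the previous frame (within -fuzz)
:: transparent, then trim to the changed region, keeping its offset
convert.exe curr.png prev.png -fuzz 2% -compose ChangeMask -composite -trim changes.png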

Both of these are aspects of normal GIF animations, which are actually very 'stream'-like within the image file format. You don't have to follow the GIF format with its restrictions, but you could send one image at a time in the same manner as GIF animation is handled.

NOTE: there are some GOOD optimizations that can be used, but they require you to not display the new image while a 'zero' delay is in effect. That is, the next frame is really an optimized composite of two or three individual image updates.

I would read up on and understand GIF animation, then proceed from there, sending one image at a time with timing and disposal methods.

Re: Optimizing Similar Progressive Images?

Posted: 2007-09-27T19:06:38-07:00
by XeonXT
Thanks for the help! However, my question still stands:

It seems that using the difference mask would be a great way to stream...is there not a way that I can reverse the mask to reconstruct my image once I send the difference image over? Please read my last post and maybe that will explain it a bit more.

In the meantime I will be researching GIF animation as you suggested :)

Re: Optimizing Similar Progressive Images?

Posted: 2007-09-27T19:24:26-07:00
by anthony
In image reconstruction you always have the 'displayed image', and you will have just received the changes to that image. Just overlay them (according to whatever disposal method you have implemented).

Research GIF and you will get the idea. Especially look at -coalesce, which basically recreates the displayed frames from the possibly incremental changes.
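For example (placeholder file names; -flatten respects the saved page offset of the update image):

Code:

:: Overlay the received changes, at their stored offset, onto the
:: currently displayed frame (Over is the default compose method)
convert.exe displayed.png changes.png -flatten newframe.png

:: For a whole GIF animation, -coalesce performs the same
:: reconstruction for every frame at once
convert.exe anim.gif -coalesce frames_%d.png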