Re: Extract a region of a huge jpeg
Posted: 2011-03-13T03:12:07-07:00
by alex88
anthony wrote:
Code:
jpegtran -crop 100x100+123+425 -copy none huge.jpeg crop.jpg
convert crop.jpg -gravity SouthEast -crop 100x100+0+0 +repage crop_fixed.png
Are the same arguments used for jpegcrop?
If so that may be the better suggestion, so as to avoid confusion with the multiple versions of jpegtran.
I've linked the jpegtran version I used. By the way, why convert to PNG afterwards? Also, jpegtran uses about 600 MB of RAM cropping that 12000x12000 image, which is too much; in fact I get out-of-memory errors when trying to crop the 30000x30000 one, and also another picture of the same size that I made myself in Photoshop.
anthony wrote:The image...
http://rvvs89.ucc.asn.au/stuff/huge/huge4.jpg doesn't work with Firefox either
It reports... The Image {...} cannot be displayed, because it contains errors.
Perhaps it is again because it is too big! But it is hard to say for certain.
It has some problems, yeah, but I've also linked a normal gradient image made with Photoshop.
http://www.megaupload.com/?d=9ZI5UNEQ
anthony wrote:Hmmm.. skipping will be the way to go for handling HUGE images. However it does mean the image must then be disk based and cannot be coming from a network stream or pipeline. That is the tradeoff. It may be why the previous commands still read the whole file, even if they are not storing it all in memory.
Well, sure, I was talking about HDD-stored images.
anthony wrote:Question: are the original JPEG Club people still active? Those pages and utilities are actually quite old. If they are active, perhaps they would like to help.
It seems to be active, or at least
http://www.ijg.org/ is active; the last version of their software is from 16-Jan-2011, so it's pretty up to date. I've sent an email to Guido Vollbeding asking if it is possible. Also check out my comment
here, where I asked:
Do you think it is possible to just take a part of the file and decode that?
and his reply:
In general, no, it is not possible to decode just part of a file (unless restart markers are inserted in the stream).
But there are only eight possible restart marker codes (RST0-RST7), so it's not so useful.
Re: Extract a region of a huge jpeg
Posted: 2011-03-13T04:47:20-07:00
by anthony
That is a shame. So regardless, you need to decode the compressed stream just to find the right blocks to extract.
As for why I have IM convert to PNG: the image generated by jpegtran may not be exactly the size requested, because of the shift in the top-left crop boundary I would assume there are some extra pixels, so I use IM to do the final crop to the exact size wanted. As that step requires actually decoding the frequency spectrum (where the 'lossy compression' occurs), I did not want to save back to JPEG and incur a second round of lossy compression.
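The block-boundary issue behind the two-step crop can be made concrete. A lossless JPEG cropper has to move the crop origin back to an MCU boundary (8 or 16 pixels depending on chroma subsampling), which leaves surplus pixels on the left and top. A small Python sketch, assuming a 16-pixel MCU (an illustration, not jpegtran's actual code):

```python
def aligned_crop(x, y, w, h, mcu=16):
    """Round the crop origin down to the nearest MCU boundary, the way
    a lossless JPEG cropper must, and return both the enlarged crop
    and the surplus pixels left over on the left and top."""
    ax, ay = (x // mcu) * mcu, (y // mcu) * mcu
    extra_x, extra_y = x - ax, y - ay   # surplus pixels on the left/top
    return (ax, ay, w + extra_x, h + extra_y), (extra_x, extra_y)
```

For the thread's 100x100+123+425 request this gives a 111x109 crop at +112+416, and the `-gravity SouthEast` second crop then trims exactly those 11 and 9 surplus pixels.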
Hmmm.. it would be nice if stream could be modified to extract multiple tile-crops from a single large image. That way you only need to go through the image once to get all the tiles. Of course stream would then need to keep a whole row of output tiles open simultaneously, but doing it in one stream pass would be a major time saving!
Also, I have not seen a stream method for taking a whole set of tiles (or merging just part of one) so as to re-form the original large image.
Re: Extract a region of a huge jpeg
Posted: 2011-03-13T04:52:28-07:00
by alex88
Yeah, but that won't be done in a parallel way; that's my problem. :/ Hope Guido responds fast.
Re: Extract a region of a huge jpeg
Posted: 2011-03-15T01:26:19-07:00
by alex88
This is my conversation with Guido from the JPEG group:
me wrote:Hi, sorry for the direct mail.
I'm trying to crop (extract) a part of a big image with minimal resource usage. I've tried lots of solutions but haven't found one that works.
I've started my tests with a 12000x12000 pixel image,
http://goes.gsfc.nasa.gov/pub/goes/0809 ... poster.jpg
My Java program slices the image into portions of a predetermined size, and each slice takes 200 ms at the beginning (top left of the image) but up to 15 seconds towards the bottom right.
So I've tried
http://jpegclub.org/jpegtran/jpegtran.exe with this command line:
.\jpegtran.exe -crop 100x100+0+0 .\080913.ike.poster.jpg output.jpg
and it works in 1251 ms, which is nice!
But when I switch to a 30000x30000 image
http://rvvs89.ucc.asn.au/stuff/huge/huge4.jpg (I should have it working with even larger images, around 100k x 100k px) it gives:
Insufficient memory (case 4)
even if I set maxmemory to 500000.
Also, with the 12000x12000 image it uses up to 600 MB of RAM, as if it decodes the whole image just to grab a portion of it. Is there a better way to do that?
Waiting for a response,
Best Regards
Guido wrote:Yes, jpegtran works this way by decoding the whole image into
memory. It is not optimized for such huge images.
There is a possibility to use temporary files on storage instead of
RAM by selecting an appropriate memory manager module when building
the library (e.g., jmemansi.c instead of jmemnobs.c).
This might help to avoid insufficient memory cases, but will cost
run time and storage space.
Kind regards
Guido Vollbeding
Organizer Independent JPEG Group
me wrote:Thank you for the fast answer.
So I think that way will still take many seconds to run, like the stream command from the ImageMagick bundle.
I've read something about the Huffman tables and how JPEG is encoded, but I'm not sure about one thing.
Is it possible to skip directly to the start of a region I want to extract (a multiple of 8, if that makes it easier) and decode only what I'm interested in extracting?
Regards
Guido wrote:There are many possibilities.
The more you want it optimized, the more you need to tweak the code.
The simplest case: in decoding you can skip over the outside region with jpeg_read_scanlines() without storing it into a huge memory array.
The IDCT is still done, so it doesn't save time, but it can save you memory.
Doing the IDCT could be avoided by tweaking the code, but the Huffman decode still needs to be done to some extent (similar to the downscaling case).
Transcoding for this purpose is harder to tweak because there are currently only interface functions for full-image coefficient-array decode/encode (jpeg_read_coefficients()/jpeg_write_coefficients()).
So there are possibilities, but you probably have to dig fairly deep into the code for adaptation.
Regards
Guido
Seems possible; what do you guys think?
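Guido's simplest option, skipping scanlines, can be modeled without libjpeg. Here `scanlines` stands in for successive jpeg_read_scanlines() calls; this is an illustrative Python sketch, not libjpeg code:

```python
def crop_by_skipping(scanlines, x, y, w, h):
    """Keep only the wanted window from a stream of decoded rows.
    Every row above the region is still decoded (the IDCT cost Guido
    mentions is unchanged), but peak memory drops from the whole
    image to just h cropped rows."""
    out = []
    for row_no, row in enumerate(scanlines):
        if row_no >= y + h:
            break                        # nothing below the region is needed
        if row_no >= y:
            out.append(row[x:x + w])     # store only the wanted columns
    return out
```

Because the loop breaks after the last wanted row, rows below the region are never even pulled from the decoder, which also saves some decode time for crops near the top of the image.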
Re: Extract a region of a huge jpeg
Posted: 2011-03-15T03:24:36-07:00
by anthony
It seems possible, but as mentioned you will probably need to decode the whole stream to get to the bottom region wanted, even if you don't store everything.
This is why I think a JPEG decoder that splits the JPEG image into multiple tiles (with each tile division on 8- or 16-pixel block boundaries) is the way to go. The stream is decoded once, and when finished you have the image in smaller, manageable tiles.
To get a specific region you only need to montage and crop the appropriate set of tiles.
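Once the image exists as tiles, picking the set to montage is simple index arithmetic; a sketch assuming a uniform tile grid (names are hypothetical):

```python
def tiles_for_region(x, y, w, h, tile_w, tile_h):
    """Return the inclusive tile-index rectangle covering the requested
    region, plus the region's offset inside the montage of those tiles
    (for the final exact crop)."""
    tx0, ty0 = x // tile_w, y // tile_h                  # top-left tile
    tx1, ty1 = (x + w - 1) // tile_w, (y + h - 1) // tile_h  # bottom-right tile
    return (tx0, ty0, tx1, ty1), (x - tx0 * tile_w, y - ty0 * tile_h)
```

With 256x256 tiles, the thread's 100x100+123+425 region needs only the two tiles (0,1) and (0,2), and the final crop is 100x100 at offset +123+169 inside their montage.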
However, as Guido mentioned, there is probably no existing utility to do this at this time.
Also, any such utility would need to write to multiple output files simultaneously, probably one for each tile in the image. Also, at the start of each tile you would need to duplicate the appropriate header information, and at the end of every tile the appropriate footer (post-image-data) information found in the original image.
It would however be a very, very useful utility, especially if its reverse (a tile joiner) could also be created.
QUESTION: how did a 30000x30000 pixel JPEG image get created in the first place, if most programs run out of memory?
Re: Extract a region of a huge jpeg
Posted: 2011-03-15T05:27:22-07:00
by alex88
anthony wrote:QUESTION: how did a 30000x30000 pixel JPEG image get created in the first place, if most programs run out of memory?
I made it with Photoshop, quite smoothly too. They probably have better memory management.