Bit depth for 32 bit TIFF?

Questions and postings pertaining to the usage of ImageMagick regardless of the interface. This includes the command-line utilities, as well as the C and C++ APIs. Usage questions are like "How do I use ImageMagick to create drop shadows?".
mikjsmith
Posts: 8
Joined: 2016-04-02T23:46:59-07:00

Bit depth for 32 bit TIFF?

Post by mikjsmith »

Hi

I have three RGB 16-bit TIFFs, each of which I converted to 32 bit with

..\convert 02.tif -depth 32 test3202.tif

I then used evaluate-sequence to take the mean of each pixel across the set of three:

..\convert *.tif -evaluate-sequence mean test.tif

I then ran the same evaluation on the three original 16-bit TIFFs, and finally looked at a single pixel in each of the two resulting files:

..\convert test.tif -format '%[pixel:p{40,30}]' info:-

Both showed *exactly* the same floating-point values. Whilst I think IM normalises between 0 and 1, I was expecting to see a greater number of significant figures.

Is there any way of showing the exact value stored in a pixel? And does my workflow make sense in terms of an expected increase in the significant figures and so bit depth?

thanks

mike
snibgo
Posts: 12159
Joined: 2010-01-23T23:01:33-07:00
Location: England, UK

Re: Bit depth for 32 bit TIFF?

Post by snibgo »

The default precision is 6 significant digits, so you won't see a difference between depth 16 and 32. Try "-precision 20".
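The effect of the default precision can be sketched in plain Python (this is an illustration of the formatting, not ImageMagick itself): two normalised channel values that differ only beyond 16-bit depth print identically at 6 significant digits but diverge at 20.

```python
# Sketch (plain Python, not ImageMagick): 6 significant digits cannot
# distinguish values that differ only at greater-than-16-bit depth.
q16 = 12345 / 65535.0   # a channel value quantised to 16 bits, normalised 0..1
q32 = q16 + 3e-9        # a nearby value representable only at higher depth

print(f"{q16:.6g}")     # 6 digits: both values print the same
print(f"{q32:.6g}")
print(f"{q16:.20g}")    # 20 digits: the difference shows
print(f"{q32:.20g}")
```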

Please read the IMPORTANT: Please Read This FIRST Before Posting thread.

Are you using Q16 or Q32? HDRI?
snibgo's IM pages: im.snibgo.com
mikjsmith
Posts: 8
Joined: 2016-04-02T23:46:59-07:00

Re: Bit depth for 32 bit TIFF?

Post by mikjsmith »

Apologies for missing that. Version:
Version: ImageMagick 6.9.1-2 Q16 x86 2015-04-14

Will this have an impact?

For this application the 16-bit TIFFs have come from PTGui (which aligned the three images but did *not* create an HDR). The identify command reports them as 16 bit; after conversion they are reported as 32 bit.

After the evaluate-sequence, changing the precision to 20 reports more significant figures, but again the 16- and 32-bit versions are exactly the same. Can I get at the underlying pixel values themselves to double-check? I'm guessing the problem lies elsewhere.
snibgo
Posts: 12159
Joined: 2010-01-23T23:01:33-07:00
Location: England, UK

Re: Bit depth for 32 bit TIFF?

Post by snibgo »

"-precision" sets the number of significant digits in text output, for display. It doesn't affect the underlying calculations.

You are using Q16. You haven't said whether HDRI, so I'll assume not. This means that pixels are stored in memory using a 16-bit integer for each channel, for each pixel. Calculations done on the values are more precise, but the results are rounded to the nearest 16-bit integer.

You can then write the files as 32-bit integers, but that can't recover lost precision. If you want 32-bit precision, get Q32.
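The rounding described above can be sketched in plain Python (an illustration of the assumed Q16 behaviour, not ImageMagick internals). The mean is computed in floating point, but storing it as a 16-bit integer discards the fraction; rescaling that stored value to 32-bit depth afterwards recovers nothing, since (2^32 - 1) / (2^16 - 1) = 65537 exactly.

```python
# Sketch of Q16 rounding (plain Python, not ImageMagick internals).
samples = [10000, 10001, 10001]      # three 16-bit channel values

true_mean = sum(samples) / 3         # computed in floating point
stored_q16 = round(true_mean)        # but stored as a 16-bit integer

# Writing at depth 32 just rescales the already-rounded value:
# (2**32 - 1) // (2**16 - 1) == 65537, so 16-bit v maps to v * 65537.
as_32bit = stored_q16 * 65537

print(true_mean)                     # the fractional mean
print(stored_q16)                    # the fraction is already gone
print(as_32bit / (2**32 - 1))        # same normalised value as stored_q16/65535
```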

For the exact values stored in pixel channels, I use a process module called "dumpimage". See my "Process modules" page.
snibgo's IM pages: im.snibgo.com
mikjsmith
Posts: 8
Joined: 2016-04-02T23:46:59-07:00

Re: Bit depth for 32 bit TIFF?

Post by mikjsmith »

Thanks very much - quick learning curve here! Q32 it is - I'll switch to Linux.

I don't have HDRI enabled (just read up on that) - how does this change processing? I assume that with a 32-bit TIFF (under Q32) it becomes a 96-bit RGB image, with calculations done as floating point. What is different when HDRI is enabled?

Thanks so much for the help. Will also look at dumpimage.

cheers

mike
snibgo
Posts: 12159
Joined: 2010-01-23T23:01:33-07:00
Location: England, UK

Re: Bit depth for 32 bit TIFF?

Post by snibgo »

Whether or not you have HDRI, most calculations are performed with floating point.

With HDRI, pixel values are also stored as floating point, so they are not rounded to integers. There may still be rounding, as calculations may use a more precise floating-point type than is used for storage. The last time I checked, the HDRI varieties of Q8, Q16, Q32 and Q64 used storage of 4 bytes, 4 bytes, 8 bytes and 16 bytes respectively. Q64 builds are always HDRI. The other Q-numbers can be built as either integer or HDRI.

HDRI and larger Q-numbers do slow down processing. By how much? It varies according to what processing is done. For "typical" processing, I reckon Q32 HDRI takes about 50% more time than the fastest build, which is Q8 integer.

Most of my own work is with Q16 integer, but displacement maps often need Q32. Sometimes I need HDRI.
snibgo's IM pages: im.snibgo.com