snibgo wrote:One approach: for this example, we need to examine horizontal positions only. Try overlap with offsets +1, +2, ... +width-1. At each position, "-composite difference". The best match is where the result is darkest.
Hmmm, that sounds more like a sub-image search of the image against itself!
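snibgo's offset-difference idea can be sketched in a few lines of pure Python (hypothetical 1D grayscale data, not ImageMagick itself): shift the row against itself by each offset, average the absolute differences over the overlap, and the offset with the smallest ("darkest") score is the tile period. On ties the smallest offset wins.

```python
def best_offset(row):
    """Return the shift with the lowest mean absolute difference."""
    width = len(row)
    scores = {}
    for off in range(1, width):
        overlap = width - off
        diff = sum(abs(row[i] - row[i + off]) for i in range(overlap))
        scores[off] = diff / overlap
    # min() keeps the first (smallest) offset among equal scores
    return min(scores, key=scores.get)

# A row tiled with period 4: the best match should be at offset 4.
row = [10, 200, 30, 40] * 5
print(best_offset(row))  # expect 4
```

Note that larger multiples of the period (8, 12, ...) also score zero here, which is why taking the smallest winning offset matters.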
Assuming the tile pattern is smaller than 50% of the input image (it would be pretty bad if it wasn't)...
Code:
convert "Castillo 001.gif" \( +clone -crop 50%x+0+0 \) miff:- |
compare -subimage-search - miff:- |
convert - -delete 0 tile_search_result.png
The first command just extracts a quarter of the image. That crop is then compared at every possible position within the original, producing a similarity map. The last command deletes the first output image and saves the second (the map).
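What "compare -subimage-search" does can be sketched in pure Python on tiny grayscale lists-of-lists (hypothetical data, not the real ImageMagick implementation): slide the sub-image over the large one, record the RMSE at every offset, and the smallest value marks the match.

```python
import math

def subimage_search(big, small):
    """Brute-force search: return (best (x, y) offset, RMSE there)."""
    bh, bw = len(big), len(big[0])
    sh, sw = len(small), len(small[0])
    best, best_pos = float("inf"), None
    for y in range(bh - sh + 1):
        for x in range(bw - sw + 1):
            sq = sum((big[y + j][x + i] - small[j][i]) ** 2
                     for j in range(sh) for i in range(sw))
            rmse = math.sqrt(sq / (sh * sw))
            if rmse < best:
                best, best_pos = rmse, (x, y)
    return best_pos, best

big = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
small = [[9, 8],
         [7, 6]]
print(subimage_search(big, small))  # expect ((1, 1), 0.0)
```

The nested loops are also why the real thing is so slow: the cost is roughly (search area) x (sub-image area) per channel.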
Resulting in...
NOTE: This is VERY slow. But even so, it completed in about 11.5 seconds on my 8-core machine!
WARNING: for some reason the resulting image contains random transparency! This is a bug, as the grayscale map should not contain any transparency. I re-processed the above image to simply turn off the alpha channel.
The result is also not just a simple difference of grayscale values, but the length of the color-difference vector (RMSE), so it should work well even for images with little grayscale variance.
You can get the same result with "composite Difference" by taking the difference of the individual color channels, then squaring, adding, and square-rooting them.
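The per-channel recipe above, for a single pair of RGB pixels, is just the Euclidean length of the channel-difference vector. A minimal pure-Python sketch (hypothetical pixel values):

```python
import math

def color_distance(p1, p2):
    """Difference each channel, square, add, square-root."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Pure red against black: the full 255 in one channel, zero in the rest.
print(color_distance((255, 0, 0), (0, 0, 0)))  # expect 255.0
```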
For large image compares, an FFT convolution would be better. Fred, can you provide an equivalent FFT example?
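Why an FFT helps can be shown in 1D with a pure-Python radix-2 FFT (an illustration of the principle, not Fred's script, and all data here is made up): cross-correlation in the spatial domain becomes a per-element product of spectra, so the matching cost drops from O(n^2) to O(n log n).

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def cross_correlate(signal, pattern):
    """Circular cross-correlation: IFFT(FFT(signal) * conj(FFT(pattern)))."""
    n = len(signal)
    fs = fft([complex(x) for x in signal])
    fp = fft([complex(x) for x in pattern])
    prod = [a * b.conjugate() for a, b in zip(fs, fp)]
    inv = fft(prod, invert=True)
    return [v.real / n for v in inv]  # 1/n normalises the inverse FFT

signal = [0, 0, 0, 5, 6, 7, 0, 0]
pattern = [5, 6, 7, 0, 0, 0, 0, 0]  # zero-padded to the signal length
scores = cross_correlate(signal, pattern)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # expect 3: the pattern lies at offset 3 in the signal
```

Note this finds the brightest correlation peak rather than the darkest difference; for a 2D image the same trick applies per row and column.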
One point: this determination only found the size and relative offsets of the tile pattern. It does NOT try to center the main (high-entropy) part of the pattern within the tile, so that the low-entropy areas (background areas) fall along the borders of the tile.
Extracting the tile appropriately would be the next step, as would any rotation correction and edge blending.
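That re-centering step could be sketched like this in 1D (pure Python, hypothetical data, and using the absolute slope at the cut points as a crude stand-in for entropy): the period only fixes the tile size, not where to cut it, so pick the crop phase whose borders land on the flattest pixels, keeping the busy content in the middle of the tile.

```python
def best_phase(row, period):
    """Phase whose tile borders sit on the least-varying pixels."""
    def edge_activity(phase):
        # absolute slope at the two cut points of the tile (circular)
        left = abs(row[phase % len(row)] - row[(phase - 1) % len(row)])
        right = abs(row[(phase + period) % len(row)]
                    - row[(phase + period - 1) % len(row)])
        return left + right
    return min(range(period), key=edge_activity)

# Background value 1 with a busy blob inside each 8-pixel tile:
# cutting at phase 0 keeps the blob away from the tile borders.
row = [1, 1, 9, 5, 9, 1, 1, 1] * 3
print(best_phase(row, 8))  # expect 0
```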
Hmmm.... Do I hear a script in the making here... Fred?