Browser real-time transformations
Posted: 2019-08-13T10:25:02-07:00
Hello, I just want to share some work I've been doing transforming images in web pages as fast as possible, to see whether a "real-time" effect can be achieved while users hover the mouse over images, or when the transform acts as a webcam video filter. IM exceeded my expectations.
* demo: https://cancerberosgx.github.io/demos/magica/canvas/ (for some reason, in Chrome, the barrel example needs to be executed approximately 100 times before it gets faster)
* demo video for the impatient: https://cancerberosgx.github.io/demos/m ... s-demo.mp4
* Tested on Chrome and Firefox.
* built with my library (a WebAssembly port): https://github.com/cancerberoSgx/magica
* although it works, the video filter isn't smooth yet; I'm adding web worker support for that
* if you want to play more with IM in the browser, the library's playground could be interesting too: https://cancerberosgx.github.io/demos/m ... layground/
What I'm currently doing is:
* keeping the input images as .miff (since I believe it is the fastest format for read operations)
* transforming to .rgba output, because then I can write the content directly into an HTML canvas (ImageData) without any further processing
* using -verbose and parsing stdout to get the output size, which can change (see the sketch right after this list)
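Roughly, the flow looks like this. It's only a sketch: runIM is a placeholder (injected as a parameter) for the actual WASM command call, not the library's real API, and the -verbose parsing is simplified.

Code:
// Sketch of the current flow; runIM is hypothetical, not the real magica API.
interface IMFile { name: string; content: Uint8Array }
type RunIM = (command: string, inputFiles: IMFile[]) =>
  Promise<{ outputFiles: IMFile[]; verbose: string }>;

async function renderToCanvas(runIM: RunIM, miff: Uint8Array, canvas: HTMLCanvasElement) {
  // Input stays in MIFF (fast to decode); output is raw RGBA so the bytes can
  // be wrapped in an ImageData with no further decoding step.
  const { outputFiles, verbose } = await runIM(
    'convert -verbose input.miff -wave 5x64 -depth 8 output.rgba',
    [{ name: 'input.miff', content: miff }],
  );

  // -verbose prints one info line per image including its geometry (e.g.
  // "... MIFF 320x240 320x240+0+0 ..."); take the last WxH since operators
  // like -wave can change the output size. Exact line format varies.
  const geometries = verbose.match(/\d+x\d+/g);
  if (!geometries) throw new Error('could not find output geometry in -verbose output');
  const [w, h] = geometries[geometries.length - 1].split('x').map(Number);

  // The raw RGBA bytes go straight into the canvas.
  const rgba = outputFiles.find(f => f.name === 'output.rgba')!.content;
  const imageData = new ImageData(new Uint8ClampedArray(rgba.buffer, rgba.byteOffset, w * h * 4), w, h);
  canvas.width = w;
  canvas.height = h;
  canvas.getContext('2d')!.putImageData(imageData, 0, 0);
}

The point of -depth 8 plus the rgba: output is that the bytes already match the canvas ImageData layout, so there is no decode step on the JS side.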
These are the current library's IM build capabilities/features (it's built mostly from the ImageMagick GitHub master branch):
Code:
Version: ImageMagick 7.0.8-60 Q16 x86_64 2019-08-06 https://imagemagick.org
Copyright: © 1999-2019 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher HDRI
Delegates (built-in): fftw freetype jng jp2 jpeg lcms png raw tiff webp zlib
Ideas I will try:
* try with quantum depth 8 and no HDRI
* try with minimal features/delegates; maybe replace zlib with zstd?
* try with MPC instead of MIFF (is there anything faster usable from the CLI?)
* use web workers, which I currently don't (see the worker sketch after this list)
* Note that this library uses the ImageMagick command-line interface rather than the C/C++ APIs, which would probably be faster. I don't think the overhead is currently significant, since the input file is written only once, in memory, using ArrayBuffers, which are optimal for this and for transfer to web workers. However, I wonder whether the ImageMagick utilities (convert) read input files optimally, or whether they do some kind of buffering/preprocessing that makes sense for hard drives and large images but not for small in-memory images(?)
* There could probably be some low-level hacks, like updating only the dirty region of the canvas (see the putImageData sketch after this list) or advanced Emscripten/library compile settings/configuration. Nevertheless, I'm not interested in micro-tuning the performance so much as in the theory behind the commands, and in validating with advanced users whether my approach (MIFF, RGBA, commands) is optimal.
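This is roughly what I have in mind for the worker setup. It's only a sketch, not code from the library; runConvertInWorker is a placeholder for whatever the worker ends up running internally.

Code:
// main thread: post the MIFF bytes to a worker as a transferable ArrayBuffer
// (moved, not copied) and get the raw RGBA frame back the same way.
const imWorker = new Worker('im-worker.js');

function transformInWorker(miff: Uint8Array): Promise<ImageData> {
  return new Promise(resolve => {
    imWorker.onmessage = (e: MessageEvent) => {
      const { rgba, width, height } = e.data as { rgba: ArrayBuffer; width: number; height: number };
      resolve(new ImageData(new Uint8ClampedArray(rgba), width, height));
    };
    // The second argument lists transferables, so the buffer changes
    // ownership instead of being structured-cloned.
    imWorker.postMessage({ miff: miff.buffer }, [miff.buffer]);
  });
}

// im-worker.js (runs inside the worker):
//   self.onmessage = async e => {
//     const { rgba, width, height } = await runConvertInWorker(new Uint8Array(e.data.miff));
//     self.postMessage({ rgba: rgba.buffer, width, height }, [rgba.buffer]);
//   };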
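And the dirty-region idea would just use the optional dirty rectangle of putImageData, something like:

Code:
// Repaint only the sub-rectangle that actually changed, instead of the
// whole frame.
function paintDirtyRegion(
  ctx: CanvasRenderingContext2D,
  frame: ImageData,                                  // full RGBA frame from IM
  dirty: { x: number; y: number; w: number; h: number },
) {
  // dx/dy = 0,0 keeps the frame aligned with the canvas; the last four
  // arguments restrict the copy to the dirty rectangle.
  ctx.putImageData(frame, 0, 0, dirty.x, dirty.y, dirty.w, dirty.h);
}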
If anybody has suggestions for increasing speed, or other ideas, they are most welcome! Thanks.