

I just ran some speed and quality tests on 4.4.1. I did overlays using the Difference blend mode in Photoshop and discovered that the two images with GPU Off were identical to each other, and the two images with GPU On were identical to each other.

The two with GPU Off had more contrast than both the original and the ones with GPU On. The histograms for the two with GPU On showed that their darkest blue values were shifted slightly brighter, but otherwise their shapes matched the original. The GPU On versions also had less micro detail, and some visible horizontal banding on the rock faces. Either version is outstandingly better (sharper, more detail) than simply enlarging the original 4x in Photoshop. The banding is passable in this case because it looks enough like rock layers (though its slope does not match the strata exactly), but an artifact this noticeable might ruin another image.

It seems odd that setting GPU memory to Medium ran much faster than the High memory setting.

I agree, but I've checked it several times, and that's how Gigapixel AI seems to behave: less memory is faster. I ran some speed checks on the new version of Sharpen AI, which now uses OpenVINO, and that behaved the way one would expect. It makes you wonder whether the GPU memory designations were mislabeled in either Gigapixel AI or Sharpen AI.

The two programs do handle things a bit differently from each other. Gigapixel AI lets you select both CPU and GPU at the same time; in Sharpen AI it's one or the other, but not both. In both programs the automatic selection of the optimum processing technique picks the CPU, even though that's not the fastest method on my system.
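The Difference-blend comparison and the histogram check described above can also be reproduced outside Photoshop. Here is a minimal sketch using Pillow and NumPy; the file paths and function names are hypothetical, not part of either Topaz product:

```python
# Sketch: verify two exported images are pixel-identical (equivalent to a
# Difference blend that comes out all black), and read the darkest blue value
# to spot the shadow shift seen in the GPU On histograms.
import numpy as np
from PIL import Image


def images_identical(path_a: str, path_b: str) -> bool:
    """True if every pixel matches, i.e. the per-pixel difference is all zero."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        return False
    return int(np.abs(a - b).max()) == 0


def darkest_blue(path: str) -> int:
    """Minimum value in the blue channel; a raised minimum suggests
    shadows shifted brighter, as observed with GPU On."""
    rgb = np.asarray(Image.open(path).convert("RGB"))
    return int(rgb[..., 2].min())
```

Running `images_identical` on the two GPU Off exports (and again on the two GPU On exports) should return `True` if the observation above holds, while comparing a GPU Off export against a GPU On export should return `False`.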
