Thank you shuntcap!
I did a quick comparison between the stock camera app, Opencamera and your port of gcam. Needless to say, gcam blew the competition out of the water.
Stock camera and Opencamera photos look ~identical to me (at least on default settings).
One thing I noticed is that gcam photos are "only" 5.9 Mpix. And on some very fine details in good light conditions I can make out more information in the stock camera app's 23.8 Mpix photo. (Yes, I've read this thread and the discussion about the true capabilities of the Cosmo's Samsung-made camera sensor. But I was left with the impression that the stock app (API) is just doing something wrong and that more information could be fetched from the sensor, right?)
But generally most things are so much better in gcam's photos that I've switched to it completely (at least for now).
This is interesting, and I think it has to do with how companies can claim a camera is x megapixels when the sensor isn't actually capable of shooting at that resolution. I'm pretty vague on this - and I'm sure someone here can correct me if/where I'm wrong - but the way I understand it, the camera chip can "create" pixels to a fair degree of accuracy. For example, if the sensor picks up a red pixel next to another red pixel, the chip can create another red pixel between them with a reasonable assumption that it's accurate. Obviously things are more complicated when different-coloured pixels are next to each other, but I believe it uses some fancy algorithms to work out what pixel it should create. I think this is called interpolation. Gurus on this forum - am I correct or completely off the mark?
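To illustrate the "create a pixel between two known pixels" idea from the post above, here's a hypothetical little Python sketch. It only does the simplest possible thing (linear interpolation along a 1-D row of same-colour values); real camera pipelines work in 2-D across a Bayer colour mosaic (demosaicing) and use much fancier edge-aware algorithms, so treat this as the basic principle only, not what any actual camera chip does.

```python
# Toy sketch of interpolation: invent a new sample between each pair of
# neighbouring samples by averaging them. A camera chip's demosaicing
# does something conceptually similar, but in 2-D and far more cleverly.

def interpolate_row(samples):
    """Insert an averaged value between each pair of neighbouring samples."""
    out = []
    for left, right in zip(samples, samples[1:]):
        out.append(left)
        out.append((left + right) / 2)  # the "created" pixel
    out.append(samples[-1])
    return out

red_row = [200, 200, 100]        # two matching reds, then a darker one
print(interpolate_row(red_row))  # [200, 200.0, 200, 150.0, 100]
```

Notice that between the two identical reds the invented pixel is a safe bet (200), while between the mismatched values it can only guess a middle value (150) - which is roughly why fine detail can get smeared when resolution is "created" rather than captured.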