I'm trying to use coreimage to investigate some image properties, and I'm having a little trouble figuring out the histogram function. I'm not sure whether the way it's implemented is a mistake or whether I'm just not seeing its value.
For instance, if I make a simple gradient and then look at the numbers produced by the histogram, they don't really add up:
coreimage = ximport("coreimage")
import Numeric

canvas = coreimage.canvas(1, 256)
# Make a gradient to get a fairly even distribution of values.
l = canvas.layer_linear_gradient()
# Get the pixels.
p = l.pixels()
hist = p.histogram()
# Histogram values from the red channel (assuming histogram()
# returns one sequence of bin counts per channel).
n = Numeric.array(hist[0], Numeric.Float)
# The values come back scaled by the largest bin:
by_max = n / n[Numeric.argmax(n)]
# whereas I expected fractions of the total pixel count:
by_total = n / len(p)
I think I just figured out what is going on. Dividing by the maximum bin count gives a different normalization of the histogram, which seems just as useful from a statistical point of view, but it was causing me problems since it wasn't obvious how to get back to the raw pixel counts.
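In case it helps anyone else, here's a minimal sketch of the workaround I'm using (the function name and the per-channel assumption are mine, not part of the library): since dividing by the maximum is just a uniform scale factor, renormalizing by the sum turns the values back into per-bin fractions, and multiplying by the pixel count recovers the raw numbers.

import Numeric

def counts_from_max_normalized(n, total_pixels):
    # n is one channel of a max-normalized histogram (largest bin == 1.0);
    # total_pixels is the number of pixels in the source image.
    n = Numeric.array(n, Numeric.Float)
    # Dividing by the max is only a scale factor, so dividing by the
    # sum gives the fraction of pixels falling in each bin...
    fractions = n / Numeric.sum(n)
    # ...and scaling by the pixel count gives back the raw counts.
    return fractions * total_pixels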
At the time I wrote the histogram function, I made it produce visually the same output as a histogram from Photoshop. Now that I look at the numbers again, the output is indeed difficult to interpret. There are some other issues as well (e.g. bin indices for a channel seem to go only up to 253), so I'm going to have to review this code.
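A quick way to check for the missing top bins, assuming you have the raw counts from above (the helper name is just for illustration): with a 1x256 linear gradient every red value from 0 to 255 should occur once, so every bin should be occupied.

def occupied_bins(counts):
    # Return the lowest and highest bin indices with a nonzero count.
    hits = [i for i, c in enumerate(counts) if c > 0]
    return hits[0], hits[-1]

# With the 1x256 gradient we'd expect (0, 255); getting (0, 253)
# would confirm that the top bins never get filled.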
Thanks for pointing it out.