Learning Color Constancy

Peer reviewed: 
Yes, item is peer reviewed.
Scholarly level: 
Faculty/Staff
Final version published as: 

Funt, B., Cardei, V., and Barnard, K., "Learning Color Constancy," Proc. IS&T/SID Fourth Color Imaging Conference: Color Science, Systems and Applications, Scottsdale, AZ, pp. 58-60. Nov. 1996.

Date created: 
1996-11
Abstract: 

We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illuminant from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting 'scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. We obtained surprisingly good estimates of the chromaticity of the ambient illumination from the network even when applied to scenes in our lab that were completely unrelated to the training data.
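The sketch below illustrates the general idea described in the abstract: synthetic scenes of 1 to 60 random reflectances under a random illuminant are generated, a simple chromaticity-based feature is computed from each scene, and a small neural network is trained to predict the illuminant chromaticity. It is a minimal illustration only; the diagonal illuminant model, the binarized rg-chromaticity histogram input, and the network size are assumptions for this example, not the authors' actual data or architecture.

# Hypothetical sketch of learning illuminant chromaticity from image data.
# The diagonal (von Kries) illuminant model, the 8x8 histogram feature, and
# the MLP configuration are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
BINS = 8  # coarse rg-chromaticity histogram fed to the network (assumption)

def random_scene(n_max=60):
    """Random 'scene': 1..n_max surfaces lit by one random illuminant."""
    n = rng.integers(1, n_max + 1)
    reflect = rng.uniform(0.05, 1.0, size=(n, 3))   # stand-in surface reflectances
    illum = rng.uniform(0.3, 1.0, size=3)           # stand-in illuminant response
    rgb = reflect * illum                           # diagonal illuminant model
    return rgb, illum

def chromaticity(rgb):
    """Convert RGB values to (r, g) chromaticities."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb[..., :2] / np.clip(s, 1e-9, None)

def features(rgb):
    """Binarized rg-chromaticity histogram summarizing the scene pixels."""
    rg = chromaticity(rgb)
    hist, _, _ = np.histogram2d(rg[:, 0], rg[:, 1], bins=BINS, range=[[0, 1], [0, 1]])
    return (hist > 0).astype(float).ravel()

# Generate thousands of synthetic scenes with their illuminant chromaticities.
X, y = [], []
for _ in range(5000):
    rgb, illum = random_scene()
    X.append(features(rgb))
    y.append(chromaticity(illum[None, :])[0])
X, y = np.array(X), np.array(y)

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X, y)

# After training, the network estimates the illuminant chromaticity from image data alone.
rgb, illum = random_scene()
est = net.predict(features(rgb)[None, :])[0]
print("true illuminant chromaticity:", chromaticity(illum[None, :])[0])
print("estimated:", est)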

Description: 

Presented at the IS&T/SID Fourth Color Imaging Conference, Nov. 1996.

Language: 
English
Document type: 
Conference presentation
Rights: 
Rights remain with the authors.