School of Computing Science

Synthesis of Acoustic Timbres using Principal Component Analysis

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
1990
Abstract: 

We have developed an alternative method of representing the harmonic amplitude envelopes of musical instrument sounds using principal component analysis. Statistical analysis reveals considerable correlation between the harmonic amplitude values at different time positions in the envelopes. This correlation is exploited to reduce the dimensionality of envelope specification. It was found that two or three parameters provide a reasonable approximation to the different harmonic envelope curves present in musical instrument sounds. The representation is suited for the development of high-level control mechanisms for manipulating the timbre of resynthesized harmonic sounds.
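
A minimal sketch of the dimensionality reduction described above, assuming NumPy, synthetic envelope data, and a two-component basis (all illustrative assumptions rather than the authors' materials):

```python
import numpy as np

# Hypothetical data: one harmonic amplitude envelope per row,
# sampled at 100 time positions (values here are synthetic).
rng = np.random.default_rng(0)
envelopes = rng.random((50, 100))

# Center the data and compute principal components via SVD.
mean_env = envelopes.mean(axis=0)
centered = envelopes - mean_env
_, _, components = np.linalg.svd(centered, full_matrices=False)

# Keep two components, as the abstract suggests two or three suffice.
k = 2
basis = components[:k]           # (k, 100) principal-component curves
weights = centered @ basis.T     # (50, k) low-dimensional envelope parameters

# Resynthesize approximate envelopes from the k parameters.
approx = mean_env + weights @ basis
print("reconstruction RMS error:", np.sqrt(np.mean((approx - envelopes) ** 2)))
```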

Document type: 
Conference presentation
File(s): 

Color from Black and White

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
1989
Abstract: 

Color constancy can be achieved by analyzing the chromatic aberration in an image. Chromatic aberration spatially separates light of different wavelengths and this allows the spectral power distribution of the light to be extracted. This is more information about the light than is registered by the cones of the human visual system or by a color television camera; and, using it, we show how color constancy, the separation of reflectance from illumination, can be achieved. As examples, we consider grey-level images of (a) a colored dot under unknown illumination, and (b) an edge between two differently colored regions under unknown illumination. Our first result is that in principle we can determine completely the spectral power distribution of the reflected light from the dot or, in the case of the color edge, the difference in the spectral power distributions of the light from the two regions. By employing a finite-dimensional linear model of illumination and surface reflectance, we obtain our second result, which is that the spectrum of the reflected light can be uniquely decomposed into a component due to the illuminant and another component due to the surface reflectance. This decomposition provides the complete spectral reflectance function, and hence color, of the surface as well as the spectral power distribution of the illuminant. Up to the limit of the accuracy of the finite-dimensional model, this effectively solves the color constancy problem.
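
The finite-dimensional linear model can be sketched as a small fitting problem. The example below assumes the color-signal spectrum C(λ) = E(λ)S(λ) has already been recovered, models illumination and reflectance with three placeholder basis functions each, and uses a generic alternating-least-squares fit as a stand-in for the paper's recovery procedure; the basis functions, spectra, and weights are all synthetic assumptions.

```python
import numpy as np

# Placeholder wavelength sampling and basis functions (3 for illumination,
# 3 for reflectance); real work would use measured basis sets.
wavelengths = np.linspace(400, 700, 31)
E_basis = np.stack([np.ones_like(wavelengths),
                    wavelengths / 700.0,
                    (wavelengths / 700.0) ** 2])      # (3, 31)
S_basis = np.stack([np.ones_like(wavelengths),
                    np.sin(wavelengths / 50.0),
                    np.cos(wavelengths / 50.0)])      # (3, 31)

# Synthetic "true" weights and the resulting color signal C = E * S.
a_true = np.array([1.0, -0.3, 0.2])
b_true = np.array([0.5, 0.2, -0.1])
C = (a_true @ E_basis) * (b_true @ S_basis)

# Alternating least squares: the bilinear problem C = (a·E)(b·S) is linear
# in b when a is fixed, and vice versa.
a = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    E = a @ E_basis
    b, *_ = np.linalg.lstsq((S_basis * E).T, C, rcond=None)
    S = b @ S_basis
    a, *_ = np.linalg.lstsq((E_basis * S).T, C, rcond=None)

# Note: in this naive sketch the decomposition is recovered only up to an
# overall scale factor shared between illuminant and reflectance.
print("recovered illuminant weights:", a)
print("recovered reflectance weights:", b)
```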

Document type: 
Article
File(s): 

Color Constancy from Mutual Reflection

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
1991
Abstract: 

Mutual reflection occurs when light reflected from one surface illuminates a second surface. In this situation, the color of one or both surfaces can be modified by a color-bleeding effect. In this article we examine how sensor values (e.g., RGB values) are modified in the mutual reflection region and show that a good approximation of the surface spectral reflectance function for each surface can be recovered by using the extra information from mutual reflection. Thus color constancy results from an examination of mutual reflection. Use is made of finite-dimensional linear models for ambient illumination and for surface spectral reflectance. If m and n are the number of basis functions required to model illumination and surface spectral reflectance respectively, then we find that the number of different sensor classes p must satisfy the condition p ≥ (2n + m)/3. If we use three basis functions to model illumination and three basis functions to model surface spectral reflectance, then only three classes of sensors are required to carry out the algorithm. Results are presented showing a small increase in error over the error inherent in the underlying finite-dimensional models.
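
The stated bound on the number of sensor classes can be checked with a line of arithmetic; the helper below is a hypothetical convenience function that reproduces the three-sensor case mentioned at the end of the abstract.

```python
import math

def min_sensor_classes(m: int, n: int) -> int:
    """Smallest integer p satisfying p >= (2*n + m) / 3."""
    return math.ceil((2 * n + m) / 3)

# With three basis functions each for illumination and reflectance
# (m = n = 3), three sensor classes suffice, as stated in the abstract.
print(min_sensor_classes(m=3, n=3))   # -> 3
```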

Document type: 
Article
File(s): 

Natural Metamers

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
1992
Abstract: 

Given only a color camera's RGB measurement of a complete color signal spectrum, how can the spectrum be estimated? We propose and test a new method that answers this question and recovers an approximating spectrum. Although this approximation has intrinsic interest, our main focus is on using it to generate tristimulus values for color reproduction. In essence, this provides a new method of converting color camera signals to tristimulus coordinates, because a spectrum defines a unique point in tristimulus coordinates. Color reproduction is founded on producing spectra that are metamers to those appearing in the original scene. Once a spectrum's tristimulus coordinates are known, generating a metamer is a well-defined problem. Unfortunately, most color cameras cannot produce the necessary tristimulus coordinates directly because their color separation filters are not related by a linear transformation to the human color-matching functions. Color cameras are more likely to reproduce colors that look correct to the camera than to a human observer. Conversion from camera RGB triples to tristimulus values will always involve some type of estimation procedure unless cameras are redesigned. We compare the accuracy of our conversion strategy to that of one based on Horn's work on the exact reproduction of colored images. Our new method relies on expressing the color signal spectrum in terms of a linear combination of basis functions. The results show that a principal component analysis in color-signal space yields the best basis for our purposes, since using it leads to the most “natural” color signal spectrum that is statistically likely to have generated a given camera signal.
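
The conversion described above can be sketched as a small linear system: the camera's three RGB values determine the three weights of a basis expansion of the color signal, and tristimulus values follow by projecting the recovered spectrum onto color-matching functions. In the illustration below the camera sensitivities, color-matching functions, and basis are synthetic placeholders rather than the measured data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 700, 31)

# Placeholder spectral data; real work would use measured camera
# sensitivities, CIE color-matching functions, and a PCA basis derived
# from a database of color signals.
camera_sens = rng.random((3, 31))        # camera R, G, B sensitivities
cmf = rng.random((3, 31))                # color-matching functions
signal_basis = rng.random((3, 31))       # first three basis color signals

def rgb_to_tristimulus(rgb: np.ndarray) -> np.ndarray:
    """Estimate a spectrum consistent with the camera RGB, then project
    it onto the color-matching functions to get tristimulus values."""
    # Camera response to each basis spectrum: a 3x3 matrix.
    A = camera_sens @ signal_basis.T
    # Basis weights of the approximating spectrum.
    w = np.linalg.solve(A, rgb)
    spectrum = w @ signal_basis
    return cmf @ spectrum

# Example: tristimulus estimate for one camera measurement.
true_spectrum = rng.random(31)
rgb = camera_sens @ true_spectrum
print(rgb_to_tristimulus(rgb))
```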

Document type: 
Article
File(s): 

Experiential Reasoning

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
1992-03
Abstract: 

At the time I developed Whisper in 1976, I encountered extremely skeptical, often downright hostile, audiences, and as a result I (mistakenly) published only an expurgated account of my thoughts about analog representations [1]. I am delighted that times have changed a bit, and this symposium seems like an appropriate forum for a little speculation and re-evaluation.

Document type: 
Conference presentation
File(s): 

Learning Color Constancy

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
1996-11
Abstract: 

We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of the scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illuminant from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting 'scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. We obtained surprisingly good estimates of the ambient illumination from the network, even when it was applied to scenes in our lab that were completely unrelated to the training data.
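
A minimal sketch of this kind of learning, assuming scikit-learn, a binarized rg-chromaticity histogram as the network input, and synthetic training scenes (the histogram binning, network size, and data are illustrative assumptions, not the authors' setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
N_BINS = 16  # rg-chromaticity histogram resolution (assumed)

def rg_histogram(rgb_pixels: np.ndarray) -> np.ndarray:
    """Binarized rg-chromaticity histogram of an image's pixels."""
    s = rgb_pixels.sum(axis=1, keepdims=True)
    rg = rgb_pixels[:, :2] / np.clip(s, 1e-6, None)
    hist, _, _ = np.histogram2d(rg[:, 0], rg[:, 1],
                                bins=N_BINS, range=[[0, 1], [0, 1]])
    return (hist > 0).astype(float).ravel()

# Synthetic training set: random "scenes" of random reflectances under a
# random illuminant; the target is the illuminant's rg chromaticity.
X, y = [], []
for _ in range(2000):
    illum = rng.uniform(0.2, 1.0, size=3)
    reflectances = rng.uniform(0.0, 1.0, size=(rng.integers(1, 61), 3))
    pixels = reflectances * illum
    X.append(rg_histogram(pixels))
    y.append(illum[:2] / illum.sum())

net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
net.fit(np.array(X), np.array(y))

# After training, the network maps an image's histogram to an estimate of
# the scene illuminant's rg chromaticity.
print(net.predict(np.array(X[:1])))
```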

Document type: 
Conference presentation
File(s): 

Bootstrapping Color Constancy

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
1999-01
Abstract: 

Bootstrapping provides a novel approach to training a neural network to estimate the chromaticity of the illuminant in a scene given image data alone. For initial training, the network requires feedback about the accuracy of the network’s current results. In the case of a network for color constancy, this feedback is the chromaticity of the incident scene illumination. In the past [1], perfect feedback has been used, but in the bootstrapping method feedback with a considerable degree of random error can be used to train the network instead. In particular, the grayworld algorithm [2], which only provides modest color constancy performance, is used to train a neural network which in the end performs better than the grayworld algorithm used to train it.
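
The bootstrap feedback can be sketched by substituting a gray-world estimate for the measured illuminant when training labels are generated; the function below is an illustrative assumption of how such noisy labels might be produced, not the paper's implementation.

```python
import numpy as np

def grayworld_chromaticity(image_rgb: np.ndarray) -> np.ndarray:
    """Gray-world illuminant estimate: the mean image RGB, expressed as an
    rg chromaticity. Used as a noisy training label in place of the true
    measured illuminant."""
    mean_rgb = image_rgb.reshape(-1, 3).mean(axis=0)
    return mean_rgb[:2] / mean_rgb.sum()

# Training pairs would then be (image data, grayworld_chromaticity(image))
# rather than (image data, measured illuminant chromaticity); the network
# trained on these noisy labels can end up outperforming gray-world itself.
```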

Document type: 
Conference presentation
File(s): 

Tuning Retinex Parameters

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2004-01
Abstract: 

Our goal is to understand how the Retinex parameters affect the predictions of the model. A simplified Retinex computation is specified in the recent MATLAB™ implementation; however, there remain several free parameters that introduce significant variability into the model’s predictions. We extend previous work on specifying these parameters. In particular, instead of looking for fixed values for the parameters, we establish methods that automatically determine values for them based on the input image. These methods are tested on the McCann-McKee-Taylor asymmetric matching data, along with some previously unpublished data that include simultaneous contrast targets.

Document type: 
Article
File(s): 

A Large Image Database for Color Constancy Research

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2003-11
Abstract: 

We present a study on various statistics relevant to research on color constancy. Many of these analyses could not have been done before simply because a large database for color constancy was not available. Our image database consists of approximately 11,000 images in which the RGB color of the ambient illuminant in each scene is measured. To build such a large database we used a novel set-up consisting of a digital video camera with a neutral gray sphere attached to the camera so that the sphere always appears in the field of view. Using a gray sphere instead of the standard gray card facilitates measurement of the variation in illumination as a function of incident angle. The study focuses on the analysis of the distribution of various illuminants in the natural scenes and the correlation between the rg-chromaticity of colors recorded by the camera and the rg-chromaticity of the ambient illuminant. We also investigate the possibility of improving the performance of the naïve Gray World algorithm by considering a sequence of consecutive frames instead of a single image. The set of images is publicly available and can also be used as a database for testing color constancy algorithms.
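
Two quantities central to the study, the rg chromaticity of recorded colors and a gray-world estimate pooled over consecutive frames, can be sketched as follows; the frame data and frame count are placeholders.

```python
import numpy as np

def rg_chromaticity(rgb: np.ndarray) -> np.ndarray:
    """Convert RGB values (..., 3) to rg chromaticities (..., 2)."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb[..., :2] / np.clip(s, 1e-6, None)

def multi_frame_grayworld(frames: list[np.ndarray]) -> np.ndarray:
    """Gray-world illuminant estimate averaged over consecutive frames,
    returned as an rg chromaticity."""
    mean_rgb = np.mean([f.reshape(-1, 3).mean(axis=0) for f in frames], axis=0)
    return rg_chromaticity(mean_rgb)

# Example with synthetic frames (each frame: H x W x 3 RGB array).
rng = np.random.default_rng(3)
frames = [rng.random((480, 640, 3)) for _ in range(5)]
print(multi_frame_grayworld(frames))
```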

Document type: 
Conference presentation
File(s): 

Estimating Illumination Chromaticity via Support Vector Regression

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2004-11
Abstract: 

The technique of support vector regression is applied to the problem of estimating the chromaticity of the light illuminating a scene from a color histogram of an image of the scene. Illumination estimation is fundamental to white balancing digital color images and to understanding human color constancy. Under controlled experimental conditions, the support vector method is shown to perform better than the neural network and color by correlation methods.
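
A minimal sketch of support vector regression applied to this estimation problem, assuming scikit-learn, a binarized chromaticity histogram as the feature vector, and one regressor per chromaticity component (the data are synthetic placeholders):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
N_BINS = 16  # chromaticity histogram resolution (assumed)

def chromaticity_histogram(pixels: np.ndarray) -> np.ndarray:
    """Binarized rg-chromaticity histogram of an image's pixels."""
    rg = pixels[:, :2] / np.clip(pixels.sum(axis=1, keepdims=True), 1e-6, None)
    hist, _, _ = np.histogram2d(rg[:, 0], rg[:, 1],
                                bins=N_BINS, range=[[0, 1], [0, 1]])
    return (hist > 0).astype(float).ravel()

# Synthetic training data: scenes of random reflectances under random lights.
X, y = [], []
for _ in range(1000):
    illum = rng.uniform(0.2, 1.0, size=3)
    pixels = rng.uniform(0.0, 1.0, size=(50, 3)) * illum
    X.append(chromaticity_histogram(pixels))
    y.append(illum[:2] / illum.sum())
X, y = np.array(X), np.array(y)

# SVR is single-output, so fit one regressor per chromaticity component.
svr_r = SVR(kernel="rbf").fit(X, y[:, 0])
svr_g = SVR(kernel="rbf").fit(X, y[:, 1])
print(svr_r.predict(X[:1]), svr_g.predict(X[:1]))
```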

Document type: 
Conference presentation
File(s):