Deep neural networks enable color constancy algorithms to estimate the global illumination of a scene so that it can be corrected to appear as if it were captured under neutral white light. However, when a network is trained on images from a single camera and a single scene, it learns the characteristics of that camera and fails to generalize across different cameras and scenes. A further issue is that labeled image datasets for learning the chromaticity of the illumination are scarce. Autoencoders offer a promising way to exploit the underlying chromatic structure of images by learning from large numbers of unlabeled images, enabling good illuminant estimation on unseen images. We introduce a novel color constancy algorithm that auto-encodes a large dataset of images and uses the resulting model to estimate the illumination. We explore two approaches: in the first, we learn a common representation of the images and then fine-tune the model to estimate the illumination; in the second, we combine the two steps into one using a composite objective function that learns to reconstruct the input while simultaneously regressing to the illuminant.
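The composite objective of the second approach can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the MSE reconstruction term, the angular-error regression term (a standard color constancy metric), and the weighting factor `lam` are all assumptions made for the sake of the example.

```python
import numpy as np

def angular_error(pred, target):
    """Angular error in degrees between predicted and ground-truth illuminant RGB vectors."""
    cos = np.dot(pred, target) / (np.linalg.norm(pred) * np.linalg.norm(target))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def composite_loss(recon, image, pred_illum, true_illum, lam=0.5):
    """Composite objective: reconstruction MSE plus a weighted illuminant-regression term.

    `lam` balances reconstruction against illuminant estimation; its value
    here is purely illustrative and would be tuned in practice.
    """
    recon_term = np.mean((recon - image) ** 2)   # how well the autoencoder reproduces the input
    illum_term = angular_error(pred_illum, true_illum)  # how close the illuminant estimate is
    return recon_term + lam * illum_term
```

A perfect reconstruction with a perfect illuminant estimate drives both terms, and hence the loss, to zero; in the first (two-stage) approach, the same two terms would instead be optimized sequentially rather than jointly.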