We are working on a research project to create an automated planetary image analysis system. The goal is to teach a computer, using examples, how to recognise features such as craters and dunes, and then have it measure these features automatically in new images. This is an STFC-funded project run by Dr Neil Thacker, Prof. Jamie Gilmour, Dr Merren Jones, and PhD research student Paul Tar (myself). The project has been running for the past three years in collaboration with the Earth, Atmospheric and Environmental Sciences and the Imaging Science & Biomedical Engineering departments at the University of Manchester.
Many thousands of images are being beamed back from across the solar system. From the Moon, Mars, Mercury, and the asteroid Vesta, we have high-resolution images covering a large percentage of these surfaces. The features found in these images are well worth studying: craters can be used to determine surface ages; dunes can tell us about weather patterns and grain availability; fissures can tell us about tectonic activity.
However, there are too few trained planetary scientists to go through and analyse every image. Many methods have been proposed to automate the counting of craters and the identification of fissures and channels, but none has been widely adopted because researchers simply don't trust them: without checking the results manually, it is hard to believe the outputs. An automated analysis tool will make mistakes; there is no getting around that. There will always be noise in measurements, and there will always be misidentified features or failed attempts to identify features.
The difference between our method and others, and the reason we believe our approach will lead to a system that really is trustworthy, is the way we handle uncertainties and errors. The standard approach to quantifying the mistakes an automated pattern recognition system makes is to apply it to test data for which true measurements and true feature counts are known, i.e. data with ground truth. The outputs of the automated system are compared to the known answers and a percentage of mistakes is computed. These empirical error rates are then assumed to be indicative of the system's performance on future images where the ground-truth values are not known. But when analysing something as varied and complex as, say, the surface of Mars, there is no guarantee that these empirical error estimates will be applicable to every image seen in practice.
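To make the standard approach concrete, here is a minimal sketch (with hypothetical names, not the project's actual code) of how an empirical error rate is obtained: the system's labels for a test set are compared against hand-made ground-truth labels, and the fraction of disagreements is taken as the expected future error rate.

```python
def empirical_error_rate(predicted_labels, ground_truth_labels):
    """Fraction of test examples the automated system got wrong."""
    if len(predicted_labels) != len(ground_truth_labels):
        raise ValueError("label lists must be the same length")
    mistakes = sum(p != t for p, t in zip(predicted_labels, ground_truth_labels))
    return mistakes / len(ground_truth_labels)

# Example: the system disagrees with a human expert on 2 of 8 features.
predicted = ["crater", "crater", "dune", "crater", "dune", "dune", "crater", "dune"]
truth     = ["crater", "dune",   "dune", "crater", "dune", "crater", "crater", "dune"]
print(empirical_error_rate(predicted, truth))  # -> 0.25
```

The weakness is visible in the code itself: the single number it produces is tied to one particular test set, and nothing guarantees it transfers to new images.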
Our approach, in contrast, is based on a theoretical understanding of uncertainties. We have a theory capable of predicting how well our technique will work on an image-by-image basis, without needing to know the ground-truth values. Rather than relying on a one-size-fits-all empirical estimate of errors, we can compute on a case-by-case basis how well our algorithm will perform. We will still make mistakes, especially when the images being analysed are very different from the ones used to train our software, but the point is that our software, through its error theory, can automatically assess how well it did. This is what will give researchers the confidence to use our method. In short, we can give quantitative measurements with predictable errors on a case-by-case basis – something other methods cannot do.
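As a generic illustration of what "a measurement with a predicted error, computed from the image itself" means, consider the simplest possible case: if feature detections behaved as independent random events, counting statistics alone would attach an uncertainty of roughly the square root of the count. This toy sketch is emphatically not our actual error theory, which is more sophisticated, but it shows the principle of reporting a per-image error estimate without any ground truth.

```python
import math

def count_with_predicted_error(count):
    """Return a feature count plus a crude predicted standard error.

    Toy illustration only: under a simple Poisson-counting assumption,
    the standard deviation of a count n is sqrt(n). The key point is
    that the error estimate comes from the measurement itself, not
    from a separate ground-truth test set.
    """
    return count, math.sqrt(count)

n, sigma = count_with_predicted_error(400)
print(f"{n} craters +/- {sigma:.0f}")  # -> 400 craters +/- 20
```

A real system must also account for misidentified and missed features, which is exactly what our theory adds on top of simple counting statistics.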
The method has been corroborated using simulated data and real Martian image data. The software is in an early experimental form, and at least a year's worth of work is needed to turn it into a practical system that can be applied to real planetary science problems. We will be presenting our methods at the RSPSoc 2012 conference in Greenwich in September. You can follow our work on Facebook at maptheplanetsproject, or on Twitter @maptheplanets. We have been featured in Levenshulme Life news and have recently been interviewed for the November edition of Sky at Night Magazine.