It's OK to be sensitive
I'm prompted to write this by discussions on stealth in space, but really it could apply to absolutely anything. I don't have time to write this up more fully, so this'll have to do.
When we search for something, what do we mean by the "sensitivity" level of our survey? There are three basic ideas everyone should be aware of.
1) Sensitivity. Any survey is going to have some theoretical hard limit. Astronomical surveys always have noise; political surveys always have errors. If a source is below your noise level you have no chance of detecting it; if your question was flawed there are answers you won't be able to obtain. You might be able to improve this by doing a longer integration or asking more people, but with the data you've actually got, you're always limited. Above this limit, you might be able to detect something. Here's where it gets more subtle and often complicated - it's a very bad idea to take a given "sensitivity limit" and apply it without thinking more deeply about what it means.
You're going to have some procedure for extracting the data that you're interested in, e.g. the number of stars in a given region. This procedure, like the data itself, will have its own errors. You're going to find some sources which aren't really stars at all, and completely miss some things which are real stars. In general, the closer a source (be that a star or anything else that's detected, even if it isn't real) comes to the noise level, the more problems will result. In particular:
2) Reliability. Some of what you detect will be real, but some of it won't. Reliability is defined as the fraction of things you find which are really what you were looking for. If you go looking for elephants and find 100, but when you take a closer look at your photographs later on you realise that 25 of them were actually cardboard cut-outs that looked like elephants, then your survey is 75% reliable.
For the definition itself, the total number of elephants actually present is irrelevant. In practice, if you've got 900 real elephants and 100 fake elephants in a dense forest, this is going to be a lot harder than 9 real elephants and 1 fake elephant in an open field. However, reliability can be quantified relatively easily: you have to go back, examine each source more carefully and, if necessary, get better data for each one. There's usually some definitive test you can do to distinguish between an elephant and a stick or a very large horse. One should keep in mind that this will vary depending on the particular circumstances; an automatic elephant-finder might be 50% reliable on average, but 99% reliable in open sandy deserts and 2% reliable in dark grey rocky canyons. So even reliability figures must be handled carefully.
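For anyone who prefers arithmetic to elephants, the definition above boils down to a one-line calculation. This is just a sketch using the invented counts from the example:

```python
# Reliability: the fraction of detections that turn out to be real.
# Counts are the hypothetical ones from the elephant example above.
detections = 100   # elephant-shaped objects found in the survey
real = 75          # confirmed elephants after closer inspection
fake = 25          # cardboard cut-outs

reliability = real / detections
print(f"Reliability: {reliability:.0%}")  # prints "Reliability: 75%"
```

Note that nothing in this calculation involves the elephants the survey never saw - which is exactly why reliability alone tells you so little.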
3) Completeness. This refers to the fraction of real sources you're interested in that you actually detect. Say you survey a volume of space containing 100 stars and you detect 99 of them. Then your survey is 99% complete. Easy peasy, except that it isn't. In practice, you very rarely actually know how many sources are really present. Quantifying completeness can be much harder than quantifying reliability, because you can't measure what you haven't detected. You can make some approximations based on those things you do detect, and hope that the Universe isn't full of stuff you haven't accounted for, but you can't be certain.
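Completeness looks just as simple on paper, with the caveat stressed above: the true total is only known here because we made it up. In a real survey it can only ever be estimated:

```python
# Completeness: the fraction of real sources actually detected.
# The denominator (100 stars) is known only because this is a toy example;
# in a real survey it must be estimated, which is the hard part.
real_sources = 100
detected = 99

completeness = detected / real_sources
print(f"Completeness: {completeness:.0%}")  # prints "Completeness: 99%"
```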
Galaxies are a bit of a nicer example than stars because they're extended on the sky. Consider two galaxies of the same total brightness but with one 10 times bigger than the other. You might think that the larger one is easier to detect because it's bigger, but this is not necessarily so - its light will be much more spread out, so its emission will be everywhere closer to the theoretical sensitivity limit. Whether you detect it or not will depend very strongly indeed on your survey capabilities and your analysis methods. And at the other extreme, if the galaxy was much further away it could look so small you'd confuse it for a star.
What's that? You say your fancy algorithm can overcome this? You're wrong. These issues apply equally to humans and algorithms searching data. Now you can, to some extent, improve the quality of the data to improve your completeness and reliability. In astronomy, for example, it's far more subtle than just doing a deeper survey - different observing methods produce very different structures in the noise, which can sometimes create features that are literally impossible to distinguish from real sources without doing a second, independent observation. No amount of clever machine learning will ever get around that. So choosing a better type of survey, or asking better questions, can get you a much better result than just taking a longer exposure or asking more people. But even these improvements have limits.
A good example is the recent claim of a drone which automatically detects sharks (http://www.bbc.com/news/av/world-australia-41640146/a-bird-s-eye-view-of-sharks), which apparently has a 92% reliability rate. The problem is that this tells you absolutely nothing (assuming the journalists used the term correctly) about its completeness! It might be generating a catalogue of 100 shark-shaped objects, of which 92 turn out to be real sharks, but there could in principle be thousands of sharks it didn't spot at all. Of course that's very unlikely, but you get the point.
Completeness and reliability both vary depending on the type of thing you're trying to observe and the method you're using. For example, the drone might detect 92% of all Great Whites but miss 92% of all tiger sharks (for some reason). Or your survey might be great at detecting stars and other point sources but be miserable at finding extended sources. For any survey, the closer the characteristics of your target are to the theoretical limit, the more problems you'll have for both completeness and reliability. In short, the fact that something is theoretically detectable tells you very little at all about whether it will be detected in reality.
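The point that these figures vary with the type of target can be made concrete with a quick sketch. All of the counts below are invented purely for illustration - the BBC report gives no such breakdown:

```python
# Hypothetical per-species completeness for the shark-spotting drone.
# These numbers are made up to illustrate how an average can mislead.
true_counts = {"great white": 50, "tiger shark": 50}
detected_counts = {"great white": 46, "tiger shark": 4}

for species, total in true_counts.items():
    completeness = detected_counts[species] / total
    print(f"{species}: {completeness:.0%} complete")
# great white: 92% complete
# tiger shark: 8% complete
```

An overall figure of 50% completeness would completely hide the fact that, in this invented scenario, the drone is nearly useless for tiger sharks.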
So for stealth in space, it's imperative to define very carefully what you mean by stealthy. Do you mean you want a Klingon battle cruiser that can sneak up and poke you in the backside before you notice it? Or do you just want to hide a lump of coal in the next star system across where, hell, it's difficult enough to detect an entire planet? What level of risk are you willing to accept that it won't be detected - or might be detected, but not actually flagged as an object of interest? Because whatever survey is looking for your stealth ships is gonna have some level of completeness and reliability which will depend very strongly on the characteristics of what it's searching for. It might very well record photons from the ship, but that tells you nothing about whether anyone will actually notice it.
Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby