Here's a nice Aeon piece about another favourite philosophical conundrum: what is knowledge?
There would seem to be two broad aspects to this: how we actually evaluate data and how we should evaluate data. Like it or not, we tend to let emotions and a host of biases influence how we respond to new information, which makes it all the more important to work out what the correct, rational approach is, the one our biases are obscuring.
Science posits at least a partial answer to the latter. Facts, at least, are relatively easy in the scientific world view. Facts are those things established by repeated observations from multiple independent observers, giving consistent results across different observational methods. The more of those criteria you meet, the more secure your data point. At a purely practical level, above some threshold it makes sense to accept some things as hard, certain facts. True, measurements always have instrumental and fundamental limitations, but for everyday life it's often perfectly safe to drop this caveat and use "certain" when we really mean "as confident as we can ever be".
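The intuition that repeated, independent observations make a data point more secure can be given a toy numerical form. The sketch below is my own illustration, not from the article, and uses the standard error of the mean, which (for independent measurements with similar scatter) shrinks roughly as 1/√N:

```python
import math

def standard_error(measurements):
    """Standard error of the mean: for independent measurements with
    similar per-reading scatter, it shrinks roughly as 1/sqrt(N)."""
    n = len(measurements)
    mean = sum(measurements) / n
    sample_var = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return math.sqrt(sample_var / n)

# Two hypothetical observing campaigns measuring the same quantity:
# five readings, versus the same scatter repeated a hundred times over.
few = [9.8, 10.3, 10.1, 9.7, 10.2]
many = few * 100  # same per-reading scatter, 100x the observations

print(standard_error(few))   # larger uncertainty
print(standard_error(many))  # much smaller: the "fact" is more secure
```

The point is only that accumulating independent observations drives the practical uncertainty below any threshold you care about, which is what lets us treat some measured things as effectively certain.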
Models, though, that’s where it gets tricky. It’s under-appreciated that wrong models can still get highly specific details right. And even where they appear to be wrong, sometimes this is only because of a host of implicit assumptions that were overlooked. So while one can apply the same basic conditions of truth to models as one can to facts, the evaluation is always more difficult. Consistency with one model does not automatically imply inconsistency with another.
But, while nothing can be absolutely certain in the strictest philosophical sense, once we accept some common, practical restrictions as to what we mean by “knowledge”, a kind of certainty can be happily recovered. And this applies to models too: the shape of the Earth, for example, makes testable predictions and can be legitimately described as both model and fact. While I very much like the analogy described here, that a model posits explanatory mechanisms which are unknown or even unknowable, with science concerning itself only with purely measurable phenomena, this is not always true. It's an extremely appropriate analogy for novel, forefront research, but it doesn't work well at all for more established findings. You can't really say that atoms are uncertain anymore, or that evolution is just an idea. Those models are also things we can say we know to be true, within any useful definition of knowledge.
But... who has to know things? Here things get even more difficult. I always like the health warnings along the lines of "it is known to the State of California that this chemical causes cancer", as though there were something extraordinarily special about California that was beyond the ken of mortal men. Clearly, California is not the fount of all knowledge. Here we crash headlong into the question of who and what we can trust:
There is the larger question of what justifies our beliefs, and there is the narrower question about how justification factors into the life of a thinking, enquiring person. For internalists in epistemology, my belief cannot be justified unless its justifier can somehow be appreciated by me. Externalists deny this; they say I can have a justified belief even if I can’t check whatever makes it credible.
Is this an academic debate? Absolutely, but I don’t think it’s merely academic. True, you’re unlikely to learn about internalism and externalism unless you’re taking upper-level philosophy courses. Nevertheless, what’s up for grabs are rival conceptions of ourselves as knowers and enquirers, and which conception will take precedence in the theory of knowledge. When we ask the hard questions about our fallible intellects, where should we start? What’s our foundational picture?
Perhaps these questions deserve better outreach? More simply put, the question would appear to be, "can I trust an expert in a field I myself don't understand?". My answer to which is "no, never completely, but always more than I can trust my own assessment." Of course, deciding that they're an expert in the first place raises a whole other layer of difficulty. So how do we get at some basic, hard level of knowledge with which we might judge this?
Some aspects of this appear to be quite silly:
I have a justified belief about where my dog is because I had an experience reminiscent of the sounds of her feet. And the experiences themselves need no justification, since they are, as the philosopher Roderick Chisholm put it in 1966, self-presenting. They make themselves known. And how could they not? What could be more luminous and manifest than conscious experience?
Well, opinions differ. Wilfrid Sellars articulated a lasting difficulty for the self-presentation idea: raw experience isn’t fit to justify... If you ask me: ‘What reason do you have to think your dog is nearby?’ what good is it to indicate my unrepeatable, inarticulate inner episodes? What good is a certificate of authenticity that can’t be shown? I couldn’t even cite my experience to myself, because as soon as I have the experience, it’s gone.
Seeing as how all our perceptions are internal and subjective, this would seem to be utterly daft: you could raise the same objections to literally anything, and have no foundations for any sort of knowledge at all. To build on a recent post, it would seem self-evident that whenever we talk of knowledge, the applicable conditions must involve sensory perception. We cannot possibly talk of knowledge in a framework totally independent of perception, even if perception cannot be all there is to it.
So, hearing the dog provides good evidence for the presence of a dog, and in ordinary terms that evidence is more than sufficient to constitute knowledge. The chance that we're being deceived in some way is ordinarily so low as to be negligible, and we can ignore this possibility until we have direct evidence that this is the case. Normally we would anyway have perceived that the wider circumstances are the same as usual except for the sound of the dog, so we'd actually have good grounds for believing that no-one is out to trick us. We'd only go around actively worrying about someone trying to fool us with recordings of dog noises if a) we had other reasons to suspect someone, b) we were super paranoid, or of course c) we're in a philosophy class discussing knowledge. Those are the only three possibilities.
Every internalist view, even weaker varieties, says that justification has to be accessible from your perspective. Justification always comes from inside. But what is this accessibility? It’s usually imagined as something you could have from the armchair, reflecting on your thoughts, so if you’re justified in believing anything, you can find that justification here and now by looking within. Reflection becomes the means of justifying your beliefs, and that places a huge burden on the shoulders of reflection.
...anyone not sufficiently reflective can’t have justified beliefs. We often talk about what very young children know, even before they learn to talk, and we say that our fellow creatures know things too. I might tell you about an eastern phoebe that knows and remembers where her nest is, but can’t tell that one of her nestlings is a cowbird.
Internalism captures the hard rigour of philosophy and science, where proof and argument are the coin of the realm, but when we start taking the credibility of the beliefs of children and nonhuman animals seriously, it comes up short. They typically do not or cannot reflect, so internalism would have us deny that they can have justified beliefs. The externalist says: ‘So much the worse for internalism.’
It isn't quite clear to me whether the author means to say that animals do or do not know things. We might fairly say that children, animals and stupid people alike all just have opinions (or unjustified beliefs), not knowledge. As do very clever people outside their comfort zones, or at best beliefs with only very weak justification. Correlation isn't causation, but correlation does provide evidence - it's just not the whole story. Only a belief with sufficiently rigorous justification might count as knowledge, but we needn't have certainty in all things in order to act. A songbird doesn't need the same qualitative kind of knowledge as an architect in order to build a nest. It isn't really necessary to say that a child "knows" the Earth goes around the Sun for them to be able to repeat it parrot-fashion; their knowledge is at best crude compared to a qualified astronomer's, and more akin to an opinion - it just happens in this case to be correct.
The article goes on to describe an alternative method of justification:
You can form justified beliefs based on what you see while having no insight into how vision works, or even into the overall reliability of your visual system. For reliabilism, justification flows from the reliability of the process, not its accessibility to consciousness. Hence, reliabilism is an externalist theory, not internalist.
Without the burden of accessibility, externalism can account for the credibility of non-reflective thinkers, such as birds, dogs and toddlers. Frank Ramsey once compared beliefs to maps, so if we model thinking on the production of and navigation by inner maps, it makes sense why we would bring our fellow creatures into the fold. Every thinking thing needs to find its way through environments where the locations of food, friends and enemies can change. So when we’re thinking about credibility and justification for the beliefs of such creatures, we’re interested in what it takes for those creatures to succeed. They need senses that put them in contact with the world. They need reliable processes to lean on.
These two proposals don't appear to be so at odds to me. I, as a thinking being, can form conclusions and beliefs based on pure correlation - I will have rather weakly justified opinions, but opinions nonetheless. If the correlation is very good, with a slope close to 1 and a small scatter, my justification is reasonable, however imperfect it may still be. But I can also go a step further and posit explanations for the correlation - in addition to, rather than instead of, considering the degree of reliability, I can also reflect on what's going on. Reliability would appear to be one possible aspect of justification, not a fully-fledged alternative to it.
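To make the slope-and-scatter point concrete: the strength of a correlation can be quantified, and a tighter correlation gives stronger (though still defeasible) justification. A minimal sketch with made-up numbers, assuming an ordinary least-squares fit:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sxx, sxy / (sxx * syy) ** 0.5  # (slope, r)

xs = [1, 2, 3, 4, 5]
tight = [1.1, 1.9, 3.2, 3.9, 5.0]   # slope near 1, small scatter
noisy = [0.2, 3.5, 1.0, 5.5, 2.8]   # same rough trend, much larger scatter

slope_t, r_t = fit_line(xs, tight)
slope_n, r_n = fit_line(xs, noisy)

# The tighter the correlation, the stronger the (still imperfect) justification.
assert r_t > r_n
```

Either dataset licenses a belief that y rises with x; the tight one simply licenses it more strongly. Reflecting on why the two quantities move together is then a further layer of justification on top of the bare reliability of the trend.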
Another internalist argument has given externalists more trouble... In the first story, you use the certificate to make a decision, and it gets you what you wanted. In the unhappy version, you use the certificate to make a decision, exactly as before, but it goes wrong due to no fault of your own... If you made the right call in the happy case, then you made the right call in the unhappy case. Externalists have struggled to explain why my beliefs in the good and bad cases seem to be on equal footing.
Which recalls the famous Picard quote: "It is possible to make no mistakes and still lose. That is not a weakness. That is life." You can correctly evaluate data, but if the data itself is flawed, so too will be your conclusion. This would seem to be equally problematic for the "internalist" view, because ultimately all external data is evaluated and interpreted internally. You can't form a conclusion from pure reflection because you'd have nothing to reflect on.
So would knowledge count as knowledge if it were later disproved? I guess not. All we can decide on is the best method of evaluating information, not the Absolute nature of reality itself. Every finding, in the strictest sense, is provisional, even if many are so firmly established that questioning them would be insane. You can evaluate information correctly and still reach a wrong conclusion - there is only better and worse, not true or false, which are really linguistic conveniences rather than being Really True.
Ultimately, the only way to verify anything is with more data. There is no system of knowledge in which you're constantly and consistently deceived yet can magically reason your way to the truth. Descartes couldn't possibly overcome his evil demon through thought alone - he would need to peek behind the curtain, so to speak.
Likewise, if you want expert-level knowledge for yourself, the only surefire way is to actually become an expert. Most of the time, most of us have to make do with something lesser, placing trust in those who seem somehow competent and trustworthy. What we can all do, however, is a minimum level of investigation, checking whether the basic things that distant experts are saying are verifiable and sensible. If you can't manage to become an expert, or even an amateur, at least do some basic checking besides evaluating personality and character. At least try to do as much as you can. If you don't do this - if you place your faith in an "expert" purely on the basis that you like what they're saying, or even worse because you think they're a nice person with good hair - then you can still have an opinion, but it risks being worse than merely wrong. In all probability, it won't even be valid at all.