I recently got into an argument with a philosopher about how low-intelligence people can trust experts. Here is an example. Say I don’t understand biology or chemistry. How, then, do I know my doctor is doing the right thing? You could say, “doesn’t the doctor have a medical degree from a college where they teach science?” Sure, but how do you know the doctor’s teachers actually know science? You’re stuck: if you are too dumb to know X, you’re too dumb to know whether the system that certifies X is doing a good job.
Here, I’ll mention a few possible solutions and comment on them. First, you could look at outcomes that you do understand. For example, if my friend goes to the doctor and his pain is gone, then maybe the doctor can be trusted. This is a limited solution at best: it only works when the expert’s action is correlated with something you can easily observe, that is, when the “solution” is itself fairly easy to understand.
But many experts deal in really hard problems. For example, stupid people can’t understand the problems of quantum mechanics, much less their solutions. So “outcome observation” is limited at best. You might also mistake correlation for causation, which is another serious problem with outcome observation.
A second possibility is simplified demonstration: an expert can demonstrate high competence on a subset of problems that stupid people do understand. For example, a high school calculus student may not understand advanced math. But if someone walks in and solves all the hard problems in the calculus book quickly and without error, that person might be an expert.
This solution has flaws as stated. Sure, maybe the person is amazing at solving the easy problems X1 that you do understand, but how does that get you to trusting them on the harder problems X2? It doesn’t.
If you believe that skill at X1 correlates with skill at X2, you do get a weaker, Bayesian solution to the stupid-expert problem. Skill at X1 increases your belief that the alleged expert might know something about problems X2 that are a little harder than you can handle. You aren’t certain, but demonstrated skill increases confidence.
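To make the Bayesian reading concrete, here is a minimal sketch. The specific numbers are made up for illustration: you hold a prior that the alleged expert is genuinely skilled at X2, then update it after watching them ace a demonstration on X1 that you can verify yourself.

```python
def posterior_skilled(prior, p_demo_if_skilled, p_demo_if_not):
    """Bayes' rule: probability the person is skilled at X2,
    given that they aced the X1 demonstration."""
    numerator = p_demo_if_skilled * prior
    denominator = numerator + p_demo_if_not * (1 - prior)
    return numerator / denominator

# Made-up numbers: a 20% prior, and acing the demo is much more
# likely for a genuine X2 expert than for a pretender.
prior = 0.20
updated = posterior_skilled(prior, p_demo_if_skilled=0.95, p_demo_if_not=0.30)
print(f"prior: {prior:.2f}, after demonstration: {updated:.2f}")
# prior: 0.20, after demonstration: 0.44
```

Which is exactly the point in the paragraph above: the demonstration doesn’t make you certain, it just moves a 20% hunch up to roughly 44%.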
The Bayesian version of simplified demonstration then implies that stupid people should trust people on issues just beyond their own ability level. This weaker approach has problems as well. How can stupid people distinguish expertise claims that vary across different ability levels? For example, how can a high school student tell the difference between someone who majored in math in college and a Fields medalist?
I suppose you could say the college math major and the Fields medalist are both experts compared to the high school student, and there is truth in that. It also makes practical sense. For example, as far as I am concerned, the registered nurse, the family doctor, and the Nobel prize winner in medicine are all equal experts in treating my simple ear infections.
Perhaps the best we can do is live in webs of weak confidence in experts. If I can only understand X1, that allows me some confidence in people who can do X2. If experts in X2 say there are even smarter people who can do X3, I may have confidence in those people too, provided the X2 experts are good faith actors. In other words, in a world of honest people, I can have some confidence in a chain where 1. I know X1 and have non-zero confidence that some people super skilled at X1 may actually be skilled at the related X2 problems I don’t get, 2. people who know XN can judge whether some people are actual XN+1 experts, and 3. people are good faith actors who don’t lie about who they think is an expert. And isn’t this how real world science communities operate, with tiers of experts and good faith?
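One simple way to model such a chain (this multiplicative model is my own gloss, not something the argument commits to): if each link in the chain carries some probability of being right, and the links are treated as independent, then my confidence in the expert at the far end is roughly the product of the link confidences. That is why the result is a web of weak confidence rather than certainty.

```python
import math

def chain_confidence(link_confidences):
    """Confidence that the expert at the end of the chain is genuine,
    assuming each endorsement link is independent (a strong assumption)."""
    return math.prod(link_confidences)

# Made-up numbers: I trust the X1->X2 link at 0.8, and each honest
# tier endorses the next tier at 0.8 as well.
links = [0.8, 0.8, 0.8]
print(f"confidence in the expert three tiers up: {chain_confidence(links):.3f}")
# confidence in the expert three tiers up: 0.512
```

Note how confidence decays with every added tier; the good-faith premise (point 3 above) is what keeps each individual link's confidence high enough for the chain to be worth anything.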
Bottom line: Not being stupid is hard work.
+++++
Buy these books!
Grad Skool Rulz - cheap ($5) advice manual for grad students
Obama and the antiwar movement