Very important questions to consider. We must rely on experts, because it's physically impossible to become an expert in everything. Therefore, the ability to judge who the experts are is probably the most important skill in critical thinking. Though I don't think the task is so hopelessly difficult in practice.

One thing to note: just because we can't reach certainty in our judgement doesn't mean that we can't find our experts effectively. A "probabilistic classifier" that is biased towards eliminating false positives will do the job just fine. A public intellectual confidently stated something stupid about a topic I know better? Into the trash he goes! Maybe I lose something of value, but I can't afford to trust him on unfamiliar topics anymore. My experts should know what they know and what they don't know.

In practice, most influential pseudoexperts can be dismissed with relatively little effort. It's not like you need to be a virologist and a public health expert to see the folly of antivaxxers. The debates that are truly difficult for a non-specialist to adjudicate are very specific and rarely concern issues of interest to laypeople. A skilled critical thinker will usually find some dead giveaway that a purported expert is in fact a pretender, at least for issues that really matter.

What's more, a general understanding of the scientific process helps to anticipate which results are solid and which might be dubious. Old, well-researched and often-replicated results are to be trusted. Anything on the cutting edge is up for debate, even for the experts. If a result is new, contrary to "established knowledge", offers radically new interpretations of facts, shows signs of spurious methodology or possible p-hacking, is popular in the media outside expert circles, or is interdisciplinary and not validated by experts in all the relevant domains -- those are all factors that should make us wary. Science from decades ago that made it into textbooks is certainly right on the facts, even if those facts might need to be reinterpreted in a future paradigm shift.

Finally, yes, non-experts distort the views of experts all the time. More often than not, when you read an article about some scientific discovery and go to the source, it turns out the source says something else. Usually it's the same claim but with plenty of restrictions and caveats; sometimes it's something completely different.
I completely agree with you on the importance of relying on experts! However, I'm not confident in our ability to identify those who deserve our trust. Your general rules have some potential, but I can see clear ways they may fail dangerously.
I believe there's a significant issue at hand. Let's take a moment to consider those who oppose vaccinations. Have you read any expert opinions from the anti-vaccination camp? I personally find that they present robust arguments and data to support their stance. They also provide a compelling explanation as to why their viewpoint isn't mainstream, highlighting the financial incentives that exist at every level (medical education, scholarships, research, development, production, deployment, promotion, governance, and the practice of medicine itself) which encourage belief in the current vaccination paradigm.

There are two key points to consider here:

Firstly, they can demonstrate that, statistically, nearly all disease symptoms vanished when improvements in nutrition and hygiene were made, long before vaccines were introduced in those areas. They argue that the decline in disease following vaccination is based on disease diagnosis, rather than an actual reduction in symptoms within the population.

Secondly, they point out the existence of verifiable structures that instill belief in the effectiveness of vaccines from an early age, and later incentivize this belief through profit. Given these points, I believe that some anti-vaccination experts can make a persuasive argument questioning the reliability of expert consensus.
@luszczi: I suppose Kane doesn't find the omission I'm about to discuss especially important, given that he loved your comment, but I am really disappointed that in a post of hundreds of words you don't even _mention_ the idea of the quantity of experts as a weight for belief, despite how relevant (probably underpinning) it is to the specific topics you delved into.

If someone says something "stupid" about something which you know "better," all we need to do is introduce the case in which a large number of experts, instead of some small minority of them, are saying the "stupid" thing, and now you no longer know if it's "stupid" or not. Would you throw all of those experts into the trash? Note that you yourself being an expert in the field certainly doesn't make the issue clear enough, given its presumed complexity, to be immune to the weighting question (for example, "to what extent should I alter my view based on a large quantity of other experts disagreeing or agreeing with me?").

I think the problem is that you seem to want to act as if there is some great way to detect something "stupid" that doesn't depend on a meta-analysis of expertise (perhaps a logical fallacy they ostensibly committed that their expertise fails to disguise), yet much of the point of expert-trusting epistemology is to tackle situations in which we _don't_ have such a way of detecting that. If we could straightforwardly see that something is wrong, it would just be a normal question, rather than one that requires appealing to experts with specialized knowledge.

It might well be wondered, then, whether what you are labeling as obviously stupid is labeled that way only because it is a minority expert view, and whether, had it been a majority expert view, the seemingly illogical strands you believe you detect would seem logical. For example, if you think it makes fallacious points x, y, z now, the concern is that you might convince yourself that x, y, z are actually clever and logical if a majority, rather than a minority, of experts believed them. The strands add nothing epistemically if they have no ability to influence your belief; they can then only passively take on the labels you put upon them, based solely on whether they belong to a belief you are trusting because of expert-consensus belief or rejecting because of expert-consensus disbelief.

In any event, it is simply indispensable to consider the quantities/distributions of experts holding various beliefs in the field(s) under discussion, assuming you use the appeal to expertise to any appreciable extent in forming beliefs. That goes for pointing out easy ways of discrediting experts: if you are going to say there are easy ways of discrediting experts, you need to reconcile this qualitative consideration with the quantitative one. To that end, you could adjust the weighting to favor one more or less than you previously did, but this should be done with deliberate, rational consideration. Maybe, in so doing, you will weight qualitative features highly enough to find notable instances in which you can confidently say that even a majority of experts are going astray.
Alternatively, going in the other direction, you might question your apparent qualitative weights and chalk up their appearance of merit to the "exceptional wrong expert" most likely being wrong simply because a majority of presumably qualified experts think otherwise, without your necessarily understanding the nuts and bolts of _why_ the experts find the view flawed, only (to some level of plausibility) _that_ it is flawed.

There are many interesting epistemological issues that come up when thinking about distributions of experts across the beliefs in their field(s) of expertise. For example, might there be some exceptionally smart expert, smarter than the rest, who holds a view the other experts are unable to appreciate due to their intellectual inferiority? Perhaps the experts themselves sit on some kind of bell curve. Maybe the bottom 80% of experts have an IQ of 130 or lower (you can substitute IQ for whatever you deem more relevant to intellectual competence without reducing the example's illustrative power), while some 2% of experts have an IQ of 160 or higher. The 2% might be right but unable to convince most of the other experts, because the experts at 130 or below all find their own views more intuitive and plausible within their peer group, and see how much they outnumber the 2% who find the opposite view more attractive. It's unclear who is being rational here.

Perhaps the experts with an IQ of 160 or higher should change their viewpoint based on the social-epistemic evidence against them. But if that kind of thing happens too often, no one will ever change their views based on new information: the stagnated position can be used to defeat further new positions, which further stagnates the position so that it can defeat new positions even more easily, in a positive feedback loop. We might then wonder how much of a testament to the truth of something it really is that a position which is, practically by design, extremely stagnant is impervious to any dissenting views (even those of experts, so long as only a minority at a time endorse them).

There is much more that could be said about this, but given how ubiquitous the reliance on the quantity/distribution of expert belief is in this topic, it ought to be carefully analyzed, or at the very least not artificially isolated from the topic in favor of an intellectually unreflective, unreconciled statement about being able to detect quacks.
@@derekg5563 You're missing my point, which is about experts venturing out of their ballpark, not about minority expert opinions. If, talking about something they have no expertise in, they make a confident claim that any relevant expert would typically judge as idiotic, that's a sign of flawed metacognition and/or dishonesty. An example of what I meant (to give a very clear one) might be Jordan Peterson and his sophomoric claims about global warming. Having heard that, I will not take his word on anything related to Jungian psychoanalysis (his expertise, but not mine), because I have no trust in this particular expert.
I'm giving a talk this weekend at a climate conference about knowledge, certainty and how we form beliefs and this summary of social epistemology is super helpful! Thanks Kane B, will be giving you a hearty shoutout
8:10 In my experience, on all the issues where the public disagrees with experts in any reasonable numbers, the experts are also disagreeing with one another.
The basis of trusting in expertise is akin to a "golden rule", but for epistemology. I trust experts because I could have gone through the training process myself to become one. I can't become an expert in every field simultaneously, but I could conceivably have become an expert in any single field.

Why do I trust our universities to train people like me into becoming true experts? Because I'm at the beginning of a long process of becoming a (lowly) expert in my own field. How can I trust that the institution I'm studying at isn't lying to me and turning me into a false expert? Because the subject I'm studying is math. My entire job as a student is to check (and learn) the logical reasoning behind every line of every proof of every important theorem of every course in my semester plan. Inevitably, every exam study period, I come to the same obvious conclusion that tens of millions of students before me did: the experts wrote their textbooks and lecture notes without error.

If my professors are teaching me math correctly, then I trust they're teaching physics correctly too. And CS, and biology, and engineering (which is also why I trust our planes and elevators and cars), and so on and so forth. Now, the less rigorous the field (that is, the softer the science), the less certain I am of its correctness, but even in the seminars we STEM students are required to take, I see how political science papers are written, how their studies are conducted, and the data analysis methods they use, and I recognize the rigor.

Imagine there's a fence 10 meters wide. You can easily walk that distance along the fence and see the other side. Now imagine you're on one side of that fence, and your friend is on the other. If he tells you there's a squirrel on his side, even if you can't see through the fence, you're inclined to believe him. You trust his testimony because you could easily verify it by walking 10 meters.

Now, to ensure you're not cheated, it wouldn't hurt to check every once in a while anyway, because otherwise your friend could notice your uncritical belief and lie without consequence. As such, it also helps to put concrete consequences for lying in place. This sounds complicated, but it's the entire basis of the ticketing system for public transportation in Europe (or at least in Zürich). It's an honor system reinforced by the fact that freeloaders can get unlucky, have their ticket checked, and be forced to pay a hefty fine that makes the risk-adjusted calculation not worth it. You're better off just buying the ticket, so that's what people in Zürich do.

Sorry for the long comment.
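As an aside, the risk-adjusted calculation behind that ticketing system is easy to make concrete. A minimal sketch with invented numbers (these are illustrative assumptions, not actual Zürich fares or fines):

```python
# Expected cost of fare-dodging vs. buying a ticket.
# All numbers are illustrative assumptions, not actual Zürich prices.
TICKET = 4.40        # cost of a single ticket (CHF)
FINE = 100.00        # fine if caught without a ticket (CHF)
P_INSPECTION = 0.10  # assumed probability of a ticket check per ride

expected_cost_dodging = P_INSPECTION * FINE  # 10.00 CHF per ride
print(f"buy ticket: {TICKET:.2f} CHF, dodge: {expected_cost_dodging:.2f} CHF expected")

# Dodging only pays off if P_INSPECTION < TICKET / FINE (here, below 4.4%),
# so a hefty fine plus occasional checks keeps the honor system stable.
```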
Math is like the only field with a good level of immunity to justifiable distrust. Any in-depth research I do into other fields has made me trust the experts of every other field less than when I started, to varying degrees depending on the field.
@@tyrjilvincef9507 Sure, I agree, that's one of the reasons why I chose to study math, but I think that's more of a criticism of those less rigorous fields rather than the experts who work in them. To my eye, the experts are trying their best. They make mistakes, they have egos, they express emotion, they can have ulterior motives, they're flawed humans, etc. But I'm no different, so I trust them; perhaps with vastly less certainty than I would trust the proof of an important theorem in algebra, but I trust them, nonetheless.
1) I don't trust math experts. I even think that they are less reliable than some linguists.
1.1) It is pretty hard to constantly lie about expertise in a language which has millions of users.
2) I used to hear a lot of esoteric bullshit about Gödel's theorem from people with degrees.
2.1) For instance, I have heard that it somehow proves that we cannot reduce math to logic.
2.2) The problem with that claim is in defining what it means to "reduce one discipline to another".
2.3) Mathematicians often attach sophisticated definitions to concepts that have different intuitions in common language.
2.4) I am very suspicious of every mathematician's claim until I understand every formal definition they are using in it.
3) Some mathematicians with degrees also love to create scholastic formal higher-order-logic arguments for God: books.google.ru/books?hl=ru&lr=&id=ZQh8QJOQdOQC&oi=fnd&pg=PP1&dq=info:d7HHYpTBngAJ:scholar.google.com/&ots=901ndXj0IR&sig=PnFBwv53e4dF02JlXlthkLtvUBA&redir_esc=y#v=onepage&q&f=false
3.1) These are full of unjustified Platonic axioms.
4) I don't trust mathematical assessments of probability.
4.1) They can arbitrarily add axioms that assume some atomic probabilities, from which they will derive the result.
4.2) If I want to know a probability, I go to a prediction market. I don't care what people there use, astrology or sophisticated models; as long as they bet on their estimates, they are accountable experts.
5) Mathematicians love claims like "conjunction is the same thing as the Cartesian product", where they try to look at the same concept from a different angle and highlight similarities without formal justification.
6) Mathematicians rarely do real formal proofs.
6.1) According to us.metamath.org/mpeuni/mmset.html#trivia, 2+2=4 requires 26,323 proof steps. Mathematicians usually skip them.
6.2) They also love to use higher-order logics and type theories in proofs, which are not as intuitive to a non-specialist as Hilbert-style proofs.
7) If we assume different axioms, we will get different results. In Quine's New Foundations, Cantor's theorem is false; in ZFC, it is true.
@@TheLogvasStudio You don't trust mathematicians on what, exactly? Proving 2+2=4 requires so many steps only if you *try* to start with the smallest sensible set of axioms. Those steps aren't just "skipped"; they're usually duplicate chains of reasoning from different instantiations of the axioms (or previously proven theorems). You can use a semi-formal system to rigorously prove statements by omitting all the logically valid chains of reasoning: you simply schematize all the tautologies of first-order logic and integrate them as axioms of your formal theory. Gödel's incompleteness theorem states that in a consistent mathematical theory in which you can do basic arithmetic, there exist statements which are true but not derivable from the axioms. I would recommend looking through the relevant sections of the introduction to Halbeisen's "Axiomatic Set Theory" for an alternative perspective.
@@TheLogvasStudio As a math student with a special interest in logic, I'll say:

1) I'm assuming this otherwise unjustified point is justified by your other points.

2) Gödel's incompleteness theorem is precise and provable. Most likely the "people with degrees" you heard from were watering it down into vocabulary that is hopefully understandable to you. But as with most claims in math, this watering-down process is subject to omission, ambiguity, and bias. (2.1) is another key example of this. The reason for that claim, and the gist of Gödel's incompleteness theorem, is that you can encode formal proofs as natural numbers; and since you can also have proofs *about* natural numbers, you can formally prove things about what you can formally prove. By considering a self-referential statement (analogous to a liar sentence: "This statement cannot be proven"), we see that there are (mathematically) true statements that (logically) cannot be proven. This is the disconnect between logic and math; in other words, between what is true (as statements about mathematical objects like the natural numbers) and what is provable.

As a separate point that might be more relevant to your experience: many graduates have gotten comfortable with the rigour of math and converse with and about notions that are abstracted far beyond the nitty-gritty rigour, in ways that may reflect their personal perspectives and goals rather than the objective math. They may use this same overarching language with you, so it's understandable that they seem to live in a different world from you. All the claims they make should be translatable back into rigorous steps if it ever comes down to it.

3) I agree with your analysis (3.1). I think what's going on here is that the author(s) are mixing math with philosophy and religion. The math (i.e. the inference of further facts from the axioms) is sound. (Well, it should be, anyway.) It's the axioms themselves that are philosophically/religiously justified, not mathematically.

4) My answer to this point is similar to my answer to (7).

5) That's probably some people being too edgy with their math knowledge and drawing more similarities between mathematical ideas than is "necessary" (in quotes because that's a subjective judgment). There CAN be mathematical contexts where these things are "the same" (most likely one is defined as the other, as a way to implement one mathematical system in another), but only in a certain context.

6) I think this is the point you have the most misconceptions about. A simple analogy: "formal proofs" are like programming in assembly language. They are overly tedious, not human-friendly, and not necessary when we have higher-level languages like Java and Python available. Higher-order logic (second-order logic in particular) is, I feel, much closer to ordinary mathematics than most first-order systems.

6.2) I think you're dreadfully wrong that Hilbert-style (first-order) proofs are more intuitive than higher-order logic or type theory. If humans needed to perform 26,323 proof steps to conclude that 2+2=4, we would never have gotten past the stone age. Also, the number of proof steps depends heavily on the formal system you're using. (I assume you know that the notion of "formal proof" is relative to a formal system, and that formal system can be specified whichever way.)
The huge number of proof steps comes from starting with a formal system that does not natively define and axiomatize arithmetic, instead starting with mere sets (about which it first has to prove theorems).

7) Yes, that much is very plain to see; math wouldn't be very useful if it couldn't even prove (or could even disprove, if you're feeling wacky enough) basic properties of the natural numbers like associativity and commutativity. The choice of axioms isn't mathematically justified per se, but rather motivated by certain mathematical and perhaps philosophical goals. Imo one goal is just tradition, and another is the desire for a mathematical universe (the von Neumann universe, perhaps) that can implement all other mathematical systems and express everything mathematicians want to express. There is a certain appeal in making the mathematical/logical system as simple as possible (as in accepting the axioms that align most with intuition) yet as expressive as possible.

In the end I empathise with you, because I have very negative views about philosophy based on the bits and pieces I can glean from random readings/videos as well. I think you've been a victim of pop math gone wrong... or at least, there are misunderstandings that have to be clarified before you can grapple with advanced foundational questions about math. Math is very similar to philosophy in a lot of ways (they are perhaps the only armchair disciplines in the whole world)... it's just that I feel philosophy never actually settles down to definitions that are both consistent/objective and have deeply examinable/analysable behaviour the way math does, and so I still can't see the sense in it.
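On the claim that the step count depends on the formal system: in a system whose definitions natively include arithmetic, the same fact is nearly free. A minimal illustration in Lean 4 (the Metamath figure cited above comes from deriving arithmetic inside set theory first):

```lean
-- In Lean 4, numerals and addition on Nat are defined computationally,
-- so 2 + 2 = 4 holds by definitional unfolding: the user-level proof is
-- a single term, not a 26,323-step derivation from set-theoretic axioms.
example : 2 + 2 = 4 := rfl
```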
But are beliefs really necessary for praxis? Instead of believing that anthropogenic climate change is true or empirically useful, could I not simply engage in climate activism because I desire to do so? What would beliefs add to action?
I think the smart approach is to do a little bit of your own critical thinking and a little bit of trusting experts, depending on context, such as the immediacy of the decision and your own competence.
Naomi Oreskes's book "Why Trust Science?" discusses the topic and brings up some historical examples. It's pretty nice; there are also some talks on YouTube.
Very instructive and helpful video... the best on the topic. Many ideas I have been thinking about, you articulate very precisely. There's also another aspect: even experts themselves often don't know how they are right or what grounds their conclusions. It's an almost mechanical, subconscious kind of thinking that they acquire through experience, and it becomes very reliable with time.
You can also look at it in terms of errors, with a wager. In the case that climate change isn't happening but we believe it to be, acting upon it will not necessarily be as harmful as if it is happening and we believe it not to be. A type II error is far more harmful in this case than a type I.
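To make the wager concrete, here's a minimal expected-loss sketch. Every number in it (the prior and both costs) is an invented assumption, purely for illustration:

```python
# Pascal-style wager on climate action, framed as expected loss.
# All costs and the probability below are illustrative assumptions.
p_change = 0.5       # neutral prior that climate change is happening
cost_type1 = 1.0     # acting when it isn't happening (wasted mitigation)
cost_type2 = 20.0    # not acting when it is happening (unchecked damage)

loss_if_act = (1 - p_change) * cost_type1   # 0.5
loss_if_ignore = p_change * cost_type2      # 10.0
print(f"act: {loss_if_act}, ignore: {loss_if_ignore}")
# As long as cost_type2 dwarfs cost_type1, acting minimizes expected loss
# even under substantial uncertainty about whether the change is real.
```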
In those fields that I've had the time to study professionally, I've come to realize how wrong the experts I used to listen to were. At this point in my life I give the advice of experts in other fields a value slightly above a layperson's, but not more. I've been burned too many times.
What if we assess the probability of reliability by a track record: the relatively high rate of successful results produced by scientists through the training provided by institutions (without necessarily assessing 'success' by scanning through the papers themselves, but rather through practical applications, like engineering and medical procedures, to which the scientists have contributed)? There are fewer institutions to decide between than experts. This seems to me a skillful way of going about the problem, even though it is by no means perfect.
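If one wanted to make that precise, Laplace's rule of succession is one textbook way to turn a track record into a reliability estimate. A minimal sketch; the counts are invented, and actually tallying an institution's "successes" is the hard part the comment acknowledges:

```python
def reliability_estimate(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability that the
    next result is a success, given a track record of past trials."""
    return (successes + 1) / (trials + 2)

# Illustrative counts only: e.g. engineering/medical applications that
# worked, out of those attempted on the basis of an institution's research.
print(reliability_estimate(95, 100))  # ~0.94
print(reliability_estimate(3, 10))    # ~0.33
```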
@Kane B my BELOVED!! ty for the video, really interesting so far! Btw I didn't believe in stance-independent evaluative facts until I considered the statement "Kane is awesome"... now I'm a realist :.( Anyway, would love to see more debates and Q&A stuff! Hope you have a good day (tips fedora)
And when it comes to bias and deceit, you must of course also consider that scientists might very well have their own biases. If you consider that the average scientist is highly educated, and at least wealthy enough to afford that education, you can already see potential biases: them (subconsciously) favoring research, hypotheses, and conclusions that would favor people like them. Alternatively, scientists can favor NOT doing research that could get them into trouble. Generally speaking, I'm still very much pro expert consensus. But it's important to learn why people who are against expert consensus came to the conclusions they did.
I would love to hear your opinion regarding the application of Bayesian inference to the credence we should attribute to experts. Also, how does this fit into the need to make decisions based on what experts opine? I.e., can't we add an option between the ones you outlined, where we combine our own direct data with reports of expert opinion to arrive at an estimate of the likelihood of the proposition being true?
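For what it's worth, a minimal sketch of the kind of combination described here: treat the expert's report as one more piece of evidence and update by Bayes' theorem. The prior and both likelihoods below are invented for illustration:

```python
# Combine a prior from my own direct data with an expert's report,
# treating the report as evidence with an assumed reliability.
prior = 0.6              # my credence in P from my own data alone
p_report_if_true = 0.9   # assumed chance an expert affirms P when P is true
p_report_if_false = 0.2  # assumed chance an expert affirms P when P is false

# Bayes' theorem, after hearing the expert affirm P:
posterior = (p_report_if_true * prior) / (
    p_report_if_true * prior + p_report_if_false * (1 - prior)
)
print(posterior)  # ~0.87: own data and expert testimony combined
```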
Any video of yours that I steel myself to watch, I leave with the conclusion "Well, bloody hell - no one can ever really know that they know anything, can they? What a mess." I'm never sure whether to find it liberating or depressing, or both. I feel depressed, I know that much (or do I? Oh no...) My instinct is often to retreat to something like "I suppose we can all only do what we can, and that will have to be enough". But it doesn't really cut the mustard emotionally.
I tend to do this because my knowledge is limited and it's impractical to become highly educated on every subject. It's pragmatic. Is this problematic?
Great video, but I'd contend that the agnostic isn't quite the same as the skeptic, imho. The agnostic settles into indecision, but the skeptic keeps prodding the new information for holes.
"I literally don't care about facts or science or books beause I'm based. I believe what I believe because it sounds good and because its based and red pilled." -BG Kumbi, expert epistemologist
Yet the people would prefer to have them every 3 months. Are you not a proponent of democracy sir???? Speaking of which, that could be an interesting video topic
Think about a machine that has all the knowledge humanity has and answers your questions 90% correctly, but you don't know how this machine finds its answers. So you ask it a question like "is climate change occurring", and it answers, but you don't like its answer because you have serious doubts about it. You do some research, and even after this research you still think the machine's answer is false. Would you believe the machine is right, or would you say this answer is just one of the 10% of false answers?

Well, my answer is that I should trust my own ideas, because I know how I found my answer but I don't know how the machine found its answer. So although, according to another person, the machine is more trustworthy, it is not for me, because I can follow every single step that I made to reach my answer, but I cannot follow the machine's steps.

So this discussion is not between a non-expert and an expert; it is between me (the only mind that I can reach in the entire universe) and a person who has a high probability of telling the truth.
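That intuition can be stress-tested with a quick Bayesian sketch, under simplifying assumptions added here: the question is binary, errors are independent, and r stands for how often my own research is right:

```python
# If the machine is right 90% of the time and my research disagrees with it,
# how likely is the machine still right? (Simplifying assumptions: binary
# question, independent errors, r = assumed accuracy of my own research.)
def p_machine_right_given_disagreement(r: float, machine_acc: float = 0.9) -> float:
    machine_right = machine_acc * (1 - r)   # machine right, I'm wrong
    i_am_right = r * (1 - machine_acc)      # I'm right, machine wrong
    return machine_right / (machine_right + i_am_right)

print(p_machine_right_given_disagreement(0.7))  # ~0.79: favour the machine
print(p_machine_right_given_disagreement(0.9))  # 0.5: a genuine toss-up
```

On these assumptions, unless my own research matches the machine's 90% hit rate, the disagreement itself is evidence against me; though the commenter's deeper point survives, since the calculation says what to believe, not why.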
I state the sources in the video -- e.g. Goldman's "Experts: Which ones should you trust?", Huemer's "Is critical thinking epistemically responsible?", Sovacool's "Exploring scientific misconduct". Is there a specific part you want further sources for?
@@KaneB Thanks Kane, I just meant that I wanted links in the description so I don't have to scroll through 45 minutes to find sources. Keep up the good work, you're my free philosophy teacher.
Another way to view the theory of expert testimony is through organisational interdependence. I tend to trust the methodology of a current professor/instructor in a field (say, the mechanics of deep learning in the neural networks behind generative pre-trained transformers) more than a professor who has left the field and is working in the corporate sector, like Google with YouTube as its subsidiary. So organisational expertise as a social categorisation is intrinsic to self-categorisation within a system of belief, which turns out to be a political community that tends to bias judgement towards group skepticism. Not experts as a bandwagon, but the sort of expert within a field with whom I share an eschatological sense of place: say, among theologians, not the one who advocates that the world came into existence 5,000 years ago but the one who advocates cyclic time; or not agreeing with the particle physicists who claim wave function collapse but sensing an affiliation with many-worlders and decoherence theorists; and so on, as in the cases of climate change and AI existential threat being real. In many fields there is controversy, as in AI, and I might end up holding off on an opinion until I sense I have a grasp of the core concepts to go with a political community of thought. The other alternative is to have a bumper sticker: 'I fish and I vote!'
With respect to climate change, one interesting thing is that it's an issue relevant to many different fields, and many respectable, high-status, qualified experts in one field can attract negative attention from experts and lay people in other fields. Examples might include Richard Tol, William Nordhaus, Sam Fankhauser and others, whose work has attracted lots of negative attention from those more alarmist than them, despite the fact that they are clearly leading experts in the field of estimating the economic impacts of climate change, the social cost of carbon, etc. Nordhaus even has a Nobel for his work integrating climate change into long-run macroeconomic analysis, yet he seems far from immune to both low-quality and high-quality criticism.
ahh yes, william "Even a 6°C increase in global temperature would reduce GDP by just 8.5%, because industries that account for 87% of GDP, are undertaken in carefully controlled environments that will not be directly affected by climate change" nordhaus. atrociously idiotic assertion based on biophysically delusional economic modelling.
@@real_pattern That's exactly what it means to be an expert in a certain field these days. You are completely cleared to stick solely within the models approved by your profession, even if they completely contradict the achievements of dozens of other fields that show those models to be utterly absurd and completely delusional.
In my life I've come to understand that I don't really give a fuck whether it's an expert or not. I always bounce new data off of my own experience, understanding, knowledge, etc., and if I don't agree, a white coat writing or speaking words won't change it. I can be pretty unmoving unless faced with something so overwhelming and undeniable that I have to acknowledge it.
My view here is that my own trust of scientific experts is irrational. There isn't any non-circular reason for me to trust them. It clearly echoes the old problem of the criterion and has all the same difficulties. One possibility is appealing to the success of science: I can trust scientific experts because they have succeeded so far, with all their amazing technological gizmos. I think that's probably what has most of us trusting scientists: the apparent effectiveness. It isn't clear, though, that the apparent success of niche, technologically relevant areas of science should license wholesale trust of everyone who gets called a "scientist". That strikes me as a very shaky inference, with a lot of faith needed to shore it up. It isn't as if all scientists use one successful method; they use legions of very different methods, so how can the apparent-success argument generalise across all of them? I think our culture has a sort of blind faith in "scientists" these days.
After fainting suddenly a few times, I went to the doctor. My heart rate was around 46-50 beats per minute. The cardiologist told me that I was in perfect health and recommended that I see a psychologist. Let me ask you: if I were a fool who believed in "experts," I'd have done what this doctor told me. Believing is one thing; evaluating the opinion of an expert is the attitude of a rational person. Take, for example, the crisis in psychology due to the fact that a minimum of 35% of its research is fraudulent!!! More than that: when I was young, the experts published plenty of documents saying that by the year 2020 this place would be under water. Well, I am still here! Remember, during the Middle Ages the Church was the source of experts, as the media is now. Just a point: you are using a sophism, "should I believe experts or crazy people like creationists or flat-earthers?" Here is an example: "Should you, people, believe in Stalin, who wants order and prosperity for us, or in those drug addicts who want freedom, unrestricted freedom to use drugs all day long, who want the right to vote for candidates who want democracy in order to allow all people to use drugs all day long and so destroy our country? Read the books of the Soviet Academy of Sciences, read our experts! What do they say? Democracy is a weapon of imperialism!"
drinking game idea: Take a shot each time Kane says "expert". The experts recommend not to play this game since you'd likely die by the end of the first section.
We are all earthbound misfits, trapped in the who, what, why, where, when and how. Yet it seems that no matter what we think, question and theorise about, there is more than enough data to substantiate some of the claims. Even the absence of something is proof of an absent something else 😁 Yes, we can measure in an ever-increasing way, but can we really explain?... Are experts really fulfilling that role?
I think the problems of the expert-layman relationship open a very large door for skepticism. Society is structured around a division of labour that separates most people from the production of knowledge. The production of knowledge itself is divided into various sciences, and these again are divided into many specialties. The way knowledge is distributed across the members of society can thus be described as: a whole lot of different expertises on various special matters, most of them practical, some theoretical. The way ignorance is distributed, in turn, is different: almost everybody is ignorant of almost everything, except for one's own expertise.

It is fairly well established that introducing specialisation into the manufacturing process raised output immensely, for which it must be regarded as almost as important as the introduction of machines. But if we transfer this assumption to the production of knowledge, we arrive at a certain kind of conundrum: the knowledge of humanity can only grow at a sufficient rate by humanity splitting into specialties, thus by the individuals comprising humanity increasing their ignorance in proportion to the expertise gained by the relevant experts. (After all, being an expert requires focus on one thing to the detriment of everything else.) While the sum of knowledge might increase as this process continues and deepens, the sum of "unknown knowns" (i.e. knowledge known by almost nobody) grows at the same rate. Put differently: since everybody is a layman about basically everything, the growth of knowledge and the growth of ignorance (of unknown knowns) are identical under specialisation. Collectively speaking, what we know might already outnumber the things we don't know (though I can understand doubt about that assumption). Individually speaking, what we don't know certainly outnumbers what we know to a towering degree. And this situation will only get worse as specialisation deepens.

If this thought of mine were a bit more... visceral in nature, I would simply doubt everything, on the grounds of being, by the inner workings of the production of knowledge under the division of labour, separated from knowing almost everything. And this is true for everybody, even for experts in certain areas of knowledge themselves.

I think the challenges posed by this kind of skepticism are by no means small. For instance, if I am not an expert about a science and I am doubtful what to believe, I might cling to the opinion of expert A or B based on assumptions about how science works and how I should evaluate their expertise given those assumptions. But do I know how science works? I am separated from science, so I don't have direct knowledge of its inner workings. I am again dependent upon experts to tell me how science works, or is supposed to work.
"Why trust experts" probably isn't phrased correctly. Why trust priests? Why trust l ron hubbard? Why trust popular streamers that have made it (inherent 0.1% success rate). From my understanding, there are at minimum philosophical, methodological, and structural differences between those groups mentioned above and 'certain categories of experts'. I don't know how to phrase the question correctly but I think "skin in the game" highlighted an interesting phenomenon. Certain experts die if they fuck up. Other experts are rewarded. Therefore they are not the same kind of thing.
All an expert can do is point to information for you to look at. An expert shouldn't be telling you what to think. They should be able to explain how and why their opinion is valid in a logical manner. If they cannot do this, they either lack the ability to communicate effectively, or they are intellectually dishonest. And nearly all university-educated experts are conditioned, Pavlov-style, to accept ideas they should be questioning.
At the moment, it's the PhD in economics who believes in climate change, while the PhD in atmospheric science with a lot of experience is against it.
Sorry, modern skepticism is not suspension of judgment... it is rather a midway point toward critical thinking: considering the evidence and coming to a provisional assessment, with full conscious acknowledgment that I may be wrong and will revise my belief with every credible piece of new evidence... Using a classical definition of "skepticism" is just committing a black-and-white fallacy that ignores the gray... there are other options.
He used the term in the standard way I see it used in academic philosophy. What you're describing just sounds like reasoning. Philosophers don't tend to use the term "skeptic" like that.
As Aaron said, I'm using the term in the way that's standard in contemporary epistemology. But if for some reason you don't like seeing the term used that way, that's cool. We can just use a different word. It doesn't particularly matter how we label things.
@@KaneB Thank you for replying. It is not really the term I object to, but rather that "there are three strategies" excludes the view of "modern sceptics" while including a skeptical view I think few if any hold. You clearly expound on the issues in the rest of the video... but then fail to give a strategy to deal with them (a strategy which is not one of the three presented, btw). Critical thinking and skepticism, as presented, are not viable strategies. On your list, this leaves only deference, and in practice people select purported experts based on social validation, not technical merit. Once selected, they make these beliefs part of their core identity and resist change at all costs; that is a problem. So the better strategy is to acknowledge that we cannot suspend belief, that we must take action, but that we do so tentatively, without making the belief part of our core identity, remaining open to evaluating alternatives and new evidence whenever they are presented. Maybe "modern skeptics" is not the correct term for this view; I would accept any term... but add it to the three listed.
@@truthseeker2275 Few people are global skeptics, but I think the skeptical approach is pretty common with respect to specific propositions. Most of us do suspend judgement about some of the things we think about. For instance, I have no idea whether there is intelligent life in the Andromeda galaxy; I suspend judgement about this. I'm not sure how your strategy is an alternative to the three presented. Neither critical thinking nor deference need involve becoming certain that P, making P part of one's identity, or resisting change at all costs. A person might provisionally accept P on the basis of critical thinking, or provisionally accept P on the basis of expert testimony. The issue is about the source of one's belief, not the strength of one's belief.
I wish you'd explained what scientific antirealism would add to this topic. I mean, if S is a scientific antirealist, it seems they have a reason to simply reject any expert testimony on any scientific explanation of any scientific issue. Is that an accurate presentation?
It's a good question. The realism/antirealism debate relating to empiricist challenges is tangential to this issue, I think. Take scientific realism versus e.g. van Fraassen's constructive empiricism:

(SR) To accept a scientific theory is to take it to be true in general.

(CE) To accept a scientific theory is to take it to be empirically adequate, i.e. true of the observable phenomena.

A scientific realist and a constructive empiricist might accept the same scientific theories, on the same grounds: that is, they might find the same evidence compelling, or be persuaded by the same experts. Where they differ is on the question of how much this theory tells us about the world.

I could be a realist who is extremely skeptical of expert scientific testimony. Consider a person who rejects the expert consensus in various fields, but who still believes that various theories uncover the facts about reality. Most young-earth creationists, for instance, are realists of this type. By contrast, I could be an antirealist who is extremely deferential towards expert scientific testimony. I might blindly accept whatever scientists tell me, but then favour a philosophical interpretation of their claims in line with constructive empiricism.

By contrast, the realism/antirealism debate as it relates to social constructivism is more closely intertwined with questions about expert deference. Social constructivists tend to say that what counts as evidence, what counts as "proper" scientific methodology, is a matter of social negotiation; there are no independent facts about how some data is to be interpreted or whether it really supports a given hypothesis. Now, it doesn't follow from this that a social constructivist has a reason to reject expert testimony. Indeed, if I take myself to be part of the scientific enterprise, and I take it that I have been inculcated with the same sorts of values as other scientists, then I might say even as a social constructivist that I have reason to accept expert testimony. But it's not obvious that social constructivists can say that alternative approaches to belief-formation are irrational. For people who have been brought up in different circumstances, or for people working in different contexts, perhaps young-earth creationism is perfectly rational. This is often raised as a challenge to social constructivism: that it has no means of showing that modern science is at all privileged over other methods and worldviews.
Whatever conclusion you come to, I'll trust you
🤣
@@mathiasrennochaves3533 Like everything, it improves with practice. You can become an expert at identifying experts. :D
I believe there's a significant issue at hand. Let's take a moment to consider those who oppose vaccinations. Have you read any expert opinions from the anti-vaccination camp? I personally find that they present robust arguments and data to support their stance. They also provide a compelling explanation as to why their viewpoint isn't mainstream, highlighting the financial incentives that exist at every level-from medical education, scholarships, research, development, production, deployment, promotion, governance, to the practice of medicine itself-that encourage belief in the current vaccination paradigm.
There are two key points to consider here:
Firstly, they can demonstrate that, statistically, nearly all disease symptoms vanished when improvements in nutrition and hygiene were made, long before vaccines were introduced in those areas. They argue that the decline in disease following vaccination is based on disease diagnosis, rather than an actual reduction in symptoms within the population.
Secondly, they point out the existence of verifiable structures that instill belief in the effectiveness of vaccines from an early age, and later incentivize this belief through profit. Given these points, I believe that some anti-vaccination experts can make a persuasive argument questioning the reliability of expert consensus.
@luszczi: I suppose Kane doesn't find the following omission you made that I will discuss to be especially important given that he loved your comment, but I am really disappointed that even in a post consisting of hundreds of words, you don't even _mention_ the idea of quantities of experts as weights for belief, despite how relevant, probably underpinning, it is to the specific topics into which you delved.
If someone says something "stupid" about something which you know "better," all we need to do here is introduce the variable that is instantiated in the case that a large number of experts, instead of some small minority of them, are saying the "stupid" thing, and now you no longer know if it's "stupid" or not. Would you throw all of those experts into the trash? Note that you yourself being an expert in the field certainly doesn’t make the issue clear enough, given its presumed complexity, to be immune to the importance of the weighting question (for example, questions like “to what extent should I alter my view based on a large quantity of other experts disagreeing or agreeing with me?”).
I think the problem is that you seem to want to act as if, with these issues, there is this great way to detect something "stupid" that doesn't depend on some kind of meta-analysis of expertise (perhaps a logical fallacy they ostensibly committed that their expertise fails to disguise), yet much of the point of expert-trusting epistemology is to tackle situations in which we _don't_ have such a way of effectively detecting that. If we could straightforwardly see that something is wrong, it would just be a normal question, rather than some epic question that requires appealing to experts who have specialized knowledge.
It then might well be wondered whether what you are labeling as obviously stupid is only labeled as such because it is a minority expert view, and that had it been a majority expert view, those seemingly illogical strands in it that you believe that you detect would seem like logical ones - for example, if you think it makes x y z fallacious points (instantiations of the genus of strands to which I'm referring) now, the concern is that you might convince yourself that x y z are actually really clever and logical if instead a majority, rather than minority, of experts believed in them. The strands don’t add anything epistemically if they have no ability to influence your belief; they can then only passively take on the labels you put upon them solely based on whether they are part of a belief that you are trusting based on expert-consensus belief or rejecting based on expert-consensus disbelief.
In any event, it is simply indispensable to consider quantities/distributions of experts in various beliefs regarding the field(s) under discussion assuming that you use this to any appreciable extent in the formation of beliefs through the appeal to expertise. That goes for pointing out easy ways of discrediting experts. If you are going to say that there are easy ways of discrediting experts, you are going to need to reconcile this qualitative consideration of experts with quantitative ones. To that end, you could toggle with the weighting to favor one more or less than you previously did, but this should be done with deliberate, rational consideration. Maybe, in so doing, you will weight qualitative features highly enough to find notable instances in which you could confidently say that even a majority of experts are going astray. Alternatively, to go in the other direction, you might question your apparent qualitative weights and just chalk up their appearance of qualitative merit to just it being a logical consequence of the content of the "exceptional wrong expert" most likely being wrong simply because a majority of presumably qualified experts think otherwise, without you necessarily understanding the nuts and bolts of _why_ the experts find it to be flawed, only (to some level of plausibility) understanding _that_ it's flawed.
There are many interesting epistemological issues that can come up when thinking about distributions of experts in holding certain beliefs in their field(s) of expertise. For example, might there be some exceptionally smart expert, smarter than the rest of the experts, that holds a view that the other experts are unable to appreciate due to their intellectual inferiority to the greater expert? Perhaps the experts themselves have some kind of bell curve as it were. Maybe the bottom 80% of experts have an IQ of 130 or lower (you can, if you wish, substitute IQ for something you deem more relevant to intellectual competence without reducing this example's illustrative power) but then there is some 2% of experts that have an IQ of 160 or higher, that might be right but can't convince most of the other experts because the ones with an IQ of 130 or lower all happen to, in their peer group, find their own views more intuitive and plausible and see how much they outnumber the 2% of experts that find the opposite view more attractive. It's unclear who is being rational, here.
Perhaps the experts with an IQ of 160 or higher should change their viewpoint based on the social-epistemic evidence against them, but then, if this kind of thing happens too often, no one will ever change their views based on new information, because of course the stagnated position could then be used to defeat further new positions, which then further stagnate the position so that it can defeat further new positions even more easily, to then further stagnate the position and hence its ability to maintain its stagnation in the future due to being in this positive feedback loop, in which case we might wonder about just how much of a testament to the truth of something it would really be that an extremely stagnant position practically by design is impervious to any dissenting views to it (even those of experts, so long as only a minority at a time are endorsing such views).
There is much more that could be said about this, but it seems that, with the reliance on the quantity/distribution of experts in holding certain beliefs in their field(s) of expertise that is so ubiquitous in this topic, it ought to be carefully analyzed, or at the very least, not artificially isolated from the topic in favor of making an intellectually unreflective, unreconciled statement about being able to detect quacks.
@@derekg5563 You're missing my point, which is about experts venturing out of their ballpark, not about minority expert opinions. If, talking about something they have no expertise in, they make a confident claim that any relevant expert would typically judge as idiotic, that's a sign of flawed metacognition and/or dishonesty. An example of what I meant (to give a very clear one) might be Jordan Peterson and his sophomoric claims about global warming. Having heard that, I will not take his word on anything related to Jungian psychoanalysis (his expertise, but not mine), because I have no trust in this particular expert.
I'm giving a talk this weekend at a climate conference about knowledge, certainty and how we form beliefs and this summary of social epistemology is super helpful! Thanks Kane B, will be giving you a hearty shoutout
8:10 at my experience on all the issues where the public disagrees with Experts in any reasonable numbers The Experts are also disagreeing with one another
The basis of trusting in expertise is akin to a "golden rule" but for epistemology.
I trust experts, because I could have gone through the training process myself to become one.
I can't become an expert in every field, simultaneously, but I could (have conceivably) become an expert in any single field.
Why do I trust our universities to train people like me into becoming true experts?
Because I'm in the (beginnings of a long) process of becoming a (lowly) expert in my own field.
How can I trust that the institution I'm studying at isn't lying to me and turning me into a false expert?
Because the subject I'm studying is math.
My entire job as a student is to check (and learn) the logical reasoning behind every line of every proof of every important theorem of every course in my semester plan.
Inevitably, every exam study period, I come to the same obvious conclusion as 10s of millions of students before me did, which is that the experts wrote their textbooks and lecture notes without error.
If my professors are teaching me math correctly, then I trust they're teaching physics correctly, too. And CS, and biology, and engineering (which is also why I trust our planes and elevators and cars), and so on and so forth.
Now, the less rigorous the field (that is, the softer the science), the less certain I am of their correctness, but even in the seminars we STEM students are forced to partake in, I see how political science papers are written, how their studies are conducted, and the data analysis methods they use, and I recognize the rigor.
Imagine there's a fence 10 meters wide. You can easily walk that distance along the fence and see the other side.
Now imagine you're on one side of that fence, and your friend is on the other.
If he tells you there's a squirrel on his side, even if you can't see through the fence, you're inclined to believe him.
You trust his testimony, because you could easily verify it by walking 10 meters.
Now, to ensure that you're not cheated, it wouldn't hurt to check every once in a while, anyway, because otherwise, your friend could notice your uncritical belief and lie without any consequence. As such, it could also help to put concrete consequences for lying in place.
This sounds complicated, but it's the entire basis of the ticketing system for public transportation in Europe (or at least in Zürich).
It's an honors system that's reinforced by the fact that freeloaders could get unlucky and have their ticket controlled, and be forced to pay a hefty fine that makes the risk adjusted calculation not worth it. You're better off just buying the ticket, so that's what people in Zürich do.
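To put rough numbers on that risk-adjusted calculation (every figure below is invented for illustration; the real Zürich fares, fines, and inspection rates differ):
```python
# All numbers made up for illustration; actual fares and fines differ.
ticket_price = 4.40   # hypothetical single-fare price, CHF
fine = 100.00         # hypothetical fine if caught without a ticket, CHF
p_inspected = 0.05    # hypothetical chance a given ride gets inspected

expected_cost_dodging = p_inspected * fine  # 5.00 CHF per ride on average
expected_cost_buying = ticket_price         # 4.40 CHF per ride

# The honor system holds when dodging is the worse bet in expectation:
assert expected_cost_dodging > expected_cost_buying
```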
Sorry for the long comment.
Math is like the only field with a good level of immunity to justifiable distrust. Any in-depth research I do into other fields has made me trust the experts of every other field less than when I started, to varying degrees depending on the field.
@@tyrjilvincef9507 Sure, I agree, that's one of the reasons why I chose to study math, but I think that's more of a criticism of those less rigorous fields rather than the experts who work in them.
To my eye, the experts are trying their best. They make mistakes, they have egos, they express emotion, they can have ulterior motives, they're flawed humans, etc. But I'm no different, so I trust them; perhaps with vastly less certainty than I would trust the proof of an important theorem in algebra, but I trust them, nonetheless.
1) I don't trust math experts. I even think they are less reliable than some linguists.
1.1) It is pretty hard to constantly lie about expertise in a language that has millions of users.
2) I used to hear a lot of esoteric bullshit about Gödel's theorem from people with degrees.
2.1) For instance, I have heard that it somehow proves that we cannot reduce math to logic.
2.2) The problem with that claim is in defining what it means to "reduce one discipline to another".
2.3) Mathematicians often attach sophisticated definitions to concepts that carry different intuitions in common language.
2.4) I am very suspicious of every mathematician's claim until I understand every formal definition they are using in it.
3) Some mathematicians with degrees also love to construct scholastic, formal, higher-order-logic arguments for God:
books.google.ru/books?hl=ru&lr=&id=ZQh8QJOQdOQC&oi=fnd&pg=PP1&dq=info:d7HHYpTBngAJ:scholar.google.com/&ots=901ndXj0IR&sig=PnFBwv53e4dF02JlXlthkLtvUBA&redir_esc=y#v=onepage&q&f=false
3.1) Which are full of unjustified Platonic axioms.
4) I don't trust mathematical assessments of probability.
4.1) They can arbitrarily add axioms that assume certain atomic probabilities, from which they then derive the result.
4.2) If I want to know a probability, I go to a prediction market. I don't care what the people there use, astrology or sophisticated models; as long as they bet on their estimates, they are accountable experts.
5) Mathematicians love claims like "conjunction is the same thing as the Cartesian product", where they look at the same concept from different angles and highlight similarities without doing any formal justification.
6) Mathematicians rarely do real formal proofs.
6.1) According to us.metamath.org/mpeuni/mmset.html#trivia, proving 2+2=4 requires 26,323 proof steps. Mathematicians usually skip them.
6.2) They also love to use higher-order logics and type theories in proofs, which are not as intuitive to non-specialists as Hilbert-style proofs.
7) If we assume different axioms, we get different results: in Quine's New Foundations, Cantor's theorem is false; in ZFC, it is true.
@@TheLogvasStudio You don't trust mathematicians on what, exactly?
Proving 2+2=4 requires so many steps if you *try* to start with the smallest sensible set of axioms. Those steps aren't just "skipped", they're usually duplicate chains of reasoning from different instantiations of the axioms (or previously proven theorems).
You can use a semi-formal system to rigorously prove statements by omitting all the logically true chains of reasoning. You simply schematicize all the tautologies of formal first order logic and integrate them as axioms of your formal theory.
Gödel's Incompleteness Theorem simply states that in a consistent mathematical theory in which you can do basic arithmetic, there exist statements which are true but not logically derivable from the axioms.
I would recommend looking through relevant sections from the introduction in Halbeisen's "Axiomatic Set Theory" for an alternative perspective.
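To see how much the step count depends on the formal system: in a proof assistant like Lean, whose kernel evaluates natural-number arithmetic directly, the same fact is a single line. A sketch for contrast only; this is not what Metamath itself does:
```lean
-- The kernel computes 2 + 2 and 4 down to the same numeral,
-- so reflexivity closes the goal in one step.
example : 2 + 2 = 4 := rfl
```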
@@TheLogvasStudio As a math student with special interest in logic, I'll say:
1) I'm assuming this otherwise unjustified point is justified by your other points.
2) Gödel's incompleteness theorem is precise and provable. Most likely the other "graduated people" you heard from were watering it down to vocabulary that is hopefully understandable to you. But as with most claims in math, this watering-down process is subject to omission, ambiguity, and bias. (2.1) is another key example of this - the reason for this claim, and the gist of Gödel's incompleteness theorem, is that you can encode formal proofs as natural numbers; and since you can also have proofs *about* natural numbers, that means you can formally prove things about what you can formally prove. By considering a self-referential statement (analogous to a liar sentence - "This statement cannot be proven"), we see that there are (mathematically) true statements that (logically) cannot be proven - this is the disconnect between logic and math. In other words, this is the disconnect between what is true (as statements about mathematical objects like the natural numbers) and what is provable.
As a separate point that might be more relevant to your experience, many graduates have gotten comfortable with the rigour of math and are conversing with/about notions that are abstracted far beyond the nitty gritty rigour of math, and may reflect their personal perspectives and goals instead of the objective math. They may use this same overarching language with you, but it's understandable that they seem to be living in a different world from you.
All these claims they make should be able to be translated back to rigorous steps if it ever comes down to it.
3) I agree with your analysis (3.1). I think what's going on here is that the author(s) is/are mixing math with philosophy and religion. The math (i.e. the inferences of further facts from the axioms) is sound. (Well, should be anyway.) It's the axioms themselves that are philosophically/religiously justified, not mathematically.
4) I find my answer to this point similar to that of (7).
5) That's probably some people trying to be too edgy with their math knowledge and drawing more similarities between mathematical ideas than is "necessary" (in quotes because that's a subjective judgment). There CAN be mathematical contexts where these things are "the same" (most likely one is defined as the other as a way to implement one mathematical system in another), but it's only in a certain context.
6) I think this is the point you have the most misimpressions about. A simple analogy I would make is that "formal proofs" are like programming in assembly language - overly tedious, not human-friendly, and not necessary, when we have higher-level languages like Java and Python available. Higher-order logic (second-order logic in particular) is what I feel is much closer to ordinary mathematics than most first-order systems.
6.2) I think you're dreadfully wrong that Hilbert-style (first-order logic) proofs are more intuitive than higher-order logic or type theory. If humans needed to perform 26,323 proof steps to conclude that 2+2=4 then we would never have gotten past the stone age.
Also, the number of proof steps depends heavily on the formal system you're using. (I assume you do know that the notion of "formal proof" is with respect to a formal system, and that formal system can be specified whichever way.) The huge number of proof steps is due to starting with a formal system that does not natively define and give axioms for arithmetic to begin with, instead starting with mere sets (which it has to prove theorems about first).
7) Yes, that much is very plain to see - math wouldn't be very useful if it couldn't even prove (or even can disprove if you're feeling wacky enough) basic properties about the natural numbers like associativity and commutativity. The choice of axioms isn't mathematically justified per se, but rather motivated by certain mathematical and perhaps philosophical goals - imo one goal is just tradition, and another is a desire to have a mathematical universe (the von Neumann universe perhaps) that can implement all other mathematical systems and express everything that mathematicians want to express. There is a certain appeal to make the mathematical/logical system as simple as possible (as in accepting the axioms that would align most with intuition) yet as expressive as possible as well.
In the end I empathise with you, because I have very negative views about philosophy based on whatever bits and pieces I can glean from random readings/videos as well. I think you've been a victim of pop math gone wrong... or at least, there are misunderstandings that have to be clarified before you can grapple with advanced foundational questions about math. Math is very similar to philosophy in a lot of ways (they are perhaps the only armchair disciplines in the whole world)... just that I feel philosophy never actually settles down to definitions that are both consistent/objective and have deeply examinable/analysable behaviour the way math does, and so I still can't see the sense in it.
But are beliefs really necessary for praxis?
Instead of believing that anthropogenic climate change is true or empirically useful, could I not simply engage in climate activism because I desire to do so?
What would beliefs add to the action?
I think the smart approach is to do a little bit of your own critical thinking and a little bit of trusting experts, depending on the context, like immediacy and your own competence.
Naomi Oreskes's book "Why Trust Science?" discusses the topic and brings up some historical examples. It's pretty nice; there are also some talks on YouTube.
Very instructive and helpful video, the best on the topic. Many ideas I have been thinking about, you articulate very precisely.
There's also another aspect: even experts themselves often don't know how they are right, or what the ground of their conclusions is. It's an almost mechanical, subconscious thinking that they acquire through experience, and it becomes very reliable with time.
You can also look at it in terms of errors, with a wager. If climate change isn't happening but we believe it to be, acting upon it will not necessarily be as harmful as if it is happening and we believe it not to be. A type II error is far more harmful in this case than a type I.
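Spelled out as an expected-loss comparison (the notation is my own: p is the probability the change is real, C_a the cost of acting, C_h the harm of not acting when it is real):
```latex
% Acting costs C_a regardless of the truth; not acting risks C_h with
% probability p. Acting is the better bet whenever
\[
  C_a \;<\; p \, C_h \quad\Longleftrightarrow\quad p \;>\; \frac{C_a}{C_h},
\]
% so if C_h dwarfs C_a, even a modest p favours acting: the type II
% error dominates the calculation.
```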
In those fields that I've had the time to study professionally, I've come to realize how wrong the experts I used to listen to were. At this point in my life I give the advice of experts in other fields a value slightly above a layperson's, but not more. I've been burned too many times.
What if we assess the probability of reliability by a track record: the relatively high rate of successful results produced by scientists through the training provided by institutions (assessing "success" not by scanning the papers themselves, but through practical applications, like engineering and medical procedures, to which the scientists have contributed)? There are fewer institutions to decide between than experts. This seems to me a skillful way of going about the problem, even though it is by no means perfect.
@Kane B my BELOVED!! ty for the video, really interesting so far! Btw, I didn't believe in stance-independent evaluative facts until I considered the statement "Kane is awesome"... now I'm a realist :.( Anyway, would love to see more debates and Q&A stuff! Hope you have a good day (tips fedora)
Thanks! Though I'm sorry to hear you turned into a realist, lol
And when it comes to bias and deceit, you must of course also consider that scientists may very well have their own biases. If you consider that the average scientist is highly educated, and at least wealthy enough to afford that education, you can already see potential biases: them (subconsciously) favoring research, hypotheses, and conclusions that would favor people like them. Alternatively, scientists can favor NOT doing research that could get them into trouble.
Speaking generally I'm still very much pro expert consensus. But it's important to learn why people who are anti-expert consensus come to the conclusions they did.
I would love to hear your opinion regarding Bayesian inference and the credence to attribute to experts. Also, how does this fit with the need to make decisions based on what experts opine? I.e., can't we add an option between the ones you described, where we combine our own direct data with reports of expert opinions to arrive at an estimate of the likelihood that the proposition is true?
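For instance (with purely made-up numbers), the combination I have in mind might look like a simple Bayesian update:
```python
# Hypothetical numbers throughout -- a sketch of combining my own data
# with expert reports, not anything stated in the video.
prior = 0.5               # my credence in P from my own direct data
p_endorse_if_true = 0.9   # assumed chance experts endorse P when P is true
p_endorse_if_false = 0.2  # assumed chance they endorse P when P is false

# Bayes' rule: my credence in P after hearing the experts endorse it.
posterior = (p_endorse_if_true * prior) / (
    p_endorse_if_true * prior + p_endorse_if_false * (1 - prior)
)
print(round(posterior, 3))  # 0.818 -- testimony shifts my view without settling it
```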
Any video of yours that I steel myself to watch, I leave with the conclusion "Well, bloody hell - no one can ever really know that they know anything, can they? What a mess."
I'm never sure whether to find it liberating or depressing, or both. I feel depressed, I know that much (or do I? Oh no...)
My instinct is often to retreat to something like "I suppose we can all only do what we can, and that will have to be enough". But it doesn't really cut the mustard emotionally.
I love your simple titles
I tend to do this because my knowledge is limited and it's impractical to become highly educated on every subject. It's pragmatic. Is this problematic?
Great video, but I'd contend that being agnostic isn't quite the same as simply being a skeptic, imho. The agnostic concludes indecision, but the skeptic keeps prodding the new information for holes.
"I literally don't care about facts or science or books beause I'm based. I believe what I believe because it sounds good and because its based and red pilled." -BG Kumbi, expert epistemologist
in my expert testimony this video is .....awesome👍
Cool! When will the next Q&A be?
I would also like to know for I have many questions
There isn't a specific date... I usually do them once every four or five months.
Yet the people would prefer to have them every 3 months. Are you not a proponent of democracy sir???? Speaking of which, that could be an interesting video topic
@@noah5291 You’ll have to wait for the next Q&A to ask him that.
I'm still bothered by the fact that it's Verity and Sydney and not Verity and Falsity.
who is Falsity Newman?
@@KaneB I don't even know who Sydney Newman is
Think about a machine that has all the knowledge that humanity has and answers your questions 90% correctly, but you don't know how this machine finds its answers. So when you ask it a question like "is climate change occurring?" and it answers, but you don't like its answer because you have serious doubts about it, and you do some research, and even after this research you still think the machine's answer is false: would you believe the machine is right, or would you say this answer is just one of the 10% false answers?
Well, my answer is that I should trust my own ideas, because I know how I found my answer but I don't know how the machine found its answer. So although according to another person the machine is more trustworthy, it is not for me, because I can follow every single step that I made to reach my answer, but I cannot follow the machine's steps.
So this discussion is not between a non-expert and an expert; it is between me (the only mind that I can reach in the entire universe) and a person who has a high probability of telling the truth.
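Though if I put hypothetical numbers on it, Bayes' rule shows how sure of my own research I would need to be before overruling the machine:
```python
# Stipulated in the thought experiment: the machine is right 90% of the time.
M = 0.9

def p_im_right_given_disagreement(q, m=M):
    """Chance I'm right when the machine and I disagree, assuming my own
    research is right with probability q and our errors are independent
    (a strong, purely illustrative assumption)."""
    return q * (1 - m) / (q * (1 - m) + (1 - q) * m)

print(round(p_im_right_given_disagreement(0.7), 2))   # 0.21 -- the machine probably wins
print(round(p_im_right_given_disagreement(0.99), 2))  # 0.92 -- only near-certainty overrules it
```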
hey kane love the video, but can you cite the sources mentioned in the video? I intend to do further reading
I state the sources in the video -- e.g. Goldman's "Experts: Which ones should you trust?", Huemer's "Is critical thinking epistemically responsible?", Sovacool's "Exploring scientific misconduct". Is there a specific part you want further sources for?
@@KaneB thanks kane I just mean I wanted links in the description so I don’t have to scroll through 45 minutes to find sources.
Keep up the good work you’re my free philosophy teacher
Another way to view the theory of expert testimony is through organisational interdependence. I tend to trust the methodology of a current professor or instructor in a field (say, the mechanisms of deep learning in the neural networks behind generative pre-trained transformers) more than a professor who has left the field and is working in the corporate sector, like Google with YouTube as its subsidiary. So organisational expertise as a social categorisation is intrinsic to self-categorisation within a system of belief, which turns out to be a political community that tends to bias judgement towards group skepticism. Not experts as a bandwagon, but the sort of expert within a field with whom I share an eschatological sense of place: I won't go with the theologian who says the world came into existence 5,000 years ago, but I'm more inclined towards the theologian who advocates cyclic time; I don't agree with the particle physicists who claim wave-function collapse, but I do sense an affiliation with many-worlders and decoherence theorists, and so on. The same goes for climate change and the AI existential threat being real. In many fields there is controversy, as in AI, and I might end up holding off on an opinion until I sense I have a grasp of the core concepts to go with a political community of thought. The other alternative is to have a bumper sticker: "I fish and I vote!"
Hey Kane. Thanks for all the great educational content. It's almost like you are some kind of philosophy EXPERT. Hmmmmmmmm
I don't have a clue what I'm talking about most of the time lol
Any vid on Sextus Empiricus or Pyrrhonism? Or Nagarjuna?
With respect to climate change, one interesting thing is that it's an issue relevant to many different fields, and many respectable, qualified, high-status experts in one field can attract negative attention from experts and laypeople in other fields. Examples might include Richard Tol, William Nordhaus, Sam Fankhauser and others, whose work has attracted lots of negative attention from those more alarmist than them, despite the fact that they are clearly leading experts in the field of estimating the economic impacts of climate change, the social cost of carbon, etc. Nordhaus even has a Nobel for his work integrating climate change into long-run macroeconomic analysis, yet he seems far from immune to both low-quality and high-quality criticism.
ahh yes, william "Even a 6°C increase in global temperature would reduce GDP by just 8.5%, because industries that account for 87% of GDP, are undertaken in carefully controlled environments that will not be directly affected by climate change" nordhaus.
atrociously idiotic assertion based on biophysically delusional economic modelling.
@@real_pattern Exactly what it means to be an expert in a certain field these days. You are completely cleared to stick solely within the models approved by your profession even if they completely contradict the achievements of dozens of other fields which show those models as utterly absurd and completely delusional.
Very interesting topic
Is it possible to listen to your videos on google podcasts ?
No I don't have that. I'll look into setting it up.
@@KaneB cool thanks :)
Kane b is an expert in epistemology and philosophy of science, so let's trust him on this issue
The proposition I'm most confident about is that you shouldn't trust me about anything.
@@KaneB any expert makes some errors. I think this one is one of your very rare errors. Paradox avoided.
In my life I've come to understand that I don't really give a fuck whether it's an expert or not. I always bounce new data off of my own experience, understanding, knowledge, etc., and if I don't agree, a white coat writing or speaking words won't change it. I can be pretty unmoving unless faced with something so overwhelming and undeniable that I have to acknowledge it.
My view here is that my own trust of scientific experts is irrational. There isn't any non-circular reason for me to trust them. It reminds me clearly of the old problem of the criterion and has all the same difficulties.
One possibility is appealing to the success of science. I can trust scientific experts because they have succeeded so far with all their amazing technological gizmos. I think that's probably what has most of us trusting scientists: the apparent effectiveness. It isn't clear, though, that the apparent success of niche technologically relevant areas of science should license a wholesale trust of everyone who gets called a "scientist". That strikes me as a very shaky inference, with a lot of faith needed to shore it up. It isn't as if all scientists use one successful method; they use legions of very different methods, so how can the apparent-success argument generalise across all of them? I think our culture has a sort of blind faith in "scientists" these days.
X is the unknown, and a spurt is a drip under pressure. So an expert is an unknown drip under pressure.
After fainting suddenly a few times, I went to the doctor. My heart rate was around 46-50 beats per minute. The cardiologist told me that I was in perfect health and recommended that I see a psychologist. Let me ask you: if I were a fool who believed in "experts", would I have done what this doctor told me? Believing is one thing; evaluating the opinion of an expert is the attitude of a rational person. Take, for example, the crisis in psychology due to the fact that a minimum of 35% of its research is fraudulent!!! More than that, when I was young, the experts published a lot of documents saying that by the year 2020 this place would be under water. Well, I am still here! Remember, the Church during the Middle Ages was the source of experts, as the media is now.
Just a point: you are using a sophism: "should I believe experts, or crazy people like creationists or flat-earthers?" Here is an example: "Should you, people, believe in Stalin, who wants order and prosperity for us, or in those drug addicts who want freedom, unrestricted freedom to use drugs all day long, who want the right to vote for candidates who want democracy in order to allow all people to use drugs all day long and so destroy our country? Read the books of the Soviet Academy of Sciences, read our experts! What do they say? Democracy is a weapon of imperialism!"
drinking game idea:
Take a shot each time Kane says "expert". The experts recommend not to play this game since you'd likely die by the end of the first section.
Ah, but you see, I don't have the same values as the experts.
We are all earthbound misfits, trapped in the who, what, why, where, when and how.
Yet it seems that no matter what we think, question and theorise about, there is more than enough data to substantiate some of the claims.
Even an absence of something is proof of an absent something else 😁
Yes, we can measure in an ever-increasing way, but can we really explain?... Are experts really fulfilling that role?
Awesome video as always, Kane!
Possibly your best video in my humble opinion
You can't trust most people.
I think the problems of the expert-layman relationship open a very large door for skepticism. Society is structured around a division of labour that separates most people from the production of knowledge. The production of knowledge itself is divided into various sciences, and these again are divided into many specialties. The way knowledge is distributed across the members of society can thus be described as: a whole lot of different expertises on various special matters, most of them practical, some theoretical. The way ignorance is distributed, in turn, is different: almost everybody is ignorant of almost everything, except for one's own expertise.
It is fairly well established that introducing specialisation into the manufacturing process raised output immensely, for which it must be regarded as almost as important as the introduction of machines. But if we transfer this assumption to the production of knowledge, we arrive at a certain kind of conundrum: the knowledge of humanity can only grow at a sufficient rate by humanity splitting into specialties, thus by the individuals comprising humanity increasing their ignorance in proportion to the expertise gained by the relevant experts. (After all, being an expert requires focus on one thing to the detriment of everything else.) While the sum of knowledge might increase as this process continues and deepens, the sum of the "unknown knowns" (i.e. of knowledge known by almost nobody) grows at the same rate. Put differently: since everybody is a layman about basically everything, the growth of knowledge and the growth of ignorance (of unknown knowns) are identical under specialisation.
Collectively speaking, what we know might already outnumber the things we don't know (though I can understand doubt about that assumption). Individually speaking, what we don't know certainly outnumbers what we know, to a towering degree. And this situation will only get worse as specialisation deepens.
If this thought of mine were a bit more... visceral in nature, I would simply doubt everything (on the grounds of being, by the inner workings of the production of knowledge under the division of labour, separated from knowing almost everything; and this is true for everybody, even for experts in certain areas of knowledge themselves).
I think the challenges posed by this kind of skepticism are by no means small. For instance, if I am not an expert in a science and I am doubtful what to believe, I might cling to the opinion of expert A or B based on assumptions about how science works and how I should evaluate the expertise of A or B given those assumptions. But do I know how science works? I am separated from science, so I don't have direct knowledge of its inner workings. I am again dependent upon experts to tell me how science works, or is supposed to work.
cus a few of them made cool thingies like phones and cars
Thought-provoking
"Why trust experts" probably isn't phrased correctly. Why trust priests? Why trust l ron hubbard? Why trust popular streamers that have made it (inherent 0.1% success rate).
From my understanding, there are at minimum philosophical, methodological, and structural differences between those groups mentioned above and "certain categories of experts". I don't know how to phrase the question correctly, but I think "skin in the game" highlights an interesting phenomenon: certain experts die if they fuck up; other experts are rewarded. Therefore they are not the same kind of thing.
really very good
13:11 *nervous laughter*
well…
Anybody who claims something should be willing to answer questions. If a scientist can't be asked a question, that's sus as hell.
If you want to know something, then just learn it, or make your own "belief" theory, something like game theory, idk.
What if a scientist has an investment in solar panels and wind turbines?
Wow that's really relevant after yesterday's debate between Farina and Tour.
All an expert can do is point to information for you to look at. An expert shouldn't be telling you what to think. They should be able to explain how and why their opinion is valid in a logical manner. If they cannot do this, they either lack the ability to effectively communicate, or they are intellectually dishonest. And nearly all university-educated experts are Pavlovian-conditioned to accept ideas they should be questioning.
We shouldn't trust experts.
Expertly said.
There's no reason to trust anyone or anything, even your senses.
So, your comment is meaningless.
@@dimitrispapadimitriou5622 Meaning has nothing to do with truth.
I'm old so you can trust what I say.
I also have 3 degrees in flat earth.
Yes! Naive realism.
Laypeople trusting experts is epistemically analogous to people believing in prophets.
At the moment, it's the PhD in economics who believes in climate change, while the PhD in atmospheric science with a lot of experience is against it.
Sorry, modern skepticism is not suspended judgment... it is rather a midway to critical thinking: it is considering the evidence and coming to a provisional assessment, with full conscious acknowledgment that I may be wrong and will revise my belief with every credible piece of new evidence...
Using a classical definition of "skepticism" is just committing a black-and-white-and-gray fallacy... there are other options.
He used the term in the standard way I see it used in academic philosophy. What you're describing just sounds like reasoning. Philosophers don't tend to use the term "skeptic" like that.
As Aaron said, I'm using the term in the way that's standard in contemporary epistemology. But if for some reason you don't like seeing the term used that way, that's cool. We can just use a different word. It doesn't particularly matter how we label things.
@@KaneB Thank you for replying,
It is not really the term I object to, but rather that "There are three strategies" excludes the view of "modern sceptics" while including a skeptical view that I think few if any hold.
You clearly expound on the issues in the rest of the video... but then fail to give a strategy to deal with the issue (which is not one of the three presented, btw).
Critical Thinking and Skepticism as presented are not viable strategies.
On your list, this leaves only deference, and practically people select the purported experts based on social validation, not on technical merit. Once selected they make these beliefs part of their core identity, and resist change at all costs - that is a problem.
So the better strategy to deal with the issue, is to acknowledge that we cannot suspend belief, that we must take action, but that we do so tentatively, without making it part of our core identity, that we remain open to evaluating alternatives, and new evidence whenever it is presented.
Maybe "modern skeptics" is not the correct term to use for this view, I would accept any term...but add it to the three listed.
@@aaronchipp-miller9608 Then "Just sounds like reasoning" should be added to the list "There are three strategies". It is not about the term used.
@@truthseeker2275 Few people are global skeptics, but I think the skeptical approach is pretty common with respect to specific propositions. Most of us do suspend judgement about some of the things we think about. For instance, I have no idea whether there is intelligent life in the Andromeda galaxy; I suspend judgement about this.
I'm not sure how your strategy is an alternative to the three presented. Neither critical thinking nor deference need involve becoming certain that P, or making P part of one's identity, or resisting changes at all costs. A person might provisionally accept P on the basis of critical thinking, or provisionally accept P on the basis of expert testimony. The issue is about the source of one's belief, not the strength of one's belief.
I wish you'd explained what scientific antirealism would add to this topic. I mean, if S is a scientific antirealist, it seems they have a reason to simply reject any expert testimony on any scientific explanation of any scientific issue. Is that an accurate presentation?
It's a good question. The realism/antirealism debate relating to empiricist challenges is tangential to this issue, I think. Take scientific realism versus e.g. van Fraassen's constructive empiricism:
(SR) To accept a scientific theory is to take it to be true in general.
(CE) To accept a scientific theory is to take it to be empirically adequate, i.e. true of the observable phenomena.
A scientific realist and a constructive empiricist might accept the same scientific theories, on the same grounds: that is, they might find the same evidence compelling, or be persuaded by the same experts. Where they differ is on the question of how much this theory tells us about the world.
I could be a realist who is extremely skeptical of expert scientific testimony. Consider a person who rejects the expert consensus in various fields, but who still believes that various theories uncover the facts about reality. Most young-earth creationists, for instance, are realists of this type. By contrast, I could be an antirealist who is extremely deferential towards expert scientific testimony. I might blindly accept whatever scientists tell me, but then favour a philosophical interpretation of their claims in line with constructive empiricism.
By contrast, the realism/antirealism debate as it relates to social constructivism is more closely intertwined with questions about expert deference. Social constructivists tend to say that what counts as evidence, what counts as "proper" scientific methodology, is a matter of social negotiation; there are no independent facts about how some data is to be interpreted or whether it really supports a given hypothesis. Now, it doesn't follow from this that a social constructivist has a reason to reject expert testimony. Indeed, if I take myself to be part of the scientific enterprise, and I take it that I have been inculcated with the same sorts of values as other scientists, then I might say even as a social constructivist that I have reason to accept expert testimony. But it's not obvious that social constructivists can say that alternative approaches to belief-formation are irrational. For people who have been brought up in different circumstances, or for people working in different contexts, perhaps young-earth creationism is perfectly rational. This is often raised as a challenge to social constructivism: that it has no means of showing that modern science is at all privileged over other methods and worldviews.