Maybe one can look at causation as an abstraction of a single reason with separate parts. Okay that sounds like a tautology. But the point is that the only reason why one might think there is a gap is because of separate parts. But if one were to consider there being other results, one doesn't say there is no causation, one says there are two causations instead. So one can't really say there is no reason for causation being true. Because in a way one is thinking of only one thing, rather than trying to close the gap between two. If one considers something completely random happening, that happenstance would be its own reason. So one can't say that there will always be a kind of causation, but one can generalize specific ideal conditions. That sounds like a truism, but it seems the problem has shifted. Because one can say "it is true there is causation, and it is itself sufficient justification". One still can't over-generalize a kind of causation or conditions being the same. The latter implication is the same as before. But the problem of the reason doesn't exist. So it seems to me... I suspect it is probably what it means anyway, if there isn't some fault in what I said. Or at least it could be the usual application of the problem. Another suspicion of mine is that the difference between what I said and the actual argument is that I'm not talking in pure logics, but am already taking a specific, observed world for granted (no matter how justified it may be, i.e. part of "what actually happened", even independent of an abstract "theory", as in what your brain can take care for you to be able to act and react etc.).
The philosophical concept you're very eloquently describing (or at least pointing at obliquely) is called "a priori," basically "the things we can know intuitively to be true without needing to look at the world." If "one can't think of no causation" is an indication that no-causation can't exist, that is an a priori truth. (The lines get blurry between a priori & a posteriori truths around the mid-20th century, but personally, I don't accept that possibilities I can't conceive of (yet) can't be true, or that things which seem intuitively true to me can't be incorrect.) There are (at least logically) problems with "it is true there is causation, and it is itself sufficient justification." It's circular, e.g. "faith is important, I know because of my faith." Very well-described, I think you might like reading some Kant if you can stomach his density - he very much exemplifies the concepts you're describing. I'd make sure you balance him out with some of the analytic philosophers like Russell or Ayer, or even Kant's original inspiration for his more advanced philosophy: Hume. (Also I have some other videos...) ;)
THUNK I have to apologize in advance (and retroactively) for overlong, confusing elaborations. There are probably different topics wrapped up in there, specifically regarding the non-observability statement of causation vs. the induction problem. One can probably make a good case against the former without the latter being any less true. (I don't make any claims against it.) My circular argument was probably a problem of expression. I think my argument is essentially very banal. It is saying that one can regard a cause and effect event as a unity, and that assuming one thing happening, need not be more problematic than assuming the whole cause and effect. In this sense, saying you can't see a cause and effect would be similar to saying "You can't see movement". If you see movement, you can probably think of an effect of this movement trivially as part of it. (And in the context you don't need to justify the "movement" in terms of a theory of relativity.) Or if you drop a thing, you don't need a theory of gravity, nor do you need additional observation to justify your observation of the causes of the event implied in the "dropping" i.e. the "letting go" (even if the thing would then behave strangely due to further causes). The "cause and effect" observation is then merely a linguistic expression, similar to how we would regard Newton's theory nowadays as merely an approximate quantification but not ultimate explanation. Basically I'm wondering whether one strictly has to follow Hume on the cause and effect non-observability statement, and I think it seems itself to be merely a linguistic problem (rather than a scientific one). (The induction argument itself being patently correct.) At least there seem to be different interpretations of it. That is, observation is bound to a specific observation, and even if you would correctly interpret it, what you see is still just this event, and you don't see the "rule" itself (for all future conditions or as an ultimate reality), although what you see can still be correctly explained and indeed "seen" this way (the same as its components). There must be no missing part or requirement for it (only in cases that give less direct access than in the case of "movement", or on a higher theoretical level). Another way of putting it is to regard various causes and effects as artificially segmented, rather than artificially put together (even if they appear in different objects to us, that have independent existence outside of the event). But these are largely just rhetorical pleas of mine, barring a better, more precise explanation.
I think Hume would argue that you can't see movement, either. You see a thing. You see a thing in another place. You see a thing in still another place. Your brain assembles these into a continuous sense of a single thing "moving" from place A to B, but that is a complex concept which is easily confounded. (You see "movement" here, is there any? i.mobofree.com/?u=http%3A%2F%2Ffarm2.static.flickr.com%2F1309%2F814941854_3b89ad9e74.jpg&w=600&h=1500 ) Whether you want to characterize that as too fundamental an assumption to any understanding of the universe whatsoever is your call - many would agree with you. There's also games that can be played with the imperfection of certain other labels (like 'cause'), but I think you can find rigorous enough definitions for most of them. The idea of a division between cause and effect being artificial only isn't familiar to me; I think no matter how one defines events (if at all) there's a pretty necessary temporal element.
THUNK Maybe "artificial" only in regards to postulating no visible connection. In most ordinary contexts at least, I am questioning what someone is still expecting to see that isn't there. This idea seems to come from an abstract concept of cause and effect (and is probably specifically for the purpose of such abstractions), rather than common experience. We are constantly acting and interacting, and seem to exist in a time bubble that's a little bigger than instantaneous but includes a bit of the past as part of a sequence. Every part of this can be broken up into various causes and effects, yet it is mostly self-evident to us. Therefore I am deliberately pointing out the banality of my point. I don't doubt that there often is some virtual sensory experience or assumption as in regard to the force, for example, but these tend to be part of our direct "toolkit" to comprehend such phenomena, even when they are not 100% correctly applied or subjective in a sense. So it would still be "natural" in a way to see and comprehend such phenomena, rather than only an abstract rule. Imagine yourself driving a car without some sense of the force you're controlling. (This paragraph is probably the most imprecise.) I haven't read Hume's theoretical philosophy but I wonder if visual illusions and mental processing are exactly the point he was conveying. If so, I would probably object much less. But then it would probably be more commonly taken up (I may be wrong, of course). As I understand the point, it is about the contrast to the "rule" or abstract conjecture, and finding it actually realized in the world (in any context). That is, apparently the idea of cause and effect is not actually mirrored in any way in "naive" human experience (which I doubt). I don't know whether I need to characterize the assumption as fundamental (as in final). From the way I am looking at it, it is strictly limited to practical, human purposes, and may therefore be naive in a higher scientific sense (and sometimes too imprecise even for practical purposes). I am merely saying that as far as causes and effects seem to concern us (whatever is the ultimate or universal scientific judgment on this), we are capable of having relatively basic and (ok) fundamental access to it. Think of what a psychologist (or neurologist) or even a biologist (with behavioral training) would say on this issue. It would probably be much simpler, and probably valid. Once more I have to apologize, but perhaps the issue is a little clearer now.
Be aware that people love their institutions, and even in academia people often make the mistake of believing the given authorities on any given subject could not possibly be wrong. That would be unthinkable.
You're using the presupposition that "Natural" & "Supernatural" can't in some instances both be the case, or even that they can't be the same (which could depend on the "Word Game" of "Definition").
Say I hide 1000 coins in a room. You know that within the realm of possibilities the coins could be characterized as being... [ALL BIASED, MOSTLY BIASED, EQUALLY SPLIT BETWEEN FAIR AND BIASED, MOSTLY FAIR, ALL FAIR coins]. Now say you find one of the coins. Then you flip the coin and get heads 200 times in a row. It would be reasonable to conclude that the set of coins which I hid was far more likely to have been EITHER ALL BIASED, MOSTLY BIASED, EQUALLY SPLIT BETWEEN FAIR AND BIASED, or MOSTLY FAIR (4/5 chance) than to say that they were ALL FAIR. Because if they were all fair, then the probability of completely deviating from the expected 50/50 chance on 200 coin flips would be extremely low. So because it's far more probable that the coins were NOT ALL FAIR (4/5 chance), it is also more probable that you found a biased coin, and that the next flip of the coin will yield another heads.
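A rough way to put numbers on that argument (an illustrative sketch only, not anything from the video or the comment above; the 0.95 heads rate for a "biased" coin, the 75% reading of "mostly", and the equal prior over the five scenarios are all made-up assumptions):

```python
# Hypothetical sketch: apply Bayes' rule to the five bag scenarios.
# Assumptions made up for illustration: a "biased" coin lands heads with
# probability 0.95, a fair one with 0.5, "mostly" means 75% of the bag,
# and all five scenarios start out equally likely.

P_HEADS_BIASED = 0.95
P_HEADS_FAIR = 0.5
N_HEADS = 200  # observed: 200 heads in a row from one randomly drawn coin

scenarios = {          # scenario -> fraction of biased coins in the hidden bag
    "all biased": 1.00,
    "mostly biased": 0.75,
    "half and half": 0.50,
    "mostly fair": 0.25,
    "all fair": 0.00,
}

def likelihood(frac_biased: float) -> float:
    """P(200 heads | scenario), marginalizing over which kind of coin was drawn."""
    return (frac_biased * P_HEADS_BIASED ** N_HEADS
            + (1 - frac_biased) * P_HEADS_FAIR ** N_HEADS)

prior = 1 / len(scenarios)
joint = {name: prior * likelihood(f) for name, f in scenarios.items()}
total = sum(joint.values())
posterior = {name: p / total for name, p in joint.items()}

for name, p in posterior.items():
    print(f"P({name} | 200 heads) = {p:.6f}")

def p_next_heads(frac_biased: float) -> float:
    """P(next flip is heads | 200 heads observed, scenario)."""
    p_biased_coin = frac_biased * P_HEADS_BIASED ** N_HEADS / likelihood(frac_biased)
    return p_biased_coin * P_HEADS_BIASED + (1 - p_biased_coin) * P_HEADS_FAIR

print("P(next flip is heads) =",
      sum(posterior[name] * p_next_heads(f) for name, f in scenarios.items()))
```

Under those (arbitrary) assumptions the posterior piles up on the biased-heavy scenarios and the next flip is expected to be heads with near certainty, which is the shape of the argument, whatever exact numbers you plug in.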
There is scarce agreement about what the word "science" means; there are many different definitions. Science is often seen as the converse of religion, but there is no agreed-upon definition for the word "religion" either.

It is often stated that science is knowledge obtained through experimentally verified fact, and that science, unlike religion, does not subscribe to fantasy. But this is incomplete: since there are an infinite number of possible experiments, they must be thinned out through use of imagination to arrive at theories, thus fantasy does play a role in science, as further evidenced by the role science fiction has played in advancing scientific curiosity. It's often stated that science is purely objective, unlike religion, but this is not supported by the historic evidence, since by a great many accounts the people who create scientific theories and the people who conduct experiments based on those theories are usually very emotionally involved in the process. Another claim is that scientific institutions ensure that science is free of untested dogma, but an examination of the history of science refutes this assumption: most of the scientific luminaries now celebrated by institutions were once outsiders shunned by the scientific institutions of the time for breaching ineffectual dogmas. These innovators include people like Newton and Einstein, both poor students outside the mainstream. The efforts of obstinately individualistic thinkers are essential to scientific advancement, while conformity and consensus are often required by scientific institutions for political reasons such as maintaining funding and expanding influence.

I would define religion as the faith that one has arrived at the absolute truth. Each religion supplies a different methodology for verifying its version of the absolute truth, but only science provides a methodology based on the idea that faith in theory should be verified by evidence gathered by observation of nature and experiment in the lab. In effect, science is not the opposite of religion but rather is the most advanced form of religion; this is why faith in science worldwide is very high, and in real terms it may now be the dominant religion. By this definition, knowledge which is purely theoretical and so cannot be observed in nature or verified through experiments in the lab is not advanced scientific knowledge at all but instead is ordinary religion.

Most religions use a methodology that says faith is justified by personal conviction and emotion, that authority is justified by consensus and conformity, and that institutional influence and wealth are evidence of validity, so the degree to which scientific institutions and individuals act in this way is the degree to which science has fallen into the less evolved pattern of ordinary religion. This unfortunately includes quite a lot of what is considered "science": when institutions such as corporations and government agencies can hire and fire scientists to advance their agendas, it is corrupting of science. When educational institutions control the future employment prospects of graduates to satisfy their own agendas, it's corrupting of science. When big profits enter any institution, science is often forfeit. I do not see religion as the opposite of science, since both religion and science serve to address faith.
Rather, I see politics as the opposite of science, since politics is all about group dynamics first and individual integrity last. With science the individual is free to theorize, observe and experiment regardless of group dynamics; with politics every action must be guarded with regard to how the group might react. To the extent religion is political it is not scientific; to the extent religion is free of politics it has the chance to move toward becoming science. Science is connected to political freedom: it is the effect political freedom has on the religious impulse. Without political freedom science withers; where there is political oppression there is religious oppression and little scientific impulse.
Popper's responses to the PoI & the philosophy of science will be covered in a future video, but I don't think he's the final word on it or its relationship to science, despite contributing a great deal to the conversation!
The Problem of Induction is not much of a problem so far as I can see. The problem is a desire for 100% certainty. 100% certainty is the domain of fools - I am comfortable with reasonable (and in the case of the Higgs, near absolute) certainty. So long as that certainty is reached through reason.
This is a common misunderstanding of Hume's argument. He was not just pointing out that we cannot be 100% certain of empirical or matter-of-fact claims (that should be obvious), he was arguing that we have no rational grounds at all for believing claims based on inductive reasoning.
You keep going on thinking "cool thoughts", we'll just keep going on actually discovering things and inventing things to move you around, feed you, clothe you, house you, treat your illnesses, and keep you alive...you keep going on doing, well, nothing.
Right, but don't call it knowledge... because we don't know a priori what that drug to cure illness will do to the body. You have faith, belief, but not knowledge. Maybe the car science creates will get me around... maybe it won't.
There are a few different senses of that question. If you're asking if there's an objective reality, I'd say yes. If you're asking if there's a way to access it reliably, I'd also say yes. If you're asking if there's any way for humans to be 100% objective in their evaluation of that reality, I'd say no.
THUNK But then, of course, you have to define "reliably." And the whole thing keeps spiraling downward, constantly having to define smaller aspects of the situation until you suddenly look up and find yourself in the midst of a philosophical debate. :)
It's not a problem. If you see the same thing happening 100% of the time, then you should 100% believe it's going to happen the next time because 100% of the evidence is toward that conclusion. If you don't see a god 100% of the times you tested for one, you should assume for 100% of purposes that there is no god.
If I have a bag with 100 balls in it, and you randomly draw out 10 balls and they are all blue, should you conclude that it is 100% certain that the next ball you draw out will be blue?
@@9Ballr Look at it the other way around. If i draw ten balls out of a bag and they're all blue, 100% of available evidence supports that conclusion and no evidence supports any other conclusion. If 100% of Available evidence isn't good enough to call knowledge (justified belief), nothing can be. If 100% of the time that evidence for the existence of god is presented, it fails, there's no reason to believe in god, even if there is a god.
@@havenbastion That's strictly a probability problem, and the answer is no. Even if you have drawn 99 of the 100 balls out of the bag and they are all blue, the mathematical probability that the 100th ball is blue is still less than 100%. You move from talk of 100% certainty to talk of what counts as knowledge. Most philosophers who talk about knowledge don't think that knowledge requires that your belief be 100% certain, so even if a belief is not 100% certain, that would not mean that it would not count as knowledge. There are also at least two senses of "certainty" to be clear about. One we might call epistemic certainty, which is about how confident you are or should be in your belief. The other we might call metaphysical certainty, which is about how likely it is that your belief is true. If the likelihood that your belief is true is less than 100% then your confidence in that belief should not be 100%. If all the balls I've drawn out of the bag are blue then the weight that evidence should carry is still relative to how big a sample we're talking about. So, if I have a bag with 100 balls in it and I pull out 1 ball at random and it is blue, then 100% of the evidence supports the conclusion that all the balls in the bag are blue, but it would be completely unreasonable to conclude that all of the balls in the bag are blue. If I randomly pull 99 balls out of the bag and they are all blue, then it is still not 100% certain that all of the balls in the bag are blue, but it is much, much more likely to be true than in the case where I drew 1 ball out. So what it is reasonable to believe is relative not only to the available evidence, but to the total sample.
@@9Ballr Epistemic certainty is all there is. When you mix two different levels of knowledge, that's where the problems begin. For all intents and purposes, the weight of available evidence must be sufficient.
@@havenbastion Sorry, you didn't address any of the specific examples I gave, and you're speaking in such vague generalities that I think it's pointless to continue.
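For what it's worth, the 99-out-of-100 case above can be put into numbers. The sketch below assumes, purely for illustration, that every possible number of blue balls (0 through 100) was equally likely before any draws; under that assumption, even 99 blue draws leaves the colour of the last ball a little short of certain (about 99%, not 100%):

```python
# Sketch of the 99-out-of-100 case: probability that the LAST ball is blue after
# drawing some number of blue balls (and no other colour) from a bag of 100,
# assuming -- purely for illustration -- a uniform prior over how many of the
# 100 balls are blue. Because the prior is uniform, it cancels and is omitted.
from math import comb

def p_next_blue(drawn_blue: int, bag_size: int = 100) -> float:
    total = 0.0     # normalizing constant: sum of likelihoods over compositions
    weighted = 0.0  # same sum, weighted by P(next draw is blue | composition)
    for n_blue in range(drawn_blue, bag_size + 1):  # compositions consistent with the draws
        # P(first `drawn_blue` draws are all blue | n_blue blue balls in the bag)
        like = comb(n_blue, drawn_blue) / comb(bag_size, drawn_blue)
        remaining_blue = n_blue - drawn_blue
        remaining = bag_size - drawn_blue
        total += like
        weighted += like * (remaining_blue / remaining)
    return weighted / total

for k in (1, 10, 99):
    print(f"after {k} blue draws: P(next ball is blue) = {p_next_blue(k):.4f}")
```

With that prior the answer after 99 blue draws comes out to roughly 0.99, which is the point being made in the exchange: very confident, never 100%.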
I think this is a brilliant explanation of a very important fact that changed my life a few years back: We're so very limited as human beings. We think we "know" so much, but really we are making assumptions daily. Yes, those assumptions work 99.9999999 (to an extreme number of places) percent of the time. It doesn't change the fact that "almost entirely completely forever always always always" includes the word "almost." And for me that's vital to remember, because it allows me to judge others a little less quickly and hopefully with a lot more compassion. We're all this way.
I think when we start allowing the pride of personal assumption to become 100% cold hard truth we begin to close ourselves off to possibilities. Throughout human history we've made assumptions- scientifically or otherwise- and time makes fools of us all. There's SO much we don't know that it's vital to keep our egos in check and always remember we are but humans, with such limited understanding on a tiny little ball of dust that doesn't even register on an image of the entire universe (not to mention potential other dimensions or timelines which are more likely to exist than NOT exist).
Thanks for a great video, it's refreshing to see people steeped in science recognize that it is not some almighty, perfect construct, and like the humans who invented it as a means of explaining the world around us, it has plenty of flaws. When we know this fact, we can live more freely, understanding that maybe- just maybe- none of us have any idea what we're talking about. So maybe we should fight with each other a little less, and care about each other a little more. :)
I'm all for fighting less and caring more! But to be fair, there are an awful lot of people who believe an awful lot of stuff that's not just not-perfect, but demonstrably antagonistic toward a practical understanding of the universe.
Humility is great, but even compared with the extraordinary vastness of the universe, some views are more frequently right than others. ;)
THUNK At least, "right" in our own minds. ;)
I think in the end, it all comes down to intention. Something can be antagonistic for the right reasons and for the wrong ones. Unfortunately until we get science to hop to it on that "Intentions Meter" I keep bugging them about, it's probably best to err on the side of caring about everyone because they very well may believe they are doing the best thing possible. After all, is that not what we ourselves do? (Unless we're talking, say, the Jokers, who want to "just watch the world burn." And I'd imagine even those folks could use a hug or two.)
My dad likes to tell the story of how he was a Math major some ways into college and finally threw up his hands and said "forget it!" when one of his classes started with "1 is assumed. Nothing in math can actually be proved. It is all assumption based on the fact that 1 exists." He became a computer science major instead. ^_^
As a philosophical empiricist I find myself arguing with almost every person on youtube (or with a philosophy degree nowadays) where rationalism seems to be a religion. Half the video makers seem to be rationalists who insist experience is inferior to deduction, and the other half seem to be just unaware of what rationalism and empiricism even are and think rationalism is science.
As for your video it's nice to find a vid I completely agree with once in a while. Nice presentation too. The partial application of Munchhausen's Trilemma to Induction is good too. The Trilemma getting a vid to itself would be nice.
One discussion point, though: couldn't we argue the adoption of induction by people is not an assumption (as you say), which would imply the decision to use it is a rational one, but instead is instinctive or a gut reaction? Especially as animals seem to have no problem adopting induction either.
I think the hard-line distinction between rationalism & empiricism gets complicated at a fine enough resolution, especially around the sciences - a lot of the advanced mathematics used in physics isn't strictly empirically based, it's more of a rationalist undertaking of deducing rules of logic & applying them to empirical data. It's probably easier to just say "trust repeatable data more than you trust your intuition about what the data ought to be" & leave it at that. ;)
I have no real problem with induction as a means of accessing truth in most contexts; we have some intuitive sense of when it's applicable & when it's not that's granted by experience. But many philosophers *hate* that it's such a practical success without any good reason for working as well as it does!
THUNK
It is true, as you say: science fudges over the borders of rationalism and empiricism. I often suggest to scientists they stop calling them The Empirical Sciences and start calling them The Quite Empirical Sciences. However, in the purely rational sciences the theories seem to change every few weeks, whereas in strongly empirically supported sciences theories last decades or even millennia. If you needed open heart surgery and a doctor said to you, "I have a choice of two operations: the first has never been performed, but the finest minds, by logic and mathematics, have deduced it would have the best possible outcome for you; alternatively, there is an operation that has been performed successfully thousands of times around the world" - which would most people choose? All science is by no means equal.
Deducing the rules of logic - I have had many discussions about that. Munchhausen's Trilemma again. If you make the deduction to use multivalued logic to analyse a problem, which system of logic did you use to make this deduction with?
EMPIRICISM IS DEAD!!! lol but seriously rationalism isn't all bad. It suffers the same faults as empiricism in the end
Right, we, like animals, rely on induction. David Hume called it 'animal faith.'
That reminds me of the time one of my friends mentioned CERN "discovering the Higgs Boson", then I corrected them by saying "they observed a peculiar energy signature that is consistent with the theory underlying the Higgs Boson to a statistically significant degree." At which point I was called a "nerd". But I guess in a David Hume sort of sense, that same thing could be leveled at chairs or the color orange just as well. "It looks like a chair, by all definitions of chair that I have."
When you get down to it, every statement about anything has to have a massive chain of disclaimers & specifications ahead of it to be totally accurate. (Assuming my senses report some objective reality accurately, assuming I'm not dreaming, assuming that this sense data is not being interpreted inaccurately due to personal bias, for a certain definition of "it," for a certain definition of "is...")
We just generally cut to the chase & skip that bit for the sake of practicality, although how much awareness people have of those assumptions varies a great deal.
Still, if you go pointing them out to people who know them, they will call you "pedantic," for some definition of "you" and some definition of "pedantic..." ;)
www.smbc-comics.com/?id=3054
THUNK You're telling me, have you ever tried to prove that 1 + 1 = 2? Just about everything boils down to definitions.
Don't Russell and Whitehead take some serious lead up time to prove that in the Principia?
Alzeranox I haven't personally, no, but I have left it as an exercise for the reader.
***** It takes until halfway thru the second volume of the trilogy!
I would argue that the uncertainty inherent in the problem of induction as you laid it out is actually one of the greatest strengths of science. It imparts the idea that there is always more to learn and it's what allows for progress to be made. As soon as one is certain of his own correctness, he stops looking for answers. With that in mind, is not this uncertainty inextricably linked with the very heart of scientific inquiry?
The problem of induction is not about the fact that inductive reasoning is probabilistic rather than certain, but instead about the fact (if Hume was right) that we have no reason to believe inductive reasoning at all.
Inductive reasoning is a dead end, which makes science a dead end. The best the scientist has is hunches, guesses, probabilities, animal faith, instinct, belief...not certainty.
yup. this tends to be a problem with epistemology in general. the nuts and bolts of it is that we don't actually know if any of our perceptions are true or not. We know that we exist in some fashion, but concretion beyond that point is impossible. I just choose to be an empiricist out of pure practicality, if not necessity. It's impractical to assume anything other than that you can at least somewhat trust your perceptions, because our existences as we know them are defined by what we perceive.
This channel is really neat, thanks for putting up so many interesting videos! A qualification: I agree, science is the best thing we've got to know things about the world, there's pretty much nothing like it. I'm not a rationalist nor do I think things such as intuition are very reliable outside of extremely narrow and idealised contexts. We agree completely on the value of methodological naturalism and the massive and obvious benefits it has and that it tracks towards the truth of things better than anything else people have tried.
One thing on the problem of induction that I think was only lightly touched upon in the video is that the problem isn't solely attacking the notion of absolute or 'foundational' certainty in the objectivity of scientific conclusions. Absolute certainty definitely isn't something you can arrive at through doing science.
The problem also points out that there is no fundamental, logical connection between events we observe occurring in any sense AT ALL. As you state in the video, correlation is the only thing we can empirically observe and the cause comes by inductive inference.
With this in mind, even our notion of empirical probability is ITSELF based on the connection we infer from the past to the future, that things which happened in the past must happen in the same kind of way in the future: It's just an inductive inference like any other (and as you pointed out a working assumption). Every way of attempting to justify the assumptions underpinning our notions of causality itself becomes subject to an application of the same line of questioning ad nauseam.
The sense I get from this problem isn't so much "nothing is certain" as "nothing is certain AND not even degrees of certainty can be justified empirically". It's like an acid that erodes through the idea of induction itself. It's not just that we can't be certain everything that's happened in our universe so far is a series of unconnected events, it's that we have no justified way to argue that the alternative is any more likely at all.
I see similarities to the Is-Ought problem and the problems with Foundationalism in epistemology. There's perhaps also something to be said for the way in which problems of this nature have within their premises a strict separation of one concept or object from another. Future from past, cause from effect, object A from object B, observation event in your mind from inference event in your mind (this last one is a particularly bold assumption about how the mind and observation function).
That brings to mind Zeno's paradoxes which seem compelling on the surface but pretty silly with our modern understanding of how the world works. It also reminds me of the Analytic-Synthetic distinction in how it makes very clear-cut, incisive divisions logically and empirically.
Maybe the tools turned towards resolving those other problems can be applied to the problem of induction somehow. The one damning thing about problems of this nature is that to accept them wholly would entail accepting a lot more unpalatable conclusions, perhaps to the point of extreme solipsistic skepticism. If I can't trust the future or past, then I can't trust the existence of inductive inference to unseen entities like electrons or oxygen atoms... then if I can't accept an inference to something unobservable then I also can't infer the currently-unobserved-but-observable-in-principle. I can't justify that the world doesn't just pop out of existence when I'm not attending to it, including the majority of my own body and my own past or future.
Maybe arguments against Cartesian solipsism work to a degree against the problem of induction as they both effectively lead to the same sort of conclusions for very similar reasons. Maybe they're the same problem dressed up in different clothes.
Just thinking out loud. This problem hurts my brain.
The only issue I have with probabilism being concluded (which is the idea that we never achieve *infallible* truth, just *probable* truth) is that probability is dependent on the very concept of knowledge - how much knowledge we lack to make judgements on a particular situation. The reason we say there's a 50/50 chance that a coin will land either heads or tails is because we don't have the specific knowledge required to assert an infallible claim. However, upon learning that the head side is heavier and that aerodynamics favors landing it on its head, we are inclined to say that there's a slightly higher chance - perhaps 52/48 - of it landing on heads.
Now, the issue with probabilism as an epistemological basis is that since probability is dependent on knowledge, you can't have knowledge depend on probability as well... probability is not "out there" in the world, it cannot be a justification factor for achieving knowledge. Unless you consider them both necessary conditions for one another and then probability and knowledge become the same exact thing ¯\_(ツ)_/¯
I wanted to make a comment about halfway through the video but I am really tired and forgot some of what I had wanted to say and can't make sense of it anymore, so I am just going to say thank you for these videos, I love them. They tend to be very thought-provoking and enlightening.
Thank you! Hope you get some sleep & remember what you wanted to say! :)
Great explanation of induction. Basically every argument boils down to what is more probable of being objective truth. 👍
There's a lot of fantastic discussion going on here and I'm really sorry you've had to deal with some of the not-quite-as-fantastic discussion. It's not a particularly difficult concept to grasp, but it seems like some people are having trouble with it. Maybe this'll help.
Induction is a bottom-up method of reasoning where you start with observations, find patterns, and eventually formulate a theory. Deduction is a top-down method of reasoning where you start with a theory and perform experiments to test the theory's validity.
Dropping a coin from a few feet off the ground 100 times results in the coin hitting the ground every single time.
From these observations, we isolate patterns: the coin falls when dropped; the coin continues falling until it hits the ground.
We form a theory: when we drop the coin, it hits the ground.
But here lies the problem. The only way to conclusively prove this theory is by dropping the coin an infinite number of times, but since we can't do that (for at least a couple different reasons), there is no way to deductively prove this theory. You'll always be able to drop the coin one more time, so you can't prove the theory to be true even if for all intents and purposes it is.
As Josh said in the video, science is based on unprovable assumptions and therefore is not strictly objective. It's the best we've got, but 100% efficiency doesn't exist in anything.
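One way to put a number on "you can keep confirming it but never prove it" (an illustrative sketch only, using Laplace's rule of succession with a uniform prior, which is an assumption rather than anything claimed in the video): after n drops that all hit the ground, the estimated probability that the next drop does too is (n+1)/(n+2), which creeps toward 1 but never reaches it.

```python
# Laplace's rule of succession (illustrative assumption: a uniform prior over
# the unknown "chance the coin hits the ground"): after n drops that all hit
# the ground, the probability that the NEXT drop does too is (n + 1) / (n + 2).
# It approaches 1 as n grows, but never gets there.
def p_next_hit(n: int) -> float:
    return (n + 1) / (n + 2)

for n in (100, 10_000, 1_000_000):
    print(f"{n} successful drops -> P(next drop hits the ground) = {p_next_hit(n):.8f}")
```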
I think the problem is that this argument sounds an awful lot like the argument which believers in absurd things disproved by science use to justify their beliefs: "You think science is so hot, but it's as arbitrary as anything else, therefore I don't have to believe in it because everything arbitrary is equally valid to believe."
Maybe they should watch the video before this one before deciding that that's what I must be saying. ;)
+THUNK That's exactly what I was thinking. It's really difficult to convey nuanced points of view over text, let alone with strangers on YouTube comment threads.
By the way, I still have all of the THUNK Show stickers and I still haven't been able to find something holy enough to bestow the sacred sticker on. Will update.
A lot of the time it's the hubris of the scientist that gets in the way of rational discussion.
I enjoy how you said "it's a better contender than x", which seems to use induction. Since I could respond with "....up until now."
Great talk. Thanks for posting.
Concerning epistemology:
Knowledge is the belief that certain assumptions are substantiated. All knowledge, including facts, is based entirely upon presuppositions and assumptions, however substantiated they may be.
Except science isn't about accessing truth. It's about eliminating what isn't true. It's impossible to prove some pattern observed between two events will continue based on information gained related to said pattern, because you need infinite data points to do so. You can however disprove patterns with just one sufficient counter example. So the scientific method is a way to find counter examples. For any upholder of science, their greatest day will be one where everything they had ever known was debunked. It would mean science is progressing!
And progressing science doesn't get you any closer to "truth." When science progresses and we learn something, it only reveals more questions that we didn't know or have any basis to ask. The 'truth' that supposedly is at the end of all of this is unreachable. You would have to answer all of the infinite questions first. So I think it's insane to aspire to finding truth. The best we can hope for is the process to find patterns which are false.
@@SupLuiKir Isn't trying to eliminate what isn't true just using inductive reasoning too? A dead end?
Once David Hume is driving through the countryside with his wife. Suddenly they see a herd of sheep. "Look!" says his wife. "They've sheared all these sheep." "No, no," says Hume. "It's only the side we can see that is sheared."
This is one of my favorite jokes! I heard a slightly different version.
An engineer, a physicist, a mathematician, & a philosopher are riding a train to a conference in Edinburgh, when they see a black sheep standing on a grassy hill.
Engineer: Huh! The sheep in Scotland are black!
Physicist: Well, SOME sheep in Scotland are black.
Mathematician: Well, AT LEAST ONE sheep in Scotland is black.
Philosopher: Well, ON ONE SIDE.
Very interesting video!
There is a myriad of assumptions any ideology has to make in order to get any semblance of sense out of the world. My biggest question has always been, "how many of these are just _human_ assumptions and don't apply to the universe at all?"
"Necessary"
1:18 ;) perfectionist?
4:35 which is really a good thing. I've heard Tom Campbell, a former NASA physicist, talk about how at some level everything is subjective. Thanks for the upload!
This also extends to those who claim they have an objective morality. You are a subjective human, and you've subjectively chosen a set of morals that you think are objective.
I quite like Nietzsche's snarky versions of this statement in "On the Prejudices of Philosophers."
>They all pose as though their real opinions had been discovered and attained through the self-evolving of a cold, pure, divinely indifferent dialectic (in contrast to all sorts of mystics, who, fairer and foolisher, talk of "inspiration"), whereas, in fact, a prejudiced proposition, idea, or "suggestion," which is generally their heart's desire abstracted and refined, is defended by them with arguments sought out after the event.
I'm a philosophy major, so I would say necessary, but I'm sure you know how much baggage that word carries for us.
I don't know, I kinda feel like this makes the assumption that science is about finding "truth"--as opposed to simply learning how to make useful predictions about the world around us, through observation and experimentation. Science is NOT about finding ultimate truth, it's about becoming progressively "less wrong" by ruling out certain possibilities, while never having absolute certainty. With science you always (and only) end up with the "best existing explanation given all the available information."
The billiard ball example allows us to formulate a useful, real-life prediction that the other ball will move according to a certain equation, when hit. If, one day, it moves in a way that is opposite of what is expected, then one could devise another experiment (or series of experiments) in an attempt to find the variables that reliably produce this new, opposite result. Assuming the new experiments are successful, this would be an example of becoming progressively "less wrong" where we now can understand what factors most likely produced the unexpected and rare result.
Lather, rinse, repeat. And throw away the notion of perfect certainty.
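As a concrete illustration of the kind of prediction meant above, here's a sketch using idealized textbook collision formulas and made-up masses (nothing here comes from the video itself):

```python
# Idealized 1-D elastic collision between two billiard balls, using
# conservation of momentum and kinetic energy to predict the struck
# ball's motion. The 0.17 kg mass is illustrative, not measured.
def elastic_collision(m1, v1, m2, v2):
    """Return the velocities of both balls after a head-on elastic collision."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Cue ball at 2 m/s strikes an identical, stationary object ball:
# the prediction is that they simply exchange velocities.
print(elastic_collision(0.17, 2.0, 0.17, 0.0))  # ≈ (0.0, 2.0)

# The inductive bet is that the *next* collision obeys the same equations;
# a single collision that didn't would send us looking for new variables.
```

The equations are the "best existing explanation given all the available information," and a surprising result is what triggers the next round of experiments.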
You're perfectly certain that perfect certainty doesn't exist? 😉
And here I thought you found a new way to charge my electric toothbrush.
Maybe one can look at causation as an abstraction of a single reason with separate parts. Okay that sounds like a tautology. But the point is that the only reason why one might think there is a gap is because of separate parts. But if one were to consider there being other results, one doesn't say there is no causation, one says there are two causations instead. So one can't really say there is no reason for causation being true. Because in a way one is thinking of only one thing, rather than trying to close the gap between two. If one considers something completely random happening, that happenstance would be its own reason. So one can't say that there will always be a kind of causation, but one can generalize specific ideal conditions. That sounds like a truism, but it seems the problem has shifted. Because one can say "it is true there is causation, and it is itself sufficient justification". One still can't over-generalize a kind of causation or conditions being the same. The latter implication is the same as before. But the problem of the reason doesn't exist. So it seems to me...
I suspect it is probably what it means anyway, if there isn't some fault in what I said. Or at least it could be the usual application of the problem. Another suspicion of mine is that the difference between what I said and the actual argument is that I'm not talking in pure logics, but am already taking a specific, observed world for granted (no matter how justified it may be, i.e. part of "what actually happened", even independent of an abstract "theory", as in what your brain can take care for you to be able to act and react etc.).
The philosophical concept you're very eloquently describing (or at least pointing at obliquely) is called "a priori," basically "the things we can know intuitively to be true without needing to look at the world." If "one can't think of no causation" is an indication that no-causation can't exist, that is an a priori truth. (The lines get blurry between a priori & a posteriori truths around the mid-20th century, but personally, I don't accept that possibilities I can't conceive of (yet) can't be true, or that things which seem intuitively true to me can't be incorrect.)
There are (at least logically) problems with "it is true there is causation, and it is itself sufficient justification." It's circular, e.g. "faith is important, I know because of my faith."
Very well-described, I think you might like reading some Kant if you can stomach his density - he very much exemplifies the concepts you're describing. I'd make sure you balance him out with some of the analytic philosophers like Russell or Ayer, or even Kant's original inspiration for his more advanced philosophy: Hume. (Also I have some other videos...) ;)
THUNK I have to apologize in advance (and retroactively) for overlong, confusing elaborations.
There are probably different topics wrapped up in there, specifically regarding the non-observability statement of causation vs. the induction problem. One can probably make a good case against the former without the latter being any less true. (I don't make any claims against it.)
My circular argument was probably a problem of expression. I think my argument is essentially very banal. It is saying that one can regard a cause-and-effect event as a unity, and that assuming one thing happening need not be more problematic than assuming the whole cause and effect.
In this sense, saying you can't see a cause and effect would be similar to saying "You can't see movement". If you see movement, you can probably think of an effect of this movement trivially as part of it. (And in the context you don't need to justify the "movement" in terms of a theory of relativity.)
Or if you drop a thing, you don't need a theory of gravity, nor do you need additional observation to justify your observation of the causes of the event implied in the "dropping" i.e. the "letting go" (even if the thing would then behave strangely due to further causes).
The "cause and effect" observation is then merely a linguistic expression, similar to how we would regard Newton's theory nowadays as merely an approximate quantification but not ultimate explanation.
Basically I'm wondering whether one strictly has to follow Hume on the cause and effect non-observability statement, and I think it seems itself to be merely a linguistic problem (rather than a scientific one). (The induction argument itself being patently correct.)
At least there seem to be different interpretations of it. That is, observation is bound to a specific observation, and even if you would correctly interpret it, what you see is still just this event, and you don't see the "rule" itself (for all future conditions or as an ultimate reality), although what you see can still be correctly explained and indeed "seen" this way (the same as its components). There must be no missing part or requirement for it (only in cases that give less direct access than in the case of "movement", or on a higher theoretical level).
Another way of putting it is to regard various causes and effects as artificially segmented, rather than artificially put together (even if they appear in different objects to us, that have independent existence outside of the event). But these are largely just rhetorical pleas of mine, barring a better, more precise explanation.
I think Hume would argue that you can't see movement, either. You see a thing. You see a thing in another place. You see a thing in still another place. Your brain assembles these into a continuous sense of a single thing "moving" from place A to B, but that is a complex concept which is easily confounded. (You see "movement" here, is there any? i.mobofree.com/?u=http%3A%2F%2Ffarm2.static.flickr.com%2F1309%2F814941854_3b89ad9e74.jpg&w=600&h=1500 ) Whether you want to characterize that as too fundamental an assumption to any understanding of the universe whatsoever is your call - many would agree with you.
There are also games that can be played with the imperfection of certain other labels (like 'cause'), but I think you can find rigorous enough definitions for most of them. The idea that the division between cause and effect is merely artificial isn't familiar to me; I think no matter how one defines events (if at all), there's a pretty necessary temporal element.
THUNK
Maybe "artificial" only in regards to postulating no visible connection. In most ordinary contexts at least, I am questioning what someone is still expecting to see that isn't there. This idea seems to come from an abstract concept of cause and effect (and is probably specifically for the purpose of such abstractions), rather than common experience. We are constantly acting and interacting, and seem to exist in a time bubble that's a little bigger than instantaneous but includes a bit of the past as part of a sequence. Every part of this can be broken up into various causes and effects, yet it is mostly self-evident to us. Therefore I am deliberately pointing out the banality of my point.
I don't doubt that there often is some virtual sensory experience or assumption as in regard to the force, for example, but these tend to be part of our direct "toolkit" to comprehend such phenomena, even when they are not 100% correctly applied or subjective in a sense. So it would still be "natural" in a way to see and comprehend such phenomena, rather than only an abstract rule. Imagine yourself driving a car without some sense of the force you're controlling. (This paragraph is probably the most imprecise.)
I haven't read Hume's theoretical philosophy but I wonder if visual illusions and mental processing are exactly the point he was conveying. If so, I would probably object much less. But then it would probably be more commonly taken up (I may be wrong, of course). As I understand the point, it is about the contrast to the "rule" or abstract conjecture, and finding it actually realized in the world (in any context). That is, apparently the idea of cause and effect is not actually mirrored in any way in "naive" human experience (which I doubt).
I don't know whether I need to characterize the assumption as fundamental (as in final). From the way I am looking at it, it is strictly limited to practical, human purposes, and may therefore be naive in a higher scientific sense (and sometimes too imprecise even for practical purposes). I am merely saying that as far as causes and effects seem to concern us (whatever is the ultimate or universal scientific judgment on this), we are capable of having relatively basic and (ok) fundamental access to it.
Think of what a psychologist (or neurologist) or even a biologist (with behavioral training) would say on this issue. It would probably be much simpler, and probably valid.
Once more I have to apologize, but perhaps the issue is a little clearer now.
Be aware that people love their institutions, and even in academia people often make the mistake of believing that the accepted authorities on any given subject could not possibly be wrong. That would be unthinkable.
You're using the presupposition that "Natural" & "Supernatural" can't in some instances both be the case, or even that they can't be the same (which could depend on the "Word Game" of "Definition").
Say I hide 1000 coins in a room. You know that within the realm of possibilities the coins could be characterized as being... [ALL BIASED, MOSTLY BIASED, EQUALLY SPLIT BETWEEN FAIR AND BIASED, MOSTLY FAIR, or ALL FAIR]. Now say you find one of the coins. Then you flip the coin and get heads 200 times in a row. It would be reasonable to conclude that the set of coins which I hid was far more likely to have been EITHER ALL BIASED, MOSTLY BIASED, EQUALLY SPLIT BETWEEN FAIR AND BIASED, or MOSTLY FAIR (4/5 chance) than to say that they were ALL FAIR, because if they were all fair, then the probability of completely deviating from the expected 50/50 split on 200 coin flips would be extremely low. So because it's far more probable that the coins were NOT ALL FAIR (4/5 chance), it is also more probable that you found a biased coin, and that the next flip of the coin will yield another heads.
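Here's a rough Bayesian sketch of that reasoning. The specific numbers (a "biased" coin landing heads 90% of the time, "mostly" meaning 90% of the bag, and a uniform prior over the five hypotheses) are my own illustrative assumptions, not part of the original setup:

```python
# Hedged sketch of the hidden-coins argument. Modelling choices are
# assumptions: "biased" = 90% heads, "fair" = 50% heads, "mostly" = 90%
# of the stash, and all five hypotheses start out equally likely.
hypotheses = {          # hypothesis -> fraction of biased coins hidden
    "all biased":    1.0,
    "mostly biased": 0.9,
    "equal split":   0.5,
    "mostly fair":   0.1,
    "all fair":      0.0,
}
P_HEADS_BIASED, P_HEADS_FAIR = 0.9, 0.5
N_HEADS = 200
prior = 1.0 / len(hypotheses)

# Likelihood of 200 straight heads from one randomly chosen coin,
# under each hypothesis about the hidden stash.
likelihood = {
    h: frac * P_HEADS_BIASED**N_HEADS + (1 - frac) * P_HEADS_FAIR**N_HEADS
    for h, frac in hypotheses.items()
}
evidence = sum(prior * lik for lik in likelihood.values())
posterior = {h: prior * lik / evidence for h, lik in likelihood.items()}

# Probability that the coin in hand is a biased one, given the data,
# and from that the probability the *next* flip is heads.
p_coin_biased = sum(
    posterior[h] * (hypotheses[h] * P_HEADS_BIASED**N_HEADS) / likelihood[h]
    for h in hypotheses
)
p_next_heads = p_coin_biased * P_HEADS_BIASED + (1 - p_coin_biased) * P_HEADS_FAIR

print({h: f"{p:.3g}" for h, p in posterior.items()})  # "all fair" is vanishingly unlikely
print(f"P(next flip is heads) ≈ {p_next_heads:.4f}")
```

With those assumptions, the ALL FAIR hypothesis ends up with essentially zero posterior weight and the probability of another heads comes out near 0.9, which matches the conclusion above in probabilistic form.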
There is scarce agreement about what the word "science" means; there are many different definitions. Science is often seen as the opposite of religion, but there is no agreed-upon definition for the word "religion" either.
It is often stated that science is knowledge obtained through experimentally verified fact, and that science, unlike religion, does not subscribe to fantasy. But this is incomplete: since there are an infinite number of possible experiments, they must be thinned out through the use of imagination to arrive at theories. Thus fantasy does play a role in science, as further evidenced by the role science fiction has played in advancing scientific curiosity.
It's often stated that science is purely objective, unlike religion, but this is not supported by the historical evidence, since by a great many accounts the people who create scientific theories and the people who conduct experiments based on those theories are usually very emotionally involved in the process.
Another claim is that scientific institutions ensure that science is free of untested dogma, but an examination of the history of science refutes this assumption. Most of the scientific luminaries now celebrated by institutions were once outsiders shunned by the scientific institutions of their time for breaching ineffectual dogmas; these innovators include people like Newton and Einstein, both poor students outside the mainstream. The efforts of obstinately individualistic thinkers are essential to scientific advancement, while conformity and consensus are often required by scientific institutions for political reasons such as maintaining funding and expanding influence.
I would define religion as the faith that one has arrived at the absolute truth. Each religion supplies a different methodology for verifying its version of the absolute truth, but only science provides a methodology based on the idea that faith in theory should be verified by evidence gathered through observation of nature and experiment in the lab. In effect, science is not the opposite of religion but rather the most advanced form of religion; this is why faith in science worldwide is very high, and in real terms it may now be the dominant religion.
By this definition, knowledge which is purely theoretical, and so cannot be observed in nature or verified through experiments in the lab, is not advanced scientific knowledge at all but instead is ordinary religion.
Most religions use a methodology that says faith is justified by personal conviction and emotion, that authority is justified by consensus and conformity, and that institutional influence and wealth are evidence of validity. The degree to which scientific institutions and individuals act in this way is the degree to which science has fallen into the less evolved pattern of ordinary religion, and this unfortunately includes quite a lot of what is considered "science".
When institutions such as corporations and government agencies can hire and fire scientists to advance their agendas, it is corrupting of science. When educational institutions control the future employment prospects of graduates to satisfy their own agendas, it's corrupting of science. When big profits enter any institution, science is often forfeit.
I do not see religion as the opposite of science since both religion and science serve to address faith.
Rather, I see politics as the opposite of science, since politics is all about group dynamics first and individual integrity last. With science, the individual is free to theorize, observe, and experiment regardless of group dynamics; with politics, every action must be guarded with regard to how the group might react. To the extent religion is political it is not scientific; to the extent religion is free of politics it has the chance to move toward becoming science.
Science is connected to political freedom; it is the effect political freedom has on the religious impulse. Without political freedom science withers; where there is political oppression there is religious oppression and little scientific impulse.
I THUNK your blizzard blew over, brah!
For anyone who has ever heard about Karl Popper the Problem of Induction comes as old news, doesn't it?
Popper's responses to the PoI & the philosophy of science will be covered in a future video, but I don't think he's the final word on it or its relationship to science, despite contributing a great deal to the conversation!
The Problem of Induction is not much of a problem so far as I can see. The problem is a desire for 100% certainty. 100% certainty is the domain of fools - I am comfortable with reasonable (and in the case of the Higgs, near absolute) certainty. So long as that certainty is reached through reason.
Comfortable with 'reasonable' when injecting something into your arm?
This is a common misunderstanding of Hume's argument. He was not just pointing out that we cannot be 100% certain of empirical or matter-of-fact claims (that should be obvious), he was arguing that we have no rational grounds at all for believing claims based on inductive reasoning.
What is the image at 0:30?
+shodanxx The source is here: images5.fanpop.com/image/photos/26200000/Stu-looking-out-the-window-stuart-sutcliffe-26263115-500-688.jpg
Check out YouTube user SisyphusRedeemed if you're interested in Phil of Science
You keep going on thinking "cool thoughts", we'll just keep going on actually discovering things and inventing things to move you around, feed you, clothe you, house you, treat your illnesses, and keep you alive...you keep going on doing, well, nothing.
Right, but don't call it knowledge, because we don't know a priori what that drug meant to cure an illness will do to the body. You have faith, belief, but not knowledge. Maybe the car science creates will get me around... maybe it won't.
Pragmatism
So is nothing 100% objective?
There are a few different senses of that question. If you're asking if there's an objective reality, I'd say yes. If you're asking if there's a way to access it reliably, I'd also say yes. If you're asking if there's any way for humans to be 100% objective in their evaluation of that reality, I'd say no.
THUNK But then, of course, you have to define "reliably." And the whole thing keeps spiraling downward, constantly having to define smaller aspects of the situation until you suddenly look up and find yourself in the midst of a philosophical debate. :)
No one knows if an objective world exists.
Please do not show this video to a creationist. I don't need to hear this "it's just a theory" shit again.
It's not a problem. If you see the same thing happening 100% of the time, then you should 100% believe it's going to happen the next time, because 100% of the evidence points toward that conclusion. If you don't see a god 100% of the times you tested for one, you should assume for 100% of purposes that there is no god.
If I have a bag with 100 balls in it, and you randomly draw out 10 balls and they are all blue, should you conclude that it is 100% certain that the next ball you draw out will be blue?
@@9Ballr Look at it the other way around. If I draw ten balls out of a bag and they're all blue, 100% of the available evidence supports the conclusion that the next one will be blue, and no evidence supports any other conclusion. If 100% of the available evidence isn't good enough to call it knowledge (justified belief), nothing can be.
If 100% of the time that evidence for the existence of god is presented, it fails, there's no reason to believe in god, even if there is a god.
@@havenbastion That's strictly a probability problem, and the answer is no. Even if you have drawn 99 of the 100 balls out of the bag and they are all blue, the mathematical probability that the 100th ball is blue is still less than 100%.
You move from talk of 100% certainty to talk of what counts as knowledge. Most philosophers who talk about knowledge don't think that knowledge requires that your belief be 100% certain, so even if a belief is not 100% certain, that would not mean that it doesn't count as knowledge.
There are also at least two senses of "certainty" to be clear about. One we might call epistemic certainty, which is about how confident you are or should be in your belief. The other we might call metaphysical certainty, which is about how likely it is that your belief is true. If the likelihood that your belief is true is less than 100% then your confidence in that belief should not be 100%.
If all the balls I've drawn out of the bag are blue then the weight that evidence should carry is still relative to how big a sample we're talking about. So, if I have a bag with 100 balls in it and I pull out 1 ball at random and it is blue, then 100% of the evidence supports the conclusion that all the balls in the bag are blue, but it would be completely unreasonable to conclude that all of the balls in the bag are blue.
If I randomly pull 99 balls out of the bag and they are all blue, then it is still not 100% certain that all of the balls in the bag are blue, but it is much, much more likely to be true than in the case where I drew 1 ball out. So what it is reasonable to believe is relative not only to the available evidence, but to the total sample.
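To put rough numbers on the bag example (assuming, purely for illustration, that every possible count of blue balls from 0 to 100 is equally likely before any draws; neither side of this exchange specified a prior):

```python
# Sketch of the balls-in-a-bag example under one assumed prior:
# every number of blue balls from 0 to 100 is equally likely a priori.
from math import comb

N = 100  # total balls in the bag, as in the example above

def p_next_blue(k):
    """P(next draw is blue | first k draws were all blue), uniform prior,
    drawing without replacement."""
    num = den = 0.0
    for b in range(k, N + 1):                   # b = blue balls in the bag
        # P(first k draws are all blue | exactly b blue balls)
        p_all_blue = comb(b, k) / comb(N, k)
        den += p_all_blue
        # P(the (k+1)-th draw is also blue | b blue balls, k already drawn)
        num += p_all_blue * (b - k) / (N - k)
    return num / den

print(p_next_blue(10))   # ≈ 0.92: likely, far from certain
print(p_next_blue(99))   # ≈ 0.99: very likely, still not 100%
```

Under that assumed prior the answer comes out to about 0.92 after ten straight blue draws and about 0.99 after ninety-nine, so reasonable confidence climbs with the size of the sample but never reaches 100%.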
@@9Ballr Epistemic certainty is all there is. When you mix two different levels of knowledge, that's where the problems begin. For all intents and purposes, the weight of the available evidence must be sufficient.
@@havenbastion Sorry, you didn't address any of the specific examples I gave, and you're speaking in such vague generalities that I think it's pointless to continue.