Induction Without Rules

  • Published 18 Dec 2024

COMMENTS • 25

  • @yqafree
    @yqafree 3 years ago +15

    You're definitely one of the smartest philosophy youtubers. No flattery

  • @q2santos
    @q2santos 11 months ago +1

    It would be interesting to have a similar video on induction by measurement omission as proposed by Ayn Rand in her epistemology.

  • @joske7804
    @joske7804 3 years ago +2

    Definitely one of the best philosophy youtubers I know personally.

  • @pascalbercker7487
    @pascalbercker7487 3 years ago +3

    P(HIV given 2 positive HIV tests) = 71% and P(HIV given 3 positive HIV tests) = 99.2%. I'm especially fond of using easy-to-use Bayesian Networking software like Netica (or also AgenaRisk - from Professor Fenton somewhere in the UK - Queen Mary University in London I think) to quickly calculate such things. I find it easier to understand when I can actually model the question at hand. Love your channel.
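    A minimal sketch of the repeated-test Bayes update behind such figures; the prior prevalence, sensitivity, and false-positive rate below are assumed values (0.1%, 99%, 2%), not stated in the comment, though they do happen to reproduce the quoted 71% and 99.2%:

    ```python
    # P(disease | n independent positive tests) via Bayes' theorem.
    # The parameters below are illustrative assumptions; they are not
    # given in the comment, but reproduce the quoted 71% / 99.2%.

    def posterior_after_positives(prior: float, sens: float, fpr: float, n: int) -> float:
        """Posterior P(disease | n positives), assuming conditionally independent tests."""
        like_pos = prior * sens ** n        # P(D) * P(n positives | D)
        like_neg = (1 - prior) * fpr ** n   # P(~D) * P(n positives | ~D)
        return like_pos / (like_pos + like_neg)

    for n in (1, 2, 3):
        print(n, round(posterior_after_positives(0.001, 0.99, 0.02, n), 4))
        # → 0.0472, 0.7104, 0.9918
    ```

    The same odds-ratio arithmetic is what a Bayesian network tool like Netica performs under the hood for a chain of test nodes.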

  • @anitkythera4125
    @anitkythera4125 3 years ago +4

    My dad is a philosophy professor and we just had a great conversation about this video! He was looking at the intersection between this and Piaget's theories on how our minds develop and evolve. His research area is on Piaget's conception of evolution. Cheers!

  • @patrickwilson1804
    @patrickwilson1804 3 years ago

    This is probably one of my favorite Kane B videos.

  • @kleezer1
    @kleezer1 3 years ago +1

    The fact that this channel hasn't gone viral disgusts me

  • @yourfutureself3392
    @yourfutureself3392 2 years ago +2

    Interesting theory and good video

  • @dragonsword343
    @dragonsword343 3 years ago

    Hey, thanks for an insightful discussion on induction. I do have one objection, regarding your illustration of "tracing the history of the first induction". Take beer, for example (and later on champagne, genetics, etc.): all of these discoveries and observations were made on accidental grounds. Even if, at some point, it could arguably *seem* challenging to understand how the first set of inductive possibilities reshaped human cognition, epistemic schemas and so on, I think that accidental discoveries are a good response to your illustration of our cognitive acts only having some set of brute facts.

  • @iamFilos
    @iamFilos 3 years ago

    Are you familiar with Popper's solution to the problem of Induction as developed in his "Realism and The Aim of Science"? If so, what are your thoughts on it?

    • @KaneB
      @KaneB 3 years ago

      Is his position on induction in that text significantly different from the position he held in his earlier work?

    • @EdwardMariyaniSquire
      @EdwardMariyaniSquire 3 years ago

      @@KaneB No. It's the same.

    • @humeanrgmnt7367
      @humeanrgmnt7367 3 years ago

      Popper doesn't solve anything. Matters of fact cannot be proven, only disproven. Justified belief is a dead end. The way science reasons is circular; it too is a dead end.

    • @DarrenMcStravick
      @DarrenMcStravick 3 years ago +1

      @@humeanrgmnt7367 Matters of fact can be proven to an individual via norms of direct acquaintance and direct reference, my guy.

    • @frenchmarty7446
      @frenchmarty7446 2 years ago

      @@DarrenMcStravick I assume he means natural regularities that persist beyond immediate experience.
      Though I guess you could raise doubt even about "matters of fact" of direct experience. How do I know I'm holding my cellphone right now and not just misinterpreting visual sensations? Well I make countless assumptions about how objects behave in the Universe. Presumably I can't truly "prove" them all. Or so goes the argument.

  • @grahamhenry9368
    @grahamhenry9368 2 years ago

    Why is Solomonoff Induction not the dominant perspective? I have not encountered any alternatives that are even close to as persuasive; the alternatives don't seem to consider what we have learned from modern information theory, algorithmic complexity theory, probability, or computer science.
    Furthermore, Solomonoff Induction has a lot of formally proven properties and guarantees. It's arguably the Platonic Ideal of rational beliefs.

    • @davidfoley8546
      @davidfoley8546 2 years ago +1

      Solomonoff induction is uncomputable, so supposing for a moment that the world can be represented as a TM, it still can't be the case that the actual inductive reasoning performed by humans is Solomonoff induction. It could be some approximation, but even so we are left with the question of what kind of approximation it is.

    • @grahamhenry9368
      @grahamhenry9368 2 years ago +1

      @@davidfoley8546 Yes, I agree. My point is that Solomonoff Induction is like the Platonic Ideal of rationalism. A lot of things are uncomputable: pi, circles, real numbers. But we don't really care, because we can estimate them to sufficiently accurate degrees for our purposes. The closer we get to approximating Solomonoff Induction, the more rational we are. Furthermore, the philosophical perspectives that it provides on all sorts of questions are, I think, underappreciated. How do you deal with Solipsism? How do you deal with Theism? Brain in a vat? Simulation theory? Panpsychism? Solomonoff Induction is an entire framework that tells you, in a formal way, precisely how these questions should be approached from an information-theory perspective.

    • @frenchmarty7446
      @frenchmarty7446 2 years ago +1

      @@grahamhenry9368
      1.) Solomonoff Induction isn't comparable to a concept like pi. Pi cannot be known in its full expansion of digits, but its bounds can be constrained and it can be procedurally understood to perfect accuracy.
      Solomonoff Induction is not computable in a different sense. It requires knowing in advance every possible model and their corresponding minimum message lengths. This isn't an issue of precision but of infinite parameters. It would be as if one particular digit of Pi (among infinite digits) was the true best hypothesis.
      2.) Approximating Solomonoff Induction (if that is even a meaningful statement) is not a necessary target for rationality. This is the age-old problem of Platonic ideals in the first place. From "X would be ideal (in an alternate ideal world)" it does *not* follow that "approximately X is ideal (in our context)". Maybe it is, or maybe it isn't. That is something you actually have to prove.
      Case in point: the Navier-Stokes equations for fluid flow. We know that all materials (including the particles that make up a fluid) necessarily obey General Relativity, but we don't waste time and energy approximating relativistic motions of particles when we model fluid flow; we have completely different models that are proven to work. We "approximate" Relativity *only* in the sense that the results approximately agree (i.e. are coherent with each other). In fact, the Navier-Stokes equations were known first.
      3.) Ecological rationality: humans don't follow fixed commandments of rationality, and when we're forced to we are horribly inefficient. We reason from a whole collection of context-dependent heuristics without needing to spend energy on the minimum message lengths of an infinite set of alternative hypotheses. "Improving" our rationality is only meaningful in relation to the equipment we actually have.
      4.) Common sense: where are all the discoveries derived via (approximate) Solomonoff Induction? Where are the Solomonoff AIs? Where are people even talking about Solomonoff Induction in actual practice?

    • @grahamhenry9368
      @grahamhenry9368 2 years ago +1

      @@frenchmarty7446 Thanks for the reply, very well stated. I agree with your core points for the most part. You mention that the brain and other intelligent systems are highly dependent upon heuristics, which I certainly agree with, but we must ask what process are the heuristics estimating? What function are they optimizing?
      Solomonoff Induction is just an extension of Bayes' Theorem, which is at the heart of every machine learning algorithm in some abstract or isomorphic way. You have a model of the world, and as you make more observations of the world, you need some way to update your model to incorporate this new information. Furthermore, you also need some way to differentiate between competing models, so that you can favor "better" models, and you also need some way of efficiently searching model space. Solomonoff Induction is just a particular mathematical instantiation of this basic framework that is mathematically optimal in *some* regards.
      The Navier-Stokes equations are a great example, as are Newton's laws in general. Why do we favor these models? Because they have excellent predictive power, excellent explanatory power, and we can literally use them to compress a set of data points to a near-minimal archive. This is the core idea from Solomonoff Induction, that a good model is the one which best compresses all your observations, showing a deep connection between data compression and intelligence. We also know that such a model will tend to be the most predictive of future observations as well. If you are observing the outputs of a formal system, the algorithm that will best compress those outputs is the one that reduces them to the set of axioms of that formal system.
      Solomonoff Induction favors all our best theories and models above any alternatives. Also, there are attempts to create AIs that use Solomonoff Induction more explicitly; AIXI by Marcus Hutter, for example. Also, many of the folks running Google's DeepMind are big proponents of Solomonoff Induction.
      I particularly like this paper by DeepMind's Shane Legg:
      www.vetta.org/documents/legg-1996-solomonoff-induction.pdf
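      The Bayes-plus-compression framework described above can be made concrete with a toy sketch: a Bayesian update over a tiny, hand-picked hypothesis set, each hypothesis weighted by a 2^-complexity simplicity prior. True Solomonoff Induction sums over *all* programs and is uncomputable; everything below (the hypothesis set, the "complexity" figures) is an illustrative assumption:

      ```python
      # Toy approximation of Solomonoff-style induction: Bayesian model
      # averaging over a small finite hypothesis set with a 2^-complexity
      # simplicity prior. Illustrative only; the real thing ranges over
      # all programs and is uncomputable.

      from fractions import Fraction

      # Each hypothesis: (name, complexity in "bits", P(next bit = 1 | history)).
      HYPOTHESES = [
          ("always-1",  2, lambda hist: Fraction(1)),     # predicts 1 forever
          ("always-0",  2, lambda hist: Fraction(0)),     # predicts 0 forever
          ("fair-coin", 1, lambda hist: Fraction(1, 2)),  # 50/50 every time
      ]

      def predict_next(history):
          """Posterior-weighted probability that the next bit is 1."""
          weights = []
          for name, bits, model in HYPOTHESES:
              w = Fraction(1, 2 ** bits)        # simplicity prior ~ 2^-complexity
              for i, b in enumerate(history):   # likelihood of the observed bits
                  p1 = model(history[:i])
                  w *= p1 if b == 1 else 1 - p1
              weights.append(w)
          total = sum(weights)
          return sum(w * m(history) for w, (_, _, m) in zip(weights, HYPOTHESES)) / total

      print(float(predict_next([1, 1, 1, 1])))  # after four 1s, the prediction nears 1
      ```

      After observing 1,1,1,1 the "always-0" hypothesis has zero likelihood, the fair coin is penalized by (1/2)^4, and "always-1" dominates, which is the compression intuition in miniature: the model that best fits (compresses) the data carries most of the predictive weight.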

    • @frenchmarty7446
      @frenchmarty7446 2 years ago +1

      @@grahamhenry9368
      I think I broadly agree with your point. Solomonoff Induction captures the core intuition of what good modeling looks like. In a sense it is at the very least a mathematical justification for those intuitions.
      In some sense, everything that yields useful knowledge about the world is approximating some ideal Solomonoff Induction. My point with the Navier-Stokes equations is that these approximations don't have to approximate the actual process. For the same computing power, heuristics very often beat estimating the "true" model (for example, a Monte Carlo simulation of relativistic particle behavior).
      Here's a better visual metaphor: imagine an optimization landscape (a plane where height represents performance and every other dimension represents some characteristic). For the sake of argument, we know for certain that true Solomonoff Induction is the global maximum. However, the landscape doesn't have to be monotonic. There can be several local maxima that easily beat points close to Solomonoff Induction. Moving towards the global maximum only guarantees improvement if you eventually reach it; there's no guarantee of incremental improvement.
      It's *possible* that new practically useful local maxima will be discovered by exploring around the global maximum (Solomonoff Induction). It's not unwarranted to think so (you've got to look somewhere). I'm sympathetic to the search. But I think that one has to actually deliver the goods before adjusting one's views.

  • @sisyphus645
    @sisyphus645 3 years ago

    3:10 they’re not mutually exclusive

  • @fanboy8026
    @fanboy8026 3 years ago +2

    In future do a video about ontological arguments

  • @EnlightenedDrummer
    @EnlightenedDrummer 3 years ago

    Can’t wait to watch