How to unify logic & arithmetic

  • Published 14 May 2024
  • Support us on Patreon: www.patreon.com/user?u=86649007
    #logic #excluded_middle
    I love it when different parts of mathematics are brought together into a single perspective. In this video, we take a fresh look at simple logical operators such as 'and', 'or', and implication. We extend these operators to numbers, and we interpret them in terms of ordering. Did you know that logical implication is really just "less than or equal to"?
    Here are some links to Eric Hehner's ideas if you want to dig deeper:
    www.cs.toronto.edu/~hehner/
    Hehner's website at the University of Toronto.
    www.cs.toronto.edu/~hehner/BAU...
    Article for the Mathematical Intelligencer. Good introduction to Hehner's unified system of logic and algebra.
    These are some resources that Eric Hehner himself suggested for you if you want to learn more:
    www.cs.utoronto.ca/~hehner/UA.pdf
    www.cs.utoronto.ca/~hehner/BAU...
    0:00 Introduction
    0:40 Binary logic
    4:55 Truth values are ordered
    6:41 Extending logic to arithmetic
    9:14 Differences between logic and arithmetic
    10:22 Functions and variables
    11:34 Sums and products
    12:29 Logical quantifiers
    14:00 Shorter notation
    15:19 Bunches and sets
    This video is published under a CC Attribution license
    ( creativecommons.org/licenses/... )

COMMENTS • 397

  • @AllAnglesMath
    @AllAnglesMath  2 місяці тому +76

    Many people are asking about the difference between bunches and sets. They seem to be doing exactly the same thing, so why the distinction?
    As mentioned in the video, the main advantage of bunches is that you don't have to unpack, operate, and re-pack. You can operate directly on the bunch. This is more cumbersome for sets.
    A second argument that Eric Hehner himself makes is that it's just conceptually cleaner to pull two different concepts apart. Bunches are about aggregation (bringing elements together). Sets are about packaging (treating the set of elements as an object with its own identity). Since these are two different things, it makes sense to model them separately.
    In programming languages, you also often see a distinction between a list of values and an "unpacked sequence" of values. This can often lead to much more elegant code.
    If you're still not convinced, have a look at section 11.0.2 of this PDF ( www.cs.toronto.edu/~hehner/aPToP/aPToP.pdf ) which fittingly begins with the question "Why bother with bunches?".
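
    To make that pack/operate/re-pack point concrete, here is a rough Python analogy (my own sketch, not Hehner's notation; a plain tuple stands in for a bunch and a frozenset for a set):

    # A "bunch" is modelled as a bare tuple; operators distribute over it directly.
    def distribute(op, bunch, n):
        # Apply a binary operation between every element of the bunch and n.
        return tuple(op(x, n) for x in bunch)

    nat_to_4 = (0, 1, 2, 3, 4)
    print(distribute(lambda x, y: x + y, nat_to_4, 7))   # (7, 8, 9, 10, 11), like nat + 7

    # A set is the packaged object: to do the same you unpack, operate, and re-pack.
    packed = frozenset(nat_to_4)
    print(frozenset(x + 7 for x in packed))              # frozenset({7, 8, 9, 10, 11})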

    • @APaleDot
      @APaleDot 2 місяці тому +10

      So because sets are about packaging, they allow you to express nested sets, whereas bunches cannot.
      Bunches can express 1, 2, 3, 4 but to express 1, 2, {3, 4} you need sets, right?
      Is a bunch of bunches just another flat bunch? (1, 2), (3, 4) = 1, 2, 3, 4?

    • @joshhilbert4059
      @joshhilbert4059 2 місяці тому

      The real benefit of bunches over sets is that we are already comfortable reasoning about pluralities without needing to package them into singular objects. See chapter 3 of www.logicmatters.net/resources/pdfs/SmithCat-I.pdf (Category Theory I: A gentle prologue by Peter Smith) for a great discussion on this, but to elaborate:
      In the usual language of set theory, the only basic formulas with truth values look like “A∈B”,
      for any sets A and B. In fact “A=B” is just a shorthand for the formula “∀C(C∈B⇔C∈A)∧∀D(A∈D⇔B∈D)”, which just says that two sets are equal if they can be replaced by each other in any formula involving ∈ without changing the truth value. We need such a relation to even begin to describe how sets behave, because it is the only thing that defines how sets relate. The problem with this is that we also want to define new sets that relate to certain sets in some interesting ways, which can lead to inconsistency if we are not careful. Indeed Russell’s paradox shows there can’t be a set “R={A|¬(A∈A)}” of all sets that are not an element of themselves, for we cannot consistently say whether “R∈R” is true or false. However there is a bunch (literally) of sets that do not contain themselves, such as {0,1} or {}. In fact every set in modern set theory does not contain itself due to the axiom of foundation. Another problem with sets is that functions “f:A→B” in set theory are defined as specific kinds of subsets “f={x∈A×B|φ(x)}” of the Cartesian product of the domain and codomain. Here “φ” is a formula with free variable “x”, which is really after all a function from the bunch of all sets in A×B to the bunch of truth values ⊥,⊤. This isn’t technically circular, but it suggests that functions are better thought of not in terms of sets, but as the more fundamental notion of a deterministic association between each member of a domain bunch and some member of a codomain bunch.

    • @EulyDerg
      @EulyDerg 2 місяці тому +5

      ​@@APaleDot looks like you just made a bunch out of three things: the number 1, the number 2, and the set {3, 4}.
      a bunch could have different kinds of things too, i would imagine.
      also, since all* operators distribute over the comma operator, i would imagine the comma operator does too.
      if bunches could nest like sets, that would make them a sort of container again, and not really meaningfully different from sets anymore.
      *all operators except the box packing and unpacking, since that would mean {{1,2,3}} = {{1},{2},{3}}, which isn't consistent with how sets work, and kinda defeats the point of these operators together being a kind of more pedantic set.

    • @EulyDerg
      @EulyDerg 2 місяці тому +3

      thought about it for a minute.
      when we remove parentheses in other contexts, like with addition, we were assuming associativity. (1 + 2) + 3 will always be the same result as 1 + (2 + 3). therefore, there's absolutely no ambiguity in saying 1 + 2 + 3.
      for other operations, like 2 ^ 3 ^ 4, there is a difference between (2 ^ 3) ^ 4 and 2 ^ (3 ^ 4), which is why some programming and typesetting languages will ask you to clarify this.
      i think it would be completely reasonable to say that a bunch bundled with a bunch should just give a bigger bunch.
      otherwise, what exactly are we talking about by the bunch (1, 2, 3, 4)? theres gotta be some order in which we perform the comma operators, and it's unavoidable that youre gonna be joining bunches with either other bunches: ((1, 2), (3, 4)), or bunches with a non-bunch thing: (1, (2, (3, 4))).
      with the latter case, distributivity can transform it into the former case as well: (1, (2, 3)) = ((1, 2), (1, 3)).
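
      That guess matches the channel's answer further down: a bunch of bunches flattens. A tiny Python sketch of the difference (my own illustration, handling one level of nesting only):

      def comma(*parts):
          # Bunch union: nested tuples ("bunches") are merged in flat.
          out = []
          for p in parts:
              if isinstance(p, tuple):
                  out.extend(p)
              else:
                  out.append(p)
          return tuple(out)

      print(comma((1, 2), (3, 4)))    # (1, 2, 3, 4) -- the bunch stays flat
      print({frozenset({1, 2}), 3})   # a set keeps its nesting: {frozenset({1, 2}), 3}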

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +10

      ​@@APaleDotA bunch of bunches is indeed another bunch. Bunches always get "flattened" into a simple bunch.

  • @ichigo_nyanko
    @ichigo_nyanko 2 місяці тому +52

    the commutative comma got me, made me smile when I realised what was happening.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +8

      I had the same reaction when I saw this the first time. I think it's because the comma has become invisible to us, and now it suddenly gets a role to play.

    • @whig01
      @whig01 2 місяці тому

      @@AllAnglesMath I just had this idea that putting a dot above a multiplication symbol could represent non-commutative multiplication, as when the quantities are quaternions for instance. Since this maps to the semicolon for non-commutative lists.

    • @edomeindertsma6669
      @edomeindertsma6669 Місяць тому +1

      It's not visually symmetric though :P.

    • @santerisatama5409
      @santerisatama5409 Місяць тому

      @@AllAnglesMath Speaking of invisible, the unmarked whitespace is very interesting and has very important role to play. I was flabbergasted when I realized that concatenation is the mediant of whitespace, which under most general definition is a continuum of neither singular nor plural. Temporarily marking out the mediant relation, it looks like this: _I_ (and/or T). When I realized this, I immediately went: Ha! Dirac delta!!!
      With the unmarked distinction we can organize the Turing machine Tape into blanks, concatenation being concentrated continuity. The unmarked distinction between whitespace and concatenation is even more primitive than the Spencer-Brown's urdistinction marked-unmarked. And when marked characters are introduced, naturally the unmarked distinction becomes visible in the distinction between "com pound" and "compound" words.
      The fixation on LEM has kept the importance of generalizing mediants into primitive generative operations hidden for a long time. Nesting algorithms are at least as important as additive algorithms.

  • @Robert-er5wq
    @Robert-er5wq 2 місяці тому +64

    I have explained the implication-arrow to my students as a promise. My promise: if I win the lottery, I'll buy a house.
    My promise isn't broken unless and until I win the lottery and don't buy a house.
    Seemed to make some sense at the time anyway
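
    The promise reading agrees with the video's ordering claim: for booleans, p ⇒ q has exactly the same truth table as p ≤ q once false is placed below true. A quick Python check (Python already orders False < True):

    from itertools import product

    for p, q in product([False, True], repeat=2):
        implies = (not p) or q   # classical material implication
        ordered = p <= q         # "p is at most q", with False below True
        print(p, q, implies, ordered)
        assert implies == ordered   # the promise is only broken when p and not q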

    • @prof.tahseen6104
      @prof.tahseen6104 Місяць тому +4

      I don't know when students learn logic in school in other parts of the world, but in Türkiye we learn it in 9th grade, and my teacher taught it this way too.
      Also, I think the Turkish names for those operators are far easier and more practical. For example, the implication-arrow is "ise". This is a word that is used in the exact same position in both a logical "sentence" and a normal sentence in Turkish.
      So when I say "Eğer ise ", it can be directly translated to a logic statement -because it is exactly as you say it in a normal sentence- and vice versa.
      All the other operators are likewise called by their linguistic counterparts.
      In English, you have "or", but when you want a linguistic equivalent for the "exclusive" version of or, you can't find one. So you call it XOR.
      In Turkish, you also have OR -> "veya", and in addition to that there is the word "ya da" -which is already commonly used in day-to-day life- which exactly means XOR.
      "Bunu veya onu seçebilirsin."
      "You may choose this one or that one."
      When you translate the sentence to English, it almost feels like you lose data. The word OR doesn't feel exactly like when you would use it in logic. But in Turkish it does and I like it.

    • @Robert-er5wq
      @Robert-er5wq Місяць тому +1

      @@prof.tahseen6104 I believe most languages have linguistic constructs that resemble logical operations. As far as I understand, 'eğer'-'ise' is just if-then, which should still cause a certain surprise: we take 'if A then B' to be true in particular when A doesn't hold.

    • @santerisatama5409
      @santerisatama5409 Місяць тому +2

      ​@@Robert-er5wq Turkish, Finnish and Latin have distinct words for OR and XOR, but that doesn't seem very common in natural languages.

    • @Robert-er5wq
      @Robert-er5wq День тому +1

      @@santerisatama5409 German, for example, has the entweder-oder construct, which explicitly states an XOR. Most people understand OR (just 'oder') colloquially in the XOR sense. It surprises mathematically untrained people to hear a simple 'yes' as the answer to 'Would you like vanilla ice cream or chocolate ice cream?'

  • @dmace14
    @dmace14 2 місяці тому +48

    I prefer to think of the De Morgan law as
    a AND b = NOT(NOT(a) OR NOT(b))
    Which just reverses the statement on both sides but it feels more intuitive to me for examples like the rain and snow one
    If it’s raining and snowing, then it can’t be not raining or not snowing

    • @angeldude101
      @angeldude101 2 місяці тому +4

      The version given in the video works better when NOT isn't its own inverse, which for some extensions it isn't. The version you have requires a NOT⁻¹ in these cases.

    • @lythd
      @lythd 2 місяці тому

      that's how I've always thought of it, mainly because I learnt logic through Minecraft, and in Minecraft you make AND gates like that

    • @3eH09obp2
      @3eH09obp2 2 місяці тому +2

      De Morgan's law seems reminiscent of the matrix laws like (AB)^T = B^T A^T and (AB)^-1 = B^-1 A^-1

    • @comma_thingy
      @comma_thingy 2 місяці тому

      @@3eH09obp2 That's because it (or at least something similar to it) emerges in general group theory, which binary logic and matrices are both members of (in the sense that they are groups)

  • @hashimoto128
    @hashimoto128 2 місяці тому +17

    Many of the concepts shown in the video appear in a similar way in APL and other array programming languages descended from it.
    (some keywords for searching: APL, jsoftware (J), BQN, Uiua, Dyalog, ArrayCast)

  • @JobBouwman
    @JobBouwman 2 місяці тому +135

    If we couple FALSE to -infinity and TRUE to +infinity, we lose the connection with the interval [0, 1] from probability.

    • @lrgui9792
      @lrgui9792 2 місяці тому +22

      yes, but that interval is not necessary, it's a choice. I wonder if everything would work perfectly if we define probability from -infinity to +infinity as well... i think that it doesn't work to have negative probabilities, only [0,+infinity]

    • @erichehner1587
      @erichehner1587 2 місяці тому

      @@lrgui9792 You interpret infinity as certainly true, zero as equally likely to be true or false, and minus infinity as certainly false.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +63

      That's a good point. No system is perfect of course. I think Hehner's focus is on computer science rather than probability.

    • @MasciLogre
      @MasciLogre 2 місяці тому +33

      It's fairly easy to map the reals to the interval (0,1). Extend that so -inf goes to 0 and +inf to 1 and you're golden.
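
      One concrete choice of such a map (an assumption of mine, not something from the video) is the logistic squash, which also matches Hehner's remark above that 0 sits at "equally likely": ⊥ = -inf goes to 0, 0 goes to 1/2, ⊤ = +inf goes to 1, and a 75% chance lands at logit(0.75) = ln 3 ≈ 1.10. A Python sketch:

      import math

      def to_probability(x):
          # Logistic squash of the extended reals onto [0, 1].
          if x == math.inf:  return 1.0
          if x == -math.inf: return 0.0
          return 1.0 / (1.0 + math.exp(-x))

      def from_probability(p):
          # Inverse map (logit): 0 -> -inf, 1 -> +inf, 0.5 -> 0.
          if p == 0.0: return -math.inf
          if p == 1.0: return math.inf
          return math.log(p / (1.0 - p))

      print(from_probability(0.75))   # 1.0986... = ln(3)
      print(to_probability(0.0))      # 0.5, "equally likely true or false"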

    • @EulyDerg
      @EulyDerg 2 місяці тому +13

      ​i'm wondering which real number in particular a 75% chance would be in the interval [-infinity, +infinity], or even [0, +infinity].

  • @ducksies
    @ducksies 2 місяці тому +5

    You should look into Category Theory. It provides similar insights but has been studied much more intensively by Mathematicians around the world!

  • @cube2fox
    @cube2fox 2 місяці тому +6

    It seems Hehner interprets logic in terms of arithmetic, though in a different way than Boole. However, there is also the opinion that we instead should interpret arithmetic in terms of logic. Because logic is (it seems) more general and primary than arithmetic. Gottlob Frege defended this opinion. It's now called logicism.

  • @nektariosorfanoudakis2270
    @nektariosorfanoudakis2270 2 місяці тому +25

    I saw very few new ideas here, sorry; this is what happens if you study a little bit of type theory. Also, the comma operator should be associative as well, and only definable for finite bunches. There exist infinite permutations that are not the product of permutations of two elements.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      All of these are very good points.

    • @MadocComadrin
      @MadocComadrin Місяць тому +1

      The forall being reduced to a big-and bugs me for similar reasons. It's a Pi type while conjunction can be a Sigma type.

  • @twanvanderschoot9667
    @twanvanderschoot9667 2 місяці тому +21

    Delightful and very well presented, thanks! In my humble opinion, Hehner's approach to math notation fits in the same tradition as Dijkstra, Feijen, van Gasteren (Eindhoven) and Meertens (Amsterdam).

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +7

      I will have a look at the authors you mentioned, thanks for the tip. If necessary, I can even read them in Dutch 😉

  • @jansustar4565
    @jansustar4565 2 місяці тому +12

    Having comma be an operator also allows for adding items to the bunch. So nat,0 is the same as natural numbers union {0}

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +6

      That's a really good example. Although I think that Hehner already includes 0 in nat.

  • @mec1789
    @mec1789 2 місяці тому +3

    That new function notation is so intuitive and delightful and readable

    • @blue_sand6854
      @blue_sand6854 2 місяці тому +1

      I think the "top-bottom" notation is absolutely useless. Moreover, the up and down arrows overlap with the anticonjunction and antidisjunction (NAND and NOR) symbols.

  • @aaronspeedy7780
    @aaronspeedy7780 2 місяці тому +9

    18:23 Haha! That is amazing! I thought of treating the comma as a generic distributive operator one time, and along with that forall function thing you described, I think this would be really useful. It solves a lot of issues with sharing code in statements I think, for example &&(a, b) < 5 instead of a < 5 && b < 5. Great video!
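
    In everyday Python the same trick already reads well with the built-in all(), or, using the min/max reading from the video, as a single comparison on the bunch's maximum (a sketch, not anyone's proposed syntax):

    a, b = 3, 7

    # "a < 5 and b < 5" written once, distributing the comparison over a bunch:
    print(all(x < 5 for x in (a, b)))   # False

    # Equivalent ordering view: the whole bunch is below 5 iff its maximum is.
    print(max(a, b) < 5)                # False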

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +3

      Glad you liked it. And good to see that you're already thinking about applications in programming 😄

  • @helix-phase
    @helix-phase 2 місяці тому +18

    These are fascinating insights, thank you for making this video.

  • @bloom945
    @bloom945 2 місяці тому +21

    Very interesting. The connections to computer science are really strong. The 'forall' expression is just the all(...) function and the 'exists' is just the any(...) function. At the end when you discuss the new function notation, it is reminiscent of lambda expressions or anonymous functions. I might start using this function notation more often. Really cool stuff!
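
    For reference, a minimal Python rendering of that correspondence (my own sketch): 'for all' and 'there exists' over a finite stand-in for nat become all() and any(), and the angle-bracket function becomes an anonymous function:

    nat_slice = range(100)                      # a finite stand-in for nat

    # forall x: x + 1 > 0   ->  a big "and" over the outputs
    print(all(x + 1 > 0 for x in nat_slice))    # True

    # exists x: x * x == 49 ->  a big "or" over the outputs
    print(any(x * x == 49 for x in nat_slice))  # True

    # the angle-bracket function <x : nat -> x + 1> as an anonymous function
    succ = lambda x: x + 1
    print(succ(41))                             # 42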

    • @eig5203
      @eig5203 2 місяці тому +3

      I think loads of CS comes from logic, so maybe that's why

    • @noobyfromhell
      @noobyfromhell 2 місяці тому +5

      In this case you'll be glad to learn that this is just a (rudimentary version of) Martin-Löf type theory, which is isomorphic to the typed lambda calculus (the so-called Curry-Howard isomorphism): en.wikipedia.org/wiki/Intuitionistic_type_theory
      Programming languages like Standard ML or Haskell have been based on it since at least the 70s, this guy just didn't research it before publishing.

    • @SpencerTwiddy
      @SpencerTwiddy 2 місяці тому +3

      @@noobyfromhell Those last few words are false. You have conflated “dumb” and “ignorant”. You have also been rude.

    • @noobyfromhell
      @noobyfromhell 2 місяці тому +3

      @@SpencerTwiddy You're right, I've been very cranky yesterday, I fixed the comment.

    • @cube2fox
      @cube2fox 2 місяці тому +5

      Actually Wittgenstein also interpreted "every" as a long conjunction of all instances and "there is" as a long disjunction. The problem he later recognized is that those are not logically equivalent: A long conjunction of all objects does not logically entail that there are no counterexamples.
      For example, assume you have exactly three friends, Alice, Bob, and Charles. Then saying "My friend Alice and my friend Bob and my friend Charles are nice" is not logically equivalent to saying "All my friends are nice".
      Because the former could be true and the latter false. It could be that you have a fourth non-nice friend, even though you actually don't. Logically equivalent statements have to be necessarily equivalent, not just contingently.

  • @mikecaetano
    @mikecaetano 2 місяці тому +2

    De Morgan's Laws: The negation of a disjunction is the conjunction of the negations. The negation of a conjunction is the disjunction of the negations.

  • @user-kd3px6dh2x
    @user-kd3px6dh2x 2 місяці тому +18

    The colon operator reminds me of Sean Barrett's declaration syntax that's being used in the Jai and Odin programming languages (e.g. my_var : type = value). It's a nice system.
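
    For comparison, the same "name : type = value" shape in Python's optional type hints (just an illustration, not the Jai/Odin semantics):

    answer: int = 42           # name : type = value

    def double(n: int) -> int:
        return 2 * n

    print(double(answer))      # 84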

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +14

      Many programming languages are converging on the colon for static typing (or type hints).

    • @zokalyx
      @zokalyx 2 місяці тому +9

      also rust, zig, python (optional), typescript, and I guess many more.
      (here I'm ignoring declaration keywords like let or const)

    • @alexanderlucas2659
      @alexanderlucas2659 2 місяці тому +14

      the colon for 'having type' (as in foo having type int = 5) is standard notation in type theory and much of mathematics.

    • @noobyfromhell
      @noobyfromhell 2 місяці тому +6

      ​@@alexanderlucas2659 Don't tell him, let's see how much of the 20th century math he can reinvent by reverse-engineering stuff he saw in programming languages :)

    • @user-kd3px6dh2x
      @user-kd3px6dh2x 2 місяці тому +6

      @@noobyfromhell Well, I did learn programming before I learned algebra, so yeah, that's the process.

  • @mathephilia
    @mathephilia Місяць тому +2

    Very neat video, thanks ! I think it deserves a part 2 showing how these algebraic structures relate from the point of view of category theory, since much of this could be used as a motivating example/illustration for category theory as well.

  • @GamingKing-jo9py
    @GamingKing-jo9py 2 місяці тому +3

    this notation has many similarities to ML-style type definitions, like using : for x being of type nat, as well as to APL/BQN, with tacit notation like what was used for De Morgan's law. bunches act like arrays and sets like boxed arrays, i.e. an array whose single element is another array. all arithmetic operations are pervasive, i.e. distributive over arrays.

  • @jx8148
    @jx8148 Місяць тому +2

    I studied this at uni; I always found it amazing how "ordering points" (poset theory) could lead to modelling boolean logic (that being a distributive and complemented lattice)

    • @AllAnglesMath
      @AllAnglesMath  Місяць тому

      We may do a few videos on order theory later.

  • @arctic_line
    @arctic_line 2 місяці тому +2

    This feels a lot like array programming languages (even to the point of having point-free being a concept in it since the beginning). Honestly makes me wonder how much of a role Hehner has played in the development of them (even if they still use syntax far more like Ada than Hehner's notation, which is a shame since it would probably make for a quite elegant introduction to people who are generally scared by just how terse array langs tend to be).

  • @IterativeTheoryRocks
    @IterativeTheoryRocks 2 місяці тому +1

    My goto superlative is ‘Marvellous!’ This, however, is Absolutely Marvellous!

  • @FireyDeath4
    @FireyDeath4 2 місяці тому +6

    Would you recommend showing this to young children? Like, I endorse teaching children about formal logic because it can help them understand the structure of the sentences and propositions people state, and possibly find fallacies like non-sequiturs more sharply. And critical thought is very important nowadays. These are all some of those things that are hard to grasp at first, because they're unfamiliar and numerous, but are really simple to understand once you remember it all. I guess I'm just wondering if it would be useful to introduce this unified system alongside the usual arithmetic and formal logic.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +2

      I think it's an excellent idea to teach logic to kids. I don't know which explanations would work better, but I can imagine that Hehner's is going to be a lot easier than some of the existing curricula. A multitude of different symbols can all be reduced to a much smaller set, which is always a good thing.

    • @kikivoorburg
      @kikivoorburg 2 місяці тому +3

      I think something similar applies to set theory. Set theory and logic together are the basis for all of modern mathematics, and the concept of “grouping stuff together” (sets) doesn’t feel any harder to understand for kids than arithmetic (which we do teach). How much easier would it be for people to understand set theory proofs in high school and university if we taught the basics of sets to young kids?

    • @FireyDeath4
      @FireyDeath4 2 місяці тому +1

      @@kikivoorburg Indeed. A lot.
      I just have a lot of other subjects that I want to be required material for young children, because they'd be useful in their lives and make them more highly-functional members of society. Aside from arithmetic and formal logic, they include things like language, critical thought, philosophy, psychology, social conduct, research, self-learning practices and practical abilities. These are all things that would be beneficial to pretty much every individual, not just the ones who intend to have occupations and hobbies more heavily involving mathematics. Learning about set theory that early on, when their calibrating minds are more malleable, would definitely help them to understand sets more easily, but it's just hard to tell how learning set theory would improve people beyond just making them good mathematicians.
      Obviously this is something I've been thinking about for a long time. Of course, all of those mandatory subjects would be the first stage of the curriculum, with the rest of them being presented for students to choose freely in the second stage. And students will be able to pass at their own pace, so they could get to the second stage at any time, and maybe they could still learn set theory early on. I looked up what the uses of set theory are, and they're rather specific, so I'm not sure if they'd apply to everyone like the other subjects would. If the theory is compact enough to be taught alongside all of the other subjects and seamlessly integrated with arithmetic and formal logic in a way that makes them more intuitive, then, well, it could be good to include. But if its benefits are specific and non-universal, it would probably mean that other subjects that are easy to learn and useful in specific situations warrant inclusion as well, and who knows how many of those there are.
      So, basically I'm interested to know if there's anything specific that would make it useful to the most universal human demographic.
      Yeah I just realised that talking about this in a YouTube comment section isn't really gonna cut it. I'm sorry I made you read all that :P

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      @@FireyDeath4 Don't feel sorry, you make many good points. In particular, I like that you place psychology in your list. I have often thought that this would be incredibly helpful, at least if it leads to resilience and understanding of what makes people tick, not to victimhood glorification.
      When I grew up, we actually learned (basic) set theory in high school. It was a new curriculum, but it has since been abandoned again because most students found it too abstract.

  • @APaleDot
    @APaleDot 2 місяці тому +6

    I don't think I like using the bare operator to express iteration over a set (or bunch). If they are different concepts, they should use different symbols. If I have an expression like 'a + b', I want that to unambiguously indicate the addition of 'a' and 'b', not get confused with summing up all the elements in 'b'.
    That doesn't mean we should use different symbols for every possible operation like we do now, but maybe there could be one symbol to indicate iteration and then you specify the operation. Like summation could be Σ(+) and multiplication could be Σ(×) and then De Morgan's laws would be expressed like -Σ(∧) = Σ(∨-) which feels a lot more readable to me.
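
    That Σ(op) idea is essentially a fold with the operator passed in as an argument; a hypothetical Python sketch (the helper name big is mine, not the commenter's or Hehner's):

    from functools import reduce
    import operator

    def big(op, xs):
        # Fold the binary operator op over the whole bunch.
        return reduce(op, xs)

    xs = [1, 2, 3, 4]
    print(big(operator.add, xs))  # 10, i.e. Sigma(+)
    print(big(operator.mul, xs))  # 24, i.e. Sigma(x)
    print(big(max, xs))           # 4, the "big or" in the ordering reading
    print(big(min, xs))           # 1, the "big and"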

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      This is a matter of taste. Personally, I like that the same symbol gets "overloaded" to play different roles. But you're absolutely right that this can also be overused, which makes things confusing.

    • @Kram1032
      @Kram1032 2 місяці тому

      I think the Σ (Sigma) and Π (Pi) make sense as soon as you know they stand for "Sum" and "Product", and similarly, ∫ is supposed to be a stretched S as it's a *continuous* sum.
      ∀, ∃ , and ∄ are also pretty clear once you hear of them (for all, exists, doesn't exist)
      And most other N-ary operators are pretty straightforward:
      ⋀ is just a big ∧ - "all" is a bigger "and"
      ⋁ is just a big ∨ - "any" is a bigger "or"
      ⋂ is just a big ∩ - intersecting many sets is a bigger form of intersecting two
      ⋃ is just a big ∪ - joining (taking the union of) many sets is a bigger form of joining two
      and that logic typically applies across all our N-ary operators. You could do the same thing with ⊻ (xor) which basically is a parity check, or ⊗ and ⊕ which are used to build tensors etc.
      Most of these various symbols are very intentionally similar in pretty uniform ways, suggesting relations between them. For instance, "∨" and "∪" for "or" and "union" are so similar particularly because they are "the same" (in the same way as "max" and "+" are the same thing as well!) but appear in different contexts. They talk about different underlying things and, despite being "the same" in a pretty clear abstract sense (they all have equivalent behaviors for the objects they are about), they may co-occur, making it confusing to reduce everything to a single notation whenever that happens.
      Sometimes, this is actually necessary. For instance, *technically,* when talking about sum between vectors in one vector space, and sum between vectors in another, that is *not* the same sum. Nor is it the same sum that you use for integers or real numbers.
      They are *effectively* "the same" in that they share a bunch of key properties, but sometimes the difference matters to the thing you are trying to talk about!
      The solution in such a situation is to bestow indices on your operators, talking, for instance, about statements like:
      f: s→t
      aₛ +ₛ bₛ ≃ xₜ +ₜ yₜ
      saying the sum of objects in the input is isomorphic (but not necessarily identical) to the sum of objects in the output

    • @kikivoorburg
      @kikivoorburg 2 місяці тому

      The distinction could be made clear by particular use of parentheses as long as that’s defined clearly
      a + b = a add b,
      a (+b) = a (sum b)

  • @parkerstroh6586
    @parkerstroh6586 Місяць тому +1

    Liked, subbed, course of entire life changed.

  • @adiaphoros6842
    @adiaphoros6842 2 місяці тому +1

    This reminds me of how you can do modulo arithmetic with n-ary logical operations, if you impose an ordering on the logical values.
    An example using ternary (or 3 valued) logic:
    Ternary logic has false, middle and true as logical values. Using the following assignments:
    false := 0,
    middle := 1,
    true := 2,
    there are operations that could be interpreted as (a+1) mod 3 ¹, (a+2) mod 3 ², (a+b) mod 3 ³ and so on, where a and b are the inputs.
    ¹ The logical operator with the following truth table:
      0 1 2
    0 1 2 0
    1 2 0 1
    2 0 1 2
    ² The logical operator with the following truth table:
      0 1 2
    0 2 1 0
    1 0 2 1
    2 1 0 2
    ³ The logical operator with the following truth table:
      0 1 2
    0 0 1 2
    1 1 2 0
    2 2 0 1
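
    For what it's worth, footnote ³'s table is plain addition mod 3; a short Python script reproduces it (my own sketch, not anything from the video):

    values = (0, 1, 2)   # false, middle, true

    # Print the (a + b) mod 3 truth table, matching footnote 3 above.
    print("  " + " ".join(str(b) for b in values))
    for a in values:
        row = " ".join(str((a + b) % 3) for b in values)
        print(f"{a} {row}")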

  • @Neptoid
    @Neptoid 2 місяці тому +3

    I love extended numbers

  • @melchiortod29
    @melchiortod29 2 місяці тому +3

    This is beautiful. I'm gonna try implementing it in whatever i do

  • @yan-amar
    @yan-amar 18 днів тому +2

    In electronics there are European symbols for logic gates on schematics. OR is "≥1": like you said, as soon as at least one input is more than 0, the output will be 1. XOR is "=1": the output is 1 when exactly one input is 1.

  • @the_allucinator
    @the_allucinator 2 місяці тому +4

    This feels like Haskell

    • @leo9463065
      @leo9463065 2 місяці тому +3

      And also APL, and I also agree about Haskell

  • @theograice8080
    @theograice8080 2 місяці тому +1

    The handling of Top and Bottom values for truthiness and falseness reminds me of a paper I read once, along the lines of: transreal numbers as a basis for paraconsistent logics.

  • @usurpvision
    @usurpvision 2 місяці тому +2

    This will revolutionize mathematics if it gets widely adopted.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      I don't know about that, but it certainly helps make logic much more understandable.

    • @Kram1032
      @Kram1032 2 місяці тому +3

      The underlying theory is already well known for *ages.* I *think* the generalized concept here is called a Heyting Algebra. Nothing revolutionary at all. As far as I can tell, all this does is unify some notation, in part borrowing from computer science and programming languages (especially when it comes to iterators such as the ".." notation) to make those already well-known connections more explicit in how they are denoted.
      Ultimately, which notation suits you best is going to depend on what you want to do. For instance, a completely different kind of notation useful in multilinear algebra (tensor arithmetic stuff) is Einstein sum notation. I think if you were to use the sum notation presented here, for the applications where Einstein notation is useful, it would actually be a downgrade.

    • @noobyfromhell
      @noobyfromhell 2 місяці тому +1

      @@Kram1032 Actually it's Boolean algebra in this case because of LEM, you can use {top, bot} as the subobject classifier and get completions this way as usual.

    • @Kram1032
      @Kram1032 2 місяці тому

      @@noobyfromhell I was thinking more general. Boolean Algebra is a kind of Heyting Algebra, right?

    • @noobyfromhell
      @noobyfromhell 2 місяці тому

      @@Kram1032 Sure, I just meant it's also a Boolean algebra in this case.

  • @vkessel
    @vkessel 2 місяці тому +1

    Ha, I had independently discovered this a few months ago and was going to write about it. Very interesting. Good video!

  • @pauldruhg2992
    @pauldruhg2992 2 місяці тому +1

    Unbelievable! Much obliged!
    Thats what I needed!

  • @angeldude101
    @angeldude101 2 місяці тому +3

    Hehner doesn't sound like a mathematician. He sounds like a programmer. XD
    On your initial video on subtraction, you mentioned that the Clifford Algebra symbols for meet and join should be asymmetric because they're non-commutative. I, however, argue that it's a case like the Excluded Middle where it is fundamentally the same operation as the minimum and maximum operations given here, but commutativity is lost when extending the operations. (Though only sometimes; whether Clifford Algebra meet and join are commutative or anticommutative depends on the exact values being used.) This is also why I get annoyed that most Clifford Algebra libraries use ^ and & for meet and join when they should really be & and | to match their corresponding boolean operators.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      I agree that the link between algebra and logic should be reflected in the symbols.

  • @pajrc1234
    @pajrc1234 2 місяці тому +1

    Reading the UA PDF, I was like, how do you define codomain? Since the concepts of injectivity and surjectivity are really important. But you actually use the same notation f: X -> Y, and the reason for this blew my mind when it clicked:
    A function g "includes" a function f if f is defined on g's domain, and (while acting on g's domain) fx belongs to gx -- gx can be a bunch, so fx could just be one element (or multiple elements in that bunch).
    Constant functions are simplified: the constant function with domain D and constant value a is written as D->a. The constant 0 function on R is R->0.
    But the output could also be a bunch. Maybe the function is a constant function with the output always being R=reals. Then (we will call it) g = R->R. Suppose f is defined on R and for all x in R fx is in R also. Then, by our definition of inclusion, f: g, or f: R->R. That is, f maps elements of R to R. It's absurdly elegant.

  • @ibite100
    @ibite100 2 місяці тому +3

    YouTube's algorithms suck. They waited this long to bring your channel to me. Great video!

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      Thank you, and welcome to the channel. You can start binge-watching now 😄

  • @bernardoherrera5217
    @bernardoherrera5217 2 місяці тому +2

    I was working out kind of the same thing by myself, and I came to the following question: if the set builder notation implies a Domain, and the Domain is another set, then a Set is always defined as a subset of a Set

    • @decare696
      @decare696 2 місяці тому +1

      You can look into the axioms of ZF to see what sets are defined a priori. In particular: The natural numbers and the power set of any set.

  • @kennyimammahardika3868
    @kennyimammahardika3868 2 місяці тому +28

    This is so cool. I've never heard of this before. Does it have a name? Where can I learn more about it?

    • @homomorphichomosexual
      @homomorphichomosexual 2 місяці тому

      also, it has a name - hehner (unhelpfully) named it universal algebra, but it seems hard to find links to outside of the papers. maybe i just didnt look hard enough?

    • @koenvandamme9409
      @koenvandamme9409 2 місяці тому +8

      There's a link to Hehner's homepage in the video description.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +12

      I think Hehner himself calls it "Unified algebra". Definitely check out his home page, it has tons of papers that are worth exploring. The link is in the description.

    • @noobyfromhell
      @noobyfromhell 2 місяці тому +5

      en.wikipedia.org/wiki/Lattice_(order)

  • @davidrichards1302
    @davidrichards1302 2 місяці тому +3

    x < 4 is not 'stricter' (having fewer solutions) than x < 6, because x = ∞. And since one can't define counting numbers without co-defining infinity, or at least assuming the symbolic meta-existence of infinity, '∞', it is valid to solve the inequalities in terms of ∞, and understand that the solution spaces of 'x < 4' and 'x < 6' can be put into one-to-one correspondence. This in turn brings into focus the priors of 'correspondence' and 'solution', etc. Perhaps there are more underlying aspects to Hehner's system which are not fully (re-)presented in this video. Absolutely fascinating stuff! Massive kudos to "All Angles" for this video!

  • @icew0lf98
    @icew0lf98 2 місяці тому +2

    the sigma notation for operations other than sum and product is expressed by writing the operation with the domain of the variable below it and the expression involving this variable next to it, for example U_{x in O}(Ball_epsilon(x)). it's just a weird quirk, I guess, that they use the capital sigma and pi letters instead of a big plus and a big cross or dot

    • @icew0lf98
      @icew0lf98 2 місяці тому

      also for example U(A) where A is a set is the union of all elements of A, but of course in all of these examples, it is required that the given operation and the elements that it operates on forms an abelian group

  • @kikivoorburg
    @kikivoorburg 2 місяці тому +19

    It seems like this notation, even if it doesn’t replace the standard, would make an _awesome_ functional programming language like Haskell. I especially like the unification of functions / sums / products with the angle bracket notation
    Edit:
    I do have one question: why can’t we just consider the curly braces in sets as “mere notation”, making bunches and sets equivalent? After all, in things like nat^2 = (0, 1, 2, …)^2 we already used parentheses, so why not just say “the curly braces are the parentheses we use for sets”? Are there operations that behave differently on bunches than sets, making a distinction needed?

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +6

      Good question. Since many people have the same concerns about bunches vs sets, I posted some thoughts in the pinned comment at the top.

    • @quentinasparria7207
      @quentinasparria7207 2 місяці тому

      Isn’t this how apl was created

    • @M_1024
      @M_1024 2 місяці тому +5

      (a, b), c = (a, b, c) but {{a, b}, c} isn't {a, b, c}

    • @kikivoorburg
      @kikivoorburg 2 місяці тому +2

      @@AllAnglesMath Thanks!

    • @kikivoorburg
      @kikivoorburg 2 місяці тому +1

      @@M_1024 ah interesting, I hadn’t considered nesting / combining bunches.

  • @sebastianfia9541
    @sebastianfia9541 2 місяці тому +3

    Amazing video, keep it up!

  • @icew0lf98
    @icew0lf98 2 місяці тому +1

    to make the implication make more sense, consider a mathematical theorem (if you connect 6 non-collinear points, each connection drawn with either a blue or a red pen, then you have created a blue or a red triangle) and the possibilities of the if_then_ that it states:
    -if you satisfied the conditions (p is true in p=>q) then it is possible for q to be true and impossible for q to be false
    -if you didn't satisfy the conditions (p is false in p=>q) then it is both possible for q to be true and for q to be false

  • @rasuru_dev
    @rasuru_dev 2 місяці тому +5

    i love this so much

  • @Neptoid
    @Neptoid 2 місяці тому +3

    Thank you! I’ve heard of things like top and bottom as alternatives in the undefined cases. What something divided by zero could be for example. I didn’t know they connected to those combinatoric graphs

    • @jamescash4065
      @jamescash4065 2 місяці тому +2

      The graphs you refer to are called “lattices”. They are the foundation for much of mathematical Logic

  • @rajinfootonchuriquen
    @rajinfootonchuriquen 2 місяці тому +2

    if TOP is +inf and BOTTOM is -inf, how can replacing implication by the ordering relation make sense? If we understand an ordering relation as a map from pairs to {true, false}, then your new algebra is not closed under the 'implication' operation. For example: (TOP -> BOTTOM) = BOTTOM; if we replace by 'inf', it should be (+inf

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      It's an interesting exercise to try and model logical operators on numbers, the way you do with the symmetric operators '' and 'xor'. Reminds me of fuzzy logic.

    • @rajinfootonchuriquen
      @rajinfootonchuriquen 2 місяці тому

      @@AllAnglesMathit is fuzzy logic.

  • @ArslanRozyjumayev
    @ArslanRozyjumayev 25 днів тому +1

    Once you find the right path, you will see it in Everything!

  • @Neptoid
    @Neptoid 2 місяці тому +6

    I independently thought about those symbols being opposites/duals of each other. Good that they baked that in to the symbols

  • @notfancy2000
    @notfancy2000 2 місяці тому +1

    Reynolds, Dijkstra, UTexas, ISWIM, APL, relational parametricity, boolean calculus. Things to keep alive.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      I'm taking note. Thanks for the many references. So many interesting things to study 😄

  • @lilpunkxd722
    @lilpunkxd722 2 місяці тому +2

    Wow best video man😮

  • @alurma
    @alurma 9 днів тому +1

    This is how JQ (JSON query) and Cue languages work as well

  • @Neptoid
    @Neptoid 2 місяці тому +2

    1:57 Yes! Because thinking of it as the largest and the smallest at the same time is contradictory. A choice needs to be made

  • @user-kl1tr9lh1l
    @user-kl1tr9lh1l 2 місяці тому +1

    Thank you for the interesting video! I am already enjoying thinking about this topic. I always enjoy new notation for existing concepts, and I like the idea of treating the comma as a binary operator. I would imagine the sole advantage for using bunches instead of sets is that individual elements can be regarded as single element bunches, and I think this is something worthy of emphasis. In terms of distributivity, set theoretical algebra also has similar notation like N + {7} but this does not mean the same thing as nat + 7, the latter being an extension of the same plus operator used for real numbers, rather than an entirely new operator that takes sets as inputs.
    It’s interesting in that this is almost the entire opposite to the view from category theory. Bunches seems to be aiming for the un-packaging of sets, treating a collection of numbers not as a single entity in itself, but as a mere aggregation of elements. On the other hand category theory seems to package everything into higher structures, treating even elements of sets as functions, i.e. an element of a set A is simply a function from a single element set to A. I am fascinated to see entirely opposite approaches to extending existing concepts!
    On a very minor note, at 18:54 if talking about distributivity, I would imagine nat^2 to be the bunch of every number that is a product of two natural numbers (which is just nat), not just the square numbers, like how ideals from ring theory work.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      You're right that a single element can be treated as a "singleton bunch".
      You're also right that distributing an exponent over a bunch is ambiguous. Other people have also pointed this out in the comments, and it is something that had escaped my attention when making the video. So thanks for pointing it out.
      Your remark about category theory is really interesting. In the videos about groups, I talk about "zooming out" from a group to its quotient group. I always imagine that if you keep zooming out, you end up with a single dot, and you end up in the world of categories.

  • @torb-no
    @torb-no 2 місяці тому +1

    This is really cool! Reminds me of some years ago when I just for fun tried to express logic using arithmetic, but I never knew it could be done so elegantly!
    I have a question though. With the properties of the bunch is there even a need for a set at all?
    Either way, excellent video explaining these concepts, thanks for making and sharing it!

  • @thezipcreator
    @thezipcreator 2 місяці тому +7

    what about ordered pairs/tuples? if I have (1, 2, 3), the order there actually matters, so it can't create a bunch. it seems that in this system you'd need an entirely new notation for them (or I guess use vectors?)

    • @Vaaaaadim
      @Vaaaaadim 2 місяці тому +2

      Related, how does one represent a set of these tuples? Normally if we wrote ℕ³ for example that would represent tuples of 3 naturals, but in Hehner's notation nat^3 would be the set of perfect cubes.

    • @thezipcreator
      @thezipcreator 2 місяці тому +4

      @@Vaaaaadim I would think that set theory symbols such as ℕ are still defined, just that they're equal to sets and not bunches. since the ~ operator ungroups sets, maybe you'd write ~ℕ³ for a bunch of all 3-tuples of natural numbers? but that feels awkward

    • @APaleDot
      @APaleDot 2 місяці тому +1

      Wouldn't an ordered set of values just be a list? [1, 2, 3]?

    • @Vaaaaadim
      @Vaaaaadim 2 місяці тому +1

      ​@@APaleDot Given two sets A, B how do you express in Hehner's notation a new set which is all possible ordered pairs (a,b) with a from A and b from B?
      In other words, the Cartesian product.

    • @APaleDot
      @APaleDot 2 місяці тому +1

      @@Vaaaaadim
      I don't know. I don't know Hehner's notation, I'm just another random commenter on this video.
      But in terms of programming, I think of an ordered set of values as a list, or a string.
      In that sense a set of ordered pairs is just a set of lists which each have two elements. I guess you can't use a comma ',' to separate the elements of the list because the comma is commutative, huh?

  • @alegian7934
    @alegian7934 2 місяці тому +5

    every one of these is an awesome concept! just one question: do we need to consider the order of elements for bunches? it seems like that might be useful for some reducer functions (just like the order matters in a complex for loop)

    • @jansustar4565
      @jansustar4565 2 місяці тому +1

      Probably no, since it still uses the comma operator, which is commutative.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +4

      There's a separate theory for lists, which are ordered. They use the semicolon operator ';'.

    • @alegian7934
      @alegian7934 2 місяці тому

      that is... again pretty smart haha thanks

  • @___frosty-zx
    @___frosty-zx 2 місяці тому +4

    Cool

  • @JustAnotherCommenter
    @JustAnotherCommenter 2 місяці тому +2

    I've been inspired to apply arithmetic to binary logic thanks to this video! The unity between numbers and logic is possible, though I only considered integers and imaginary numbers to compensate for the lack of commutative and associative properties.
    Inverse Law:
    T = -⊥ = 1/⊥
    ⊥ = -T = 1/T
    Rotation Law:
    T +/× T = T -/÷ ⊥ = Ti | Ti -/÷ T = Ti +/× ⊥ = T
    T +/× ⊥ = T -/÷ T = ⊥i | ⊥i -/÷ ⊥ = ⊥i +/× T = T
    ⊥ +/× T = ⊥ -/÷ ⊥ = Ti | Ti -/÷ T = Ti +/× ⊥ = ⊥
    ⊥ +/× ⊥ = ⊥ -/÷ T = ⊥i | ⊥i -/÷ ⊥ = ⊥i +/× T = ⊥
    T Arithmetic with Integers:
    T +/× n = T | T -/÷ n = T
    n +/× T = ni | ni -/÷ T = n
    n -/÷ T = -ni | -ni +/× T = n
    -n +/× T = -ni | -ni -/÷ T = -n
    -n -/÷ T = ni | ni +/× T = -n
    ⊥ Arithmetic with Integers:
    ⊥ +/× n = ⊥ | ⊥ -/÷ n = ⊥
    n +/× ⊥ = -ni | -ni -/÷ ⊥ = n
    n -/÷ ⊥ = ni | ni +/× ⊥ = n
    -n +/× ⊥ = ni | ni -/÷ ⊥ = -n
    -n -/÷ ⊥ = -ni | -ni +/× ⊥ = -n

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      I applaud your interesting experiments. Definitely check out the references in the video description, where you will find many of these laws.

  • @ryanmcmanus7273
    @ryanmcmanus7273 Місяць тому +1

    17:45 the is-element-of operator does have a direction though, so ":" does not tell you which side is the element. And this is important because sets can contain other sets.
    Though you can kinda solve this by saying it is always left-right (a.k.a. Left is in Right)

    • @AllAnglesMath
      @AllAnglesMath  Місяць тому

      You're right. Basically, the colon should be asymmetric. That would clarify the correct direction in which the expression must be read.

  • @santerisatama5409
    @santerisatama5409 Місяць тому

    Very interesting, thanks! Without LEM we get Finland: not(rain and snow) = "absolute value" of sleet, which is neither rain nor snow but has a quality of its own. Less is more: by giving up LEM we gain an increase in qualitative distinctions.
    The holistic top (i.e. not a sum of its parts) needs a distinction-generating system. That's why holism - and the transfinite, for that matter - are not coherent with LEM.

  • @GanerRL
    @GanerRL 2 місяці тому +9

    this is just, infinitely better than all current math notation ngl

    • @zokalyx
      @zokalyx 2 місяці тому

      ganer lets goo

  • @TheBrainn
    @TheBrainn 2 місяці тому +2

    first order logic is one of those things where you won't fully understand its holisticity unless you teach it to yourself, because it is so meta-ordered that it comes from nature rather than nurture

  • @qexat
    @qexat 2 місяці тому +1

    13:09 I immediately thought about an and-reduction here

  • @GurshanParmar
    @GurshanParmar 2 місяці тому +3

    wow, interesting video. It got me thinking: do we define nat² to be the bunch of squares of natural numbers by our own choice, or does it have a more natural explanation owing to other properties it has? Like, could there be, if we wanted, another way of interpreting nat² as the Cartesian product of an ordered bunch with itself? My doubt here is about the invention vs discovery of concepts. Would really appreciate it if you could share your view of it

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      Invention vs discovery is a really important distinction. We want to avoid arbitrariness and ambiguity as much as we can.
      The example of "nat squared" might indeed be one of those ambiguous cases. Exponents are often tricky. Like when you say 'f to the -1' for some function f: do you mean the inverse function, or do you mean to invert each output value? I'm not sure how Hehner deals with this.

  • @jamesriordan5461
    @jamesriordan5461 12 днів тому +1

    This can have massive implications on how we design and interact with quantum computers.

  • @annaclarafenyo8185
    @annaclarafenyo8185 2 місяці тому +3

    This "system" is an example of a "boolean valued model" over a particular example of a boolean algebra, and it is used all the time in set-theory under the name "forcing conditions". The particular poset used in this system is not particularly special, it's just the intervals in R under nesting. You can use much richer posets to actually prove interesting theorems, like the independence of the continuum hypothesis. This is not new mathematics, it essentially dates back to Cohen and Kripke in the 60s..

  • @Alguem387
    @Alguem387 2 місяці тому +5

    This looks a lot like a programming language

  • @arisweedler4703
    @arisweedler4703 Місяць тому +1

    Awesome!

  • @ayushsharma8804
    @ayushsharma8804 2 місяці тому +4

    did not get the point of the box operator, what's so different between a bunch and a set

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      That's a fair point. I posted some thoughts about this in the pinned comment at the top.

  • @ichisichify
    @ichisichify Місяць тому

    seems more like a fancy art project than a meaningful extension of both logic and arithmetic

  • @davidhand9721
    @davidhand9721 Місяць тому

    Does the bunch issue make a difference in the self-reference catastrophe in set theory? Does, for example, the set of all sets that do not contain themselves contain itself if we call some part of the picture a bunch? If it does contain itself, then it shouldn't be in itself, but if it isn't, then it should be.

  • @reesetyra902
    @reesetyra902 2 місяці тому +1

    9:11 [bottom right] If x is larger than the smaller of y or z, then x is either larger than y or larger than z.

    • @reesetyra902
      @reesetyra902 2 місяці тому

      very fun!

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      This is a great attempt, but it's not quite right yet. Remember that the down-pointing arrow stands for "and", not "or".

  • @memyselfishness
    @memyselfishness 2 місяці тому +2

    I'm not understanding the benefits of a bunch compared to a set. What does unpacking really do for us? Also, it seems like this approach is not made for readability. How can we distinguish a bunch when we're using lists?

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      Lists are ordered, bunches and sets are not. There's a separate (but similar) theory for lists and strings.

    • @noobyfromhell
      @noobyfromhell 2 місяці тому

      I read the actual axioms in the guy's book, and I strongly suspect that bunches are just classes with comma used for class unions, but honestly the whole logical system in the book is such a mess that I'm not sure about anything.

  • @PhilippMeierGermany
    @PhilippMeierGermany 2 місяці тому +2

    What is the input domain of the comma operator? Can it operate on single numbers _and_ bunches? Or do we need another (implicit) unary operator that lifts a single number into a (one-member) bunch?

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      A single number is already a singleton bunch.

    • @thefance4708
      @thefance4708 Місяць тому +1

      there's another thread where he answers that the point of distinguishing bunches from sets is that putting something in a set represents lifting an object into a new space, whereas a bunch is just a universal space where things get aggregated. So bunches of bunches are always flattened into a single bunch, whereas sets of sets form a hierarchy of sets.

  • @tropin_tropin
    @tropin_tropin Місяць тому +1

    10:15
    Wouldn't the "arithmetic equivalent" of the law of excluded middle be something like
    max{ q, -q } ≥ 0
    ?
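
    Numerically that inequality does hold for every extended real q, since max(q, -q) = |q|, and for the two boolean values (⊤ = +inf, ⊥ = -inf) the maximum is always ⊤, mirroring "q or not q". A quick check (my own sketch); whether it translates back into pure binary logic is the question raised in the reply below:

    import math, random

    for q in [math.inf, -math.inf] + [random.uniform(-100, 100) for _ in range(1000)]:
        assert max(q, -q) >= 0                    # max{q, -q} >= 0 for every q
    assert max(math.inf, -math.inf) == math.inf   # for the two boolean values, "q or not q" is top
    print("checks passed")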

    • @AllAnglesMath
      @AllAnglesMath  Місяць тому

      That's an interesting approach, but what would it look like when you translate it back into logic? We don't have an equivalent for 0 in binary logic, so this expression is only possible in arithmetic, unless I'm missing something.

  • @paulpinecone2464
    @paulpinecone2464 2 місяці тому +1

    You could avoid {} by making Set a unary operator. So:
    *s* (1,2,3) is notation for a set.
    The parens are used merely in their ordinary priority way. A bunch is just 1,2,3 or (1,2,3) for clarity. Taking the contents out of a set is just s⁻¹.
    However, comma is also used to represent ordered objects like coordinates and vectors. I would say use a different separator. Dot might be good because you are already using it for ranges. So we have:
    1.2.3..8.12
    Might want to make range 3 dots for better differentiation. Three dots are already used this way at the end of expressions to imply that they go to infinity.
    But then there's the issue of decimal points. There's probably a better way to represent that like a compact scientific notation:
    24'602 Avogadro's number
    Yes, the exponent gets bigger by one because the mantissa should be below one.

    • @christopheriman4921
      @christopheriman4921 Місяць тому

      I mean, you could use bunches to define vectors and then have a standard multiplication defined on vectors (which are just bunches), where you distribute everything out and simplify, giving a bunch that contains every product of one element from each operand. Defining the cross or dot or any other product is then as easy as defining a function that selects the numbers you want out of that standard multiplication.

    • @paulpinecone2464
      @paulpinecone2464 Місяць тому

      @@christopheriman4921 The comma operator was commutative, right? Therefore unordered. Vectors are ordered.

    • @christopheriman4921
      @christopheriman4921 Місяць тому

      @@paulpinecone2464 From my point of view you can swap around any of the numbers in a vector as long as you keep the coordinate data stored with the number as a unit.

  • @pmcate2
    @pmcate2 2 місяці тому +1

    1) The comma operation was creative, but do we gain anything by introducing bunches?
    2) I understand the + notation in context of the sum, but +f was confusing. This would just be infinity?

    • @AllAnglesMath
      @AllAnglesMath  Місяць тому

      1) Hehner's own PDF contains some good arguments for bunches.
      2) The plus operator is applied to all of the outputs of the function. If the domain of your function is the real numbers, then this obviously explodes. But if the domain is the positive integers, this would simply be a short notation for a series.
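
      Here is a rough Python sketch of what I mean, with an explicit finite domain standing in for the positive integers (my own illustration, not notation from the video):

          # The quantifier is just the operator applied to every output of the function.
          def f(n):
              return 1 / n**2

          domain = range(1, 1001)                  # finite stand-in for the positive integers

          print(sum(f(n) for n in domain))         # +f : partial sum of 1 + 1/4 + 1/9 + ...  (→ pi^2/6)
          print(max(f(n) for n in domain))         # max f = 1.0    (plays the role of "exists")
          print(min(f(n) for n in domain))         # min f = 1e-06  (plays the role of "for all")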

  • @adrianott5248
    @adrianott5248 Місяць тому

    At 14:50, the De Morgan law, I don't really understand the notation: are both the 'and' operator and the '- (or not)' operator part of the "this happens for every element in the output of the function"? Or does it matter? If you take 'and' 'not' f, is this the same as 'and' ('not' f)? 'not' f will be a sum of negative numbers, and not a function, so that shouldn't work; but in the other case, 'not' 'or' f, it seems the right interpretation is 'not' ('or' f), since the 'not' isn't distributed over the elements in the domain of the function? And f in this case I guess is just x>?
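
    If I code up my own reading of it over a finite domain (an arbitrary example function, nothing from the video), the version that checks out numerically is 'not' ('and' f) = 'or' ('not' f), i.e. one negation applied to the min equals the max of the negated outputs:

        # Numerical check of the generalized De Morgan law:  -min(f) == max(-f)
        def f(x):
            return x * x - 3 * x       # arbitrary example function

        domain = range(-5, 6)
        lhs = -min(f(x) for x in domain)        # 'not' ('and' f)
        rhs = max(-f(x) for x in domain)        # 'or' ('not' f)
        print(lhs, rhs, lhs == rhs)             # 2 2 True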

  • @XahhaTheCrimson
    @XahhaTheCrimson 2 місяці тому

    A more comprehensible explanation of bunches, in my opinion, is lazy evaluation. If you're familiar with JavaScript, it's something like a Generator. As I understand it, a set is just a realization of that notion (making it concrete). A Generator in JavaScript can only handle countable bunches, but in Eric's intention, I think uncountably infinite bunches also make sense.
    Though I'm not sure it's worth it to distinguish bunches from sets. Did he just want to separate them, like the von Neumann universe (related to the ZFC axioms), to avoid a circular dependency between the two ideas? I'm not sure.
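
    In Python terms (my own rough analogy), a countable bunch could be a lazy generator, and "making it concrete" is forcing part of it into an actual set:

        import itertools

        def nat():                      # a lazily produced, countably infinite "bunch"
            n = 0
            while True:
                yield n
                n += 1

        first_ten = set(itertools.islice(nat(), 10))   # package a finite slice into a set
        print(first_ten)                               # {0, 1, 2, ..., 9}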

  • @Hidra1isk
    @Hidra1isk 2 місяці тому

    18:55 num squared should give all products of nats, not the squares. Otherwise, since the comma is commutative:
    {1, 4} = {1, 2}^2 = {1,2} * {2,1} = {2,2} = {2}
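
    A quick sanity check of the "all products" reading, with my own toy encoding of bunches as frozensets:

        # Distributing multiplication over both bunches gives every pairwise product,
        # so (1,2) * (1,2) is the bunch 1,2,4 rather than just the squares 1,4.
        def bunch_mul(a, b):
            return frozenset(x * y for x in a for y in b)

        a = frozenset({1, 2})
        print(sorted(bunch_mul(a, a)))   # [1, 2, 4]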

  • @Neptoid
    @Neptoid 2 місяці тому

    I also love degenerate cases and ultimate reductions (the monad in philosophy is pretty hefty). It reminds me of the progression in Superliminal

  • @noobyfromhell
    @noobyfromhell 2 місяці тому +9

    You *are* aware that this is just lattice theory, right? Right?!

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      Yes, I am aware. As coincidence will have it, I'm reading a paper on Galois connections right now, because I want to use them to explain the matrix transpose in an upcoming video.

    • @noobyfromhell
      @noobyfromhell 2 місяці тому

      @@AllAnglesMath (UA-cam keeps swallowing my image links) Matrix transpose is essentially a metric concept, not an order-theoretical one. Defining a matrix transpose (as an involution (-)^\dagger \in End(V^* \otimes V)) is equivalent to defining an inner product; you can prove it by defining a musical isomorphism x \mapsto \langle id_V^\dagger x, - \rangle using some other real inner product as a crutch. Defining a consistent family of matrix transposes on the entire category of finite dimensional vector spaces over R is also equivalent to defining an inner product on each (consider the case of one of the two vector spaces being just R), or to fixing a basis in each space, or to fixing an isomorphism R^{\dim V} \to V for each space. In either case it embeds the category of R^n for all natural n as a full subcategory of the category of finite dimensional vector spaces over R, and this embedding is an equivalence of categories (defined up to an isomorphism, obviously).
      Sure, you can then introduce a flag on each R^n as span(e_1) ≤ span(e_1, e_2) ≤ ... ≤ R^n, and the musical isomorphism will then define a Galois connection, i.e. an adjunction on this posetal category, and then in a sense you've defined a Galois connection using a transpose. But why would you use a theory that deals with adjunctions when it's actually an equivalence?

    • @jacobhgoldman
      @jacobhgoldman Місяць тому +1

      ** wow what a cool video! This seems to be the same as lattice theory, I think that would make a cool follow up video! **

  • @Ryu-ix8qs
    @Ryu-ix8qs Місяць тому +1

    Very nice

  • @Chr0micx
    @Chr0micx 2 місяці тому

    If we take the comma and the curly braces as functions, and the comma is commutative as well as idempotent as shown at 16:50, what happens to ordered tuples such as (2, 2) or (1, 3)?

    • @noa_1104
      @noa_1104 2 місяці тому +2

      As explained in another comment somewhere, in this system they would be separated by ; instead: a list instead of a bunch, which is ordered.

  • @noobyfromhell
    @noobyfromhell 2 місяці тому

    BTW there’s another issue in the book: it defines nat by two axioms:
    1. 0 : nat
    2. 0, nat + 1 : nat
    This *almost* reproduces the Peano axioms, but N \cup {omega} is also a valid model (with the successor defined as 1 + n so that it's absorbed by the first infinite ordinal omega).
    It’s not fatal, but like I said, it’s sloppy, you don’t really get *the* natural numbers with the book’s axioms. You need to represent the induction principle directly to get rid of this ambiguity.
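
    You can even see the gap concretely with floating-point infinity (my own toy illustration, not from the book): ∞ absorbs the successor, so N ∪ {∞} contains 0 and is closed under n ↦ 1 + n just as well as N is, and closure alone can't tell the two models apart.

        inf = float('inf')
        print(1 + inf == inf)    # True: the extra point is a fixed point of the successor
        print(1 + 0, 1 + 1)      # the ordinary naturals still step upward as usual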

  • @imrematajz1624
    @imrematajz1624 Місяць тому

    If we can consider the comma as a binary operator which is commutative, how can we define noncommutative operations such as matrix multiplication in this notation?

  • @alessandrorossi1294
    @alessandrorossi1294 2 місяці тому +1

    Is there any natural topology on logical statements that arises out of this connection?

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому +1

      Interesting question. Does anyone have an answer or a reference?

  • @remrevo3944
    @remrevo3944 Місяць тому +1

    10:13 But doesn't
    q max (-q) = |q| (which is still true under the new system)
    imply
    q or (!q) = T
    for the special case where q is either infinity or negative infinity (true/false)?
    So it is *still* a theorem when taken in the right context.

    • @AllAnglesMath
      @AllAnglesMath  Місяць тому

      Yes, it's still correct for true & false, but not for any other numbers.

    • @remrevo3944
      @remrevo3944 Місяць тому

      @@AllAnglesMath I mean yes, but the theorem q max (-q) = |q| is more useful anyway!
      And it's kind of intuitive; infinity and any other number don't *really* have anything to do with each other. So q or !q = T being subsumed by q max -q = |q| just makes sense!
      I just find it fascinating that those two systems unify so elegantly.
      T = infinity was extremely well chosen!
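
      Just to spell it out numerically (plain Python, using +inf for true and -inf for false as in the video):

          import math
          T, F = math.inf, -math.inf

          for q in [3.0, -7.5, 0.0, T, F]:
              print(q, max(q, -q) == abs(q))       # True for every q: q max (-q) = |q|

          print(max(T, -T) == T, max(F, -F) == T)  # True True: excluded middle at q = ±infinity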

  • @PhysicsExplore
    @PhysicsExplore 2 місяці тому

    I have a very different idea that is unrelated to logic, but when I think about it, I feel it can have good applications.
    Instead of taking the full angle of the circle as 360 degrees (since it is arbitrary), read it as "circle n",
    where n is the largest prime number by which the full angle of the circle is divisible.
    For example: 360 degrees = circle 5
    2520 = circle 7
    I still have not studied analytic continuation, but yeah, extending the domain of this would be quite crazy.....

  • @whig01
    @whig01 2 місяці тому +1

    What if we want an ordered list instead of an unordered bunch? Does he recommend ; instead of , or something else?

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      That's exactly right. The ';' operator creates sequences instead of (unordered) bunches.
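
      Roughly, in Python terms (my own analogy, not Hehner's notation): a bunch behaves like a frozenset, while a ';'-built sequence behaves like a tuple.

          print(frozenset({1, 2}) == frozenset({2, 1}))   # True: bunches ignore order
          print((1, 2) == (2, 1))                         # False: sequences keep order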

    • @whig01
      @whig01 2 місяці тому

      @@AllAnglesMath Can you define this as an operator?

  • @joirnpettersen
    @joirnpettersen 2 місяці тому +1

    As cool as this is (and it is very cool), I do have a concern when it comes to infinity. We can only apply binary operators to a finite number of things, so what can we say about N={0,1,2,3,...}? This can't really be thought of as "applying the comma infinitely often". The only resolution I can think of is that some operators can take a "set" or "bunch" of members and combine them all at once, but this seems to degrade the elegance this had in the finite case. For example, we can "take the maximum" of the bunch/set of statements that n is prime, which is fine because we can say "for all n", but we can't "take the maximum" of the set of real numbers of the form 1-1/n (of course we can take the sup and get 1, but then we lose the property that the max is in the set). One way to interpret this is that all of these concepts coincide nicely for finite collections, but the reason the distinctions exist is because infinite collections don't play as nicely.

  • @januszlepionko
    @januszlepionko 2 місяці тому

    @7:15 A misprint detected: for TOP, the infinity symbol should be shown with a plus sign, but it is shown without it. (An unsigned infinity is not the same as infinity with a plus sign.)

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      Interesting. What is the difference between "infinity" and "plus infinity"?

    • @januszlepionko
      @januszlepionko 2 місяці тому

      @@AllAnglesMath Treating the real numbers as a subset of the complex numbers, on one complex plane you will have two -Inf (Re and Im), two +Inf (Re and Im), and one Inf without a sign.

  • @sobriquet
    @sobriquet 2 місяці тому

    Since nat, nat = nat, I don't get how we can express the bunch ~(N^2) as a construction of bunches. Any idea?

  • @rath60
    @rath60 Місяць тому

    I don't like the bunch/set distinction, because a set should mean nothing other than the idea that elements have been chosen. Further, the comma can only mean association, as lists are already divided by commas but may have repeating elements. Rather, it would seem that there are bunching operators such that A*^2 = { n in S such that n = a^2 for a in A } or A+7 = { n in S such that n = a+7 for a in A }, which is nice and compact while allowing us to still use sets and allowing for the common meaning of A^2 = A x A, the set of all ordered pairs (x, y) such that x in A and y in A.

  • @sucheta5379
    @sucheta5379 2 місяці тому

    Meanwhile, an average student in India studies this easily in high school. This chapter is called "mathematical reasoning" in our syllabus, and it is the easiest chapter.

  • @EatChangmyeong
    @EatChangmyeong 2 місяці тому +1

    9:23 IMO the law of the excluded middle is only a consequence of being rooted in two-valued classical logic, instead of the deeper order theory; it is not a theorem in intuitionistic logic either.

  • @codecraftercc
    @codecraftercc 2 місяці тому +1

    Parts remind me of array languages like Uiua

  • @nerdycatgamer
    @nerdycatgamer 2 місяці тому

    11:53 honestly I'm confused how to resolve this idea with the idea of bunches/comma operator introduced at the end. It would be really nice if we could unify these, like saying our functions here return a bunch or something that we can then operate between all elements with a binary operator, but this doesn't track with the distributivity of the comma operator we establish.
    If anyone has any ideas I'd love to hear them, because these two concepts are both cool and elegant, but I think it seems pretty arbitrary unless we join them. The only other idea I could think of is if this '+' operator acting on the function acts on a set to add all of its elements, and our function here is returning a set, but then it doesn't really make sense to use a plus, and we would be better off using a new symbol like sigma again.
    I guess the operator is actually acting on a function, but then it doesn't really make sense to me how the '+' operator can be a binary operator between numbers, but a unary operator on functions. Again, it makes more sense to have a separate symbol like sigma.

    • @AllAnglesMath
      @AllAnglesMath  2 місяці тому

      In many programming languages, names and symbols can be overloaded to take different input types. So in practice, using the same '+' symbol for a binary operator on numbers *and* for a unary operator on functions is not such a big problem.
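
      For example, a rough Python sketch of the kind of overloading I mean (the names and the dispatch-on-type are my own, not from the video):

          # One name, two behaviors, picked by the type of the first argument.
          def plus(x, y=None, *, domain=None):
              if callable(x):                      # unary use: fold '+' over a function's outputs
                  return sum(x(n) for n in domain)
              return x + y                         # binary use: ordinary addition

          print(plus(2, 3))                                  # 5
          print(plus(lambda n: n * n, domain=range(1, 5)))   # 1 + 4 + 9 + 16 = 30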

  • @bytesandbikes
    @bytesandbikes 28 днів тому

    This feels like it would work well with posits/unums

    • @AllAnglesMath
      @AllAnglesMath  28 днів тому +1

      What are posits and unums? They sound like creatures from a Harry Potter novel 😄

    • @bytesandbikes
      @bytesandbikes 28 днів тому

      @@AllAnglesMath They are a kinda hyperbolic floating point number system that was in the compsci news a few years ago. By John Gustafson. His website is broken, but the presentation PDF is still accessible: johngustafson.net/presentations/Unums2.0.pdf
      Although, looking back at that, infinity is unsigned... so maybe not so good!