Sheafification of G
Kan Academy: Introduction to Limits
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/GSheaf/. You'll also get 20% off an annual premium subscription.
This video gives a (gentle?) example-driven introduction to limits. Pacing might be fast, but that's because you have a pause button ;)
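As a taste of the kind of limit covered below (standard definitions, not quoted from the video): given functions f : A → C and g : B → C of sets, the fibre product is A ×_C B = { (a, b) ∈ A × B : f(a) = g(b) }, together with its two projections to A and B; any set mapping compatibly to A and B factors uniquely through it. The Cartesian product and the equaliser in the chapter list are the analogous constructions for a diagram with no arrows and for a parallel pair of arrows, respectively.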
__________
Timestamps:
00:00 - Introduction
00:47 - Getting started
01:22 - Sets
02:07 - Meaning through functions
03:39 - Elements as functions
04:10 - Cartesian product
05:13 - Two-of-a-kinds
06:12 - Fibre product
06:38 - Two-of-a-kinds, for real this time
07:47 - Equaliser
09:34 - Vector spaces
10:49 - Elements as linear transformations
11:53 - Cartesian product of vector spaces
12:21 - Kernel of a linear transformation
12:54 - Something more complex
14:34 - Elements as chain maps
15:12 - Fibre product of chain complexes
16:16 - Universal properties
18:39 - Abstract shapes
19:06 - Constructing a diagram
20:11 - Cone of a diagram
21:10 - Limit cone
22:01 - Thx 4 watching
22:21 - Existential fine-print
23:18 - Building blocks of limits
24:06 - Recipe for limits from building blocks
25:42 - Merci d'avoir regardé ! (Thanks for watching!)
__________
This video was sponsored by Brilliant.
Views: 9,147

Videos

One second to compute the largest Fibonacci number I can
300K views · 1 month ago
Most of us are familiar with the Fibonacci sequence. What's the largest Fibonacci number you can compute in 1 second? I'm not setting any world records here; I don't own a supercomputer. You can criticise my code here: github.com/GSheaf/Fibsonicci
Addenda: At 7:59, the e_{01}s in the bottom row are incorrect... [in my defense, the Fibonacci transition matrix is symmetric]. Thanks @andykhang404...
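For reference, the transition matrix in question (a standard fact, stated here for context):

    [ 1 1 ]^n   =   [ F(n+1)  F(n)   ]        for n >= 1,
    [ 1 0 ]         [ F(n)    F(n-1) ]

so the matrix is indeed symmetric, and squaring it doubles the index, which is what makes the fast matrix-power approaches possible.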
2Fast2Finite: Breaking the natural speed limit of finite numbers
22K views · 2 months ago
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/GSheaf/. You'll also get 20% off an annual premium subscription. In a previous video, we used infinite ordinals to prove that certain finite number sequences called Goodstein sequences were necessarily finite. Now, let's take this one step further and derive a formula for computing the precise length of these...
Solving a finite number problem using infinities
15K views · 3 months ago
Here's a problem about sequences of finite (natural) numbers, subject to a simple finite rule, and all of these sequences have finite length... but we can't use the elementary language of finite numbers to prove this! Recommended prerequisite: ua-cam.com/video/dFsa4VeZ0cU/v-deo.html
Addenda: (4:07) This argument might seem like a complete proof, but it's incomplete! Pure base expansions of a nu...
Infinite numbers have only finitely many (nonzero) digits
21K views · 3 months ago
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/GSheaf/. You'll also get 20% off an annual premium subscription. Finite numbers can be represented with finite strings of (decimal) digits, but what happens when we try to imitate this representation in the world of infinite numbers? In the world of ordinals, this can be done, and it turns out that there is a...
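The representation alluded to here is presumably the Cantor normal form (a standard fact, not a quote from the video): every nonzero ordinal α can be written uniquely as α = ω^(β1)·c1 + ω^(β2)·c2 + ... + ω^(βk)·ck with β1 > β2 > ... > βk and finite coefficients c1, ..., ck > 0, i.e. with only finitely many nonzero "digits" in base ω.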
What is a Comonad? - Comath and Mputer Science
7K views · 3 months ago
Despite being formally dual to monads, comonads don't seem to be "all the rage" like monads are. Just because you get 'em by reversing some arrows doesn't mean that comonads aren't independently interesting! Suggested prerequisite: ua-cam.com/video/roP_HC7tiXw/v-deo.html
Errata: At 2:14 (definitely not a mistake, I swear), the volume is 2-dimensional (thanks @drdca8263). At 5:21, the blue "extend...
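For readers who want the interface in code, here is a minimal Haskell sketch of the standard comonad type class (the formulation used by the comonad package, not necessarily the exact presentation in the video):

    class Functor w => Comonad w where
      extract :: w a -> a                  -- dual to return
      extend  :: (w a -> b) -> w a -> w b  -- dual to (=<<)

    -- The "environment" comonad: a value bundled with a read-only context.
    instance Comonad ((,) e) where
      extract (_, a)    = a
      extend f w@(e, _) = (e, f w)

    -- extend (\(n, c) -> replicate n c) (3, 'a')  ==  (3, "aaa")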
(Provably) Unprovable and Undisprovable... How??
16K views · 4 months ago
No matter how hard we try to axiomatise mathematics, there will always be strong, independent propositions that don't need no proofs... but how do we *show* that a proposition can't be proven nor disproven? Timestamps: 00:00 - Motivation(al) 01:14 - What is logical independence? 02:47 - An axiomatic foundation of "integers" 04:45 - A provable proposition 05:36 - An unprovable proposition 06:29 ...
Can programmers do math? What is a real number, really?
11K views · 4 months ago
Sure, integer arithmetic on a computer may lead to overflows, but this behaviour is manageable and easy to predict... why do programmers settle for the wacky artefacts of floating-point arithmetic? Timestamps: 00:00 - Introduction 00:36 - What is a real number? 02:02 - Why bother with infinite precision? 02:51 - Implementing real numbers 03:37 - Technical interview 04:04 - Naïve implementation ...
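One common constructive representation of a real number (my own illustration under stated assumptions, not necessarily the implementation in the video) is a function that, for each requested precision n, returns a rational within 2^(-n) of the value; arithmetic then just asks its inputs for a little more precision. A Haskell sketch:

    import Data.Ratio ((%))

    -- x n must be within 2^(-n) of the real number being represented.
    type CReal = Int -> Rational

    fromRat :: Rational -> CReal
    fromRat q = const q

    addR :: CReal -> CReal -> CReal
    addR x y n = x (n + 1) + y (n + 1)  -- two errors of 2^(-(n+1)) sum to at most 2^(-n)

    -- ln 2 via the alternating series 1 - 1/2 + 1/3 - ...; truncating after m terms
    -- leaves an error below 1/(m+1), so 2^(n+1) terms suffice for precision 2^(-n).
    ln2 :: CReal
    ln2 n = sum [ (-1) ^ (k + 1) % k | k <- [1 .. 2 ^ (n + 1)] ]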
Can Mathematicians Code? The Intermediate Value Theorem
12K views · 4 months ago
The IVT is introduced in every first-year differential calculus course, and gives a way of proving the existence of solutions to various equations... but does it say anything about how we can algorithmically find such a solution? Mathematicians don't program, they prove... but can we extract algorithms from proofs? Timestamps: 00:00 - I want to apologise 00:43 - What is the IVT? 01:52 - Element...
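The usual algorithmic reading of the (approximate) IVT is bisection; the sketch below is my own illustration of that idea, not code from the video. Given f with f(a) <= 0 <= f(b), it halves the interval until it is shorter than eps, always keeping a sign change inside (deciding the sign exactly is the constructively delicate part, hence "approximate"):

    bisect :: (Double -> Double) -> Double -> Double -> Double -> Double
    bisect f a b eps
      | b - a < eps = m                     -- interval small enough: report the midpoint
      | f m <= 0    = bisect f m b eps      -- a sign change now lives in [m, b]
      | otherwise   = bisect f a m eps      -- a sign change lives in [a, m]
      where m = (a + b) / 2

    -- bisect (\x -> x * x - 2) 0 2 1e-12 approximates sqrt 2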
A shallow grip on neural networks (What is the "universal approximation theorem"?)
7K views · 4 months ago
The "universal approximation theorem" is a catch-all term for a bunch of theorems regarding the ability of the class of neural networks to approximate arbitrary continuous functions. How exactly (or approximately) can we go about doing so? Fortunately, the proof of one of the earliest versions of this theorem comes with an "algorithm" (more or less) for approximating a given continuous function...
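For concreteness, the class of functions appearing in one of the earliest versions of the theorem is finite sums of sigmoid units (one hidden layer); the parameters below are illustrative and not taken from the video:

    sigmoid :: Double -> Double
    sigmoid z = 1 / (1 + exp (negate z))

    -- Each hidden unit is (outputWeight, inputWeight, bias); the network computes
    -- the finite sum  sum_i c_i * sigmoid (w_i * x + b_i).
    network :: [(Double, Double, Double)] -> Double -> Double
    network units x = sum [ c * sigmoid (w * x + b) | (c, w, b) <- units ]

    -- A steep sigmoid approximates a step: network [(1, 50, -25)] is close to the
    -- indicator of x > 0.5, and differences of such steps build "bumps".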
What is a Monad? - Math vs Computer Science
22K views · 4 months ago
Monads look pretty different in math vs in programming... what exactly are they? (Don't say they're just monoids in the category of endofunctors... I mean it.) Timestamps: 00:00 - Introduction 00:40 - What is a monad? 01:48 - What is a monad to a functional programmer? 03:28 - The state monad 04:39 - The maybe monad 05:22 - Recipe for a monad 06:21 - What is a monad to a category theorist? 07:...
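As a concrete anchor for the "maybe monad" chapter, here is the textbook Haskell example (my own minimal illustration, using only the Prelude):

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    chained :: Maybe Int
    chained = do
      a <- safeDiv 100 5   -- Just 20
      b <- safeDiv a 0     -- Nothing: the remaining steps are skipped
      safeDiv b 2

    -- chained == Nothing; the monad threads the "might fail" effect for us.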

COMMENTS

  • @pepsalt · 10 hours ago

    727 wysi

  • @ryandupuis5860 · 11 hours ago

    You sound like you're talking through clenched teeth.

  • @TH-tp4pn · 15 hours ago

    The reason I opened this video was to check my understanding of limits after graduating from university. And the reason I finished it is that I understood none of it but enjoyed the show

  • @JR13751 · 19 hours ago

    5:54 But the limit of something like {5+1, 5+2, 5+3, ...} is the same as the limit of {1, 2, 3, 4, ...}.

    • @SheafificationOfG · 17 hours ago

      Yes, that's why 1 + omega is not the same as omega + 1.
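      In more detail (standard ordinal arithmetic, spelled out for reference): 1 + ω = sup_n (1 + n) = ω, because the extra 1 at the front is absorbed into the limit, whereas ω + 1 is the successor of ω and sits strictly above it. Ordinal addition is continuous in the right argument but not in the left, which is exactly why the two orders differ.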

  • @nohackse1i403 · 19 hours ago

    Yea, I understood none of this, I watched it though…

  • @jacobtheranga · 23 hours ago

    I want to see how efficiently I can make the algorithm by writing it as optimised as physically possible in x86-64 assembly.

  • @darkfllame · 1 day ago

    haha, 128 bits integer go brrr

  • @legendgames128 · 1 day ago

    I learned about this while watching a guide for "Please, Don't Touch Anything" of all places lmao

  • @winterfox3572 · 3 days ago

    i saw it

  • @df-163 · 3 days ago

    Coming next: Kan't Academy, where we discuss the spectral theorem for commutative C*-algebras

  • @gabvmon · 5 days ago

    cool

  • @wahidislamlinad · 6 days ago

    Yours is the only channel that doesn't come up in my feed; still, I intentionally search it up regularly just so I don't miss a video.

    • @SheafificationOfG · 6 days ago

      Damn, UA-cam out there shadowbanning me or smt?

    • @wahidislamlinad · 4 days ago

      @@SheafificationOfG If you're getting ad revenue, then most probably not. I know having a low bit rate in a video deems it to be low quality, so the "holy algorithm" tends to push the video less, but I'm not sure if audio has the same effect. Although, personally speaking, I feel very pleased hearing the fan noise in the background. It's quite soothing tbh.

  • @ZoDoneRightNow · 6 days ago

    My immediate guess at how you would optimise the bad recursive version was to use a tail recursive algorithm.

  • @akirakato1293 · 8 days ago

    Using the closed form of the Fibonacci recurrence was my first thought, and you implemented it using Binet's formula, which was well done. Upon doing some research, I came across the doubling recurrence relation, which would be O(log N) time and space complexity; have you tested how far that gets in 1 second?

    • @SheafificationOfG · 8 days ago

      Doubling recurrence is still bounded by your choice of multiplication algorithm. It does perform O(log n) *steps*, but that's no different (asymptotically) than the other algorithms presented in the video. If you use FFT, the overall performance is still O(n log n).
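      For reference, a minimal Haskell sketch of that fast-doubling recurrence (my own illustration; as noted above, big-integer multiplication still dominates the cost):

          -- F(2k) = F(k) * (2*F(k+1) - F(k));  F(2k+1) = F(k)^2 + F(k+1)^2
          fibPair :: Integer -> (Integer, Integer)   -- returns (F(n), F(n+1))
          fibPair 0 = (0, 1)
          fibPair n = if even n then (c, d) else (d, c + d)
            where
              (a, b) = fibPair (n `div` 2)
              c      = a * (2 * b - a)
              d      = a * a + b * b

          fib :: Integer -> Integer
          fib = fst . fibPair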

  • @DylanEdd_1 · 8 days ago

    watching videos like these makes me realise how bad I truly am with comp sci/algorithms :D time to hit the books as my brain promptly exploded when the moon runes were shown.

  • @joaomanoellima5947 · 9 days ago

    Really interesting topic. I would prefer if the color coding on the graph remained consistent between runs.

  • @ai_serf · 10 days ago

    After learning a bunch of mathematical logic and getting into Haskell and lambda calculus and type theory, this whole thing now makes a lot of sense. (I was an expert C++ programmer, so I'm familiar with impure functions, or functions that obfuscate their state.)

  • @n3ppy632 · 10 days ago

    5:29 😭🙏☠️

  • @TRD6932 · 12 days ago

    I fucking love watching videos that delve into topics that I clearly don't/shouldn't understand. I don't even know how this crept into my recommended, but I love this

  • @2Sor2Fig · 12 days ago

    Mhmm, you hit a spot in my soul I never knew existed. All I have is this sub and this like. The love I give you freely. This was a wonderful experience.

  • @jamesbeagin9068 · 13 days ago

    I watched this about when it came out and was thinking about it for a while. In English class one day, I drew a chart that had a few low Fibonacci numbers, each arranged diagonally to the previous and above the one two before it. I realized that each F(n) is equal to the sum of two lesser consecutive Fibonacci numbers, each multiplied by another set of consecutive Fibonacci numbers, where the larger two are grouped and the lesser two are grouped. In the context of this video, if F(0) = 0, then (F(n))^2 + (F(n-1))^2 = F(2n-1). 8^2 + 13^2 = 233.

    From the matrix part of the video on, I didn't have a good idea of what all of that math actually signifies, so if it's this, then okay I guess. But I would think that this algorithm would run faster than O(n log n), and if it's not necessary to compute all of the numbers along the way, I would expect it to be very efficient. In pseudocode:

        OldSmall = 0
        OldMedium = 1
        OldBig = 1
        StartTime = time
        TimeAlotted = 1000
        loop(
            NewSmall = OldSmall^2 + OldMedium^2
            NewBig = OldMedium^2 + OldBig^2
            NewMedium = NewBig - NewSmall
            if( time - StartTime > TimeAlotted ){ output( OldBig ) }
            OldSmall = NewSmall
            OldMedium = NewMedium
            OldBig = NewBig
        )

    Looking at it, the computation for computing F(n) would be approximately 8x (4/1 + 4/2 + 4/4 ...) squarings of a ~((n log phi)/2)-digit number, assuming addition and moving numbers around are trivial operations compared to multiplication. Wikipedia says that multiplying two n-digit numbers is O(n log n), so I guess it didn't get anywhere.

    Watching back, I think I understand the matrix section of the video more. He was doing something similar to this, but with a matrix instead of integers (which I still don't understand), and then just optimising the multiplication (until it becomes O(n log n)) (which I also don't understand!). Since both ways have the same rough complexity, which one is actually faster?

    Right before I comment this: that 8x can be improved to 6x, since OldMedium^2 is computed twice.

    • @SheafificationOfG · 8 days ago

      You seem to have already arrived at this conclusion, but just to confirm, your algorithm is indeed asymptotically limited by the multiplication algorithm, as with the algorithms I presented. That being said, there are constant-factor performance gains to be had with this, if well-implemented, like you mention! Wikipedia saying multiplication is O(n log n) alludes to the FFT approach I was talking about (though, in reality, a simple FFT approach only gets you O(n log n log log n); it takes additional cleverness to shave off that extra factor so you get O(n log n) overall).

  • @nachocheese5311 · 13 days ago

    I never understand anything but still watch these vids, makes me feel smart 😂

  • @Henrix1998 · 14 days ago

    Are you calculating all Fibonacci numbers up to the largest one, or just the highest value, in a second?

    • @SheafificationOfG · 14 days ago

      I'm just computing the highest value. (In the graph, each index is computed "from scratch" to show the growth rate.)

  • @AlisterCountel · 15 days ago

    One thing I'm still a bit confused about is how this all plays with the law of excluded middle. If we take it as an axiom, then these statements have a truth value. Should we think of them as being simultaneously true and false (until we create a model where we have chosen a specific value)?

  • @CoenBijpost · 15 days ago

    Law of naming: Mathematical discoveries are named after the first person AFTER Gauss to have discovered them 😂

  • @reshift2245 · 16 days ago

    Of course, in my ignorance, I went into a video on computer science not expecting the Fourier transform

  • @badatlosingvideogames1130 · 16 days ago

    Why is the X axis not the time? Just curious

    • @SheafificationOfG · 15 days ago

      Because the independent variable is the Fibonacci index, and time is the result.

    • @badatlosingvideogames1130 · 14 days ago

      @@SheafificationOfG Ohhhh ok, thanks! I was always taught that time should be on the X axis, but never why it's there

  • @OmarShukurov · 16 days ago

    incredible work. watched it unpaused

  • @Yusuketh443 · 16 days ago

    hi :3 UwU

  • @ZachariahSchwab · 17 days ago

    Can't you use 64-bit integers to store just the mantissa? Alternatively, why not use the Kahan summation algorithm? Both of those should allow you to traverse many more orders of magnitude with the gold medalist.

  • @nose766 · 17 days ago

    As a physicist (person who can't math or program), this is lovely. I understood like 75% of each part

  • @LaugeHeiberg · 17 days ago

    6th category 16 yo, idk what I'm doing here, never even heard of a monad, might be able to waste some time

  • @energy-tunes · 17 days ago

    Idk man ill just write backend code in c# wfh and earn six figs but you have fun with this alright

  • @energy-tunes · 17 days ago

    Brah this linear algebra thing is too hard, I immediately thought of Binet and that's it

  • @smorollcancel · 18 days ago

    I have this small (but prob very improvable) function in Haskell that calculated a 10^(300,000) number in 3 seconds. Might be interesting still because this uses tail recursion (+ compiler optimization) and is O(n) as far as I'm aware.

        fibTail n = aux n 0 1
          where
            aux n !a !b
              | n == 1    = b
              | otherwise = aux (n - 1) b (a + b)

  • @cmilkau · 18 days ago

    The last proof only works because of well-ordering of ordinals, which prohibits infinite descending sequences. Without that, this kind of induction can loop back on itself to infinite depth, and while it would still prove the existence of an expansion, it would not prove its finiteness.

  • @rohandhar967 · 19 days ago

    The “linear” Fibonacci’s runtime should really be N*log(N) and not N^2. Adding two integers is O(log(N)) as the number of digits is of the order log N, not N.

    • @SheafificationOfG · 19 days ago

      As mentioned in the pinned comment, the Nth Fibonacci number has O(N) digits.
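      Spelling that out (a standard estimate, added for reference): F(N) ≈ φ^N / √5, so its decimal length is roughly N·log10(φ) ≈ 0.209·N digits, which is Θ(N). Adding two such numbers therefore costs O(N) bit operations per step, giving O(N^2) for the naive linear scheme rather than O(N log N).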

    • @rohandhar967 · 17 days ago

      @@SheafificationOfG Ah, great explanation in that case. Lovely video :)

  • @andrefreitas9936 · 19 days ago

    good content

  • @piotrkawaek6640 · 19 days ago

    Great video, but wouldn't it be better to do the FFT on the complex numbers symbolically, instead of doing approximations?

    • @SheafificationOfG · 19 days ago

      Good question, but symbolically juggling sums of complex exponentials is a lot more *complex* than it sounds, and wouldn't be a source of optimisation. This is why people either work with fixed-precision (dependent on input size) arithmetic, or the number-theoretic transform.

  • @hezuikn · 19 days ago

    got to fib iteration 530000 personally

  • @BharmaArc · 21 days ago

    This video had me smiling from the math and references all the way through, loved it :^)

  • @senlim8461 · 22 days ago

    Appreciate the joke where life lesson numbering starts at 0 :D

  • @Vodboi · 23 days ago

    25:05 I just love when sentences suddenly begin consisting of very cromulent words.

  • @yaetrna9740 · 23 days ago

    Wysi

  • @user-ul6pt9vi5q · 23 days ago

    Let's consider the vector space of all sequences satisfying the Fibonacci recurrence (which obviously has dimension 2), and the operator V of left shift by 1 (forgetting the first element). This is obviously a linear operator, satisfying the characteristic polynomial equation V^2 - V - 1 = 0. So computing the n-th Fibonacci number reduces to computing V^n and applying it to the vector (F_0, F_1) to get (F_n, F_(n+1)), and all you need to do is compute x^n modulo x^2 - x - 1: if it is represented by the class ax + b, then a = F_n is your answer.

    It may not give a performance boost for the Fibonacci sequence, but it is really important for higher-degree recurrences. For example, if you have a recurrence of degree d and you want to compute the n-th term, assuming all arithmetic takes constant time (i.e. over a finite field), then the matrix approach with fast powering gives O(d^3 log n), and surely no better than O(d^2 log n), while this approach (computing x^n modulo the characteristic polynomial) gives O(d log d log n) using fast polynomial multiplication.

    Moral: don't represent linear operators in coordinates unless you really need to. Choosing a basis is a bad option.
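    A minimal Haskell sketch of that approach for the Fibonacci case (my own illustration; the names are made up), representing the class a·x + b as the pair (a, b) and reducing modulo x^2 - x - 1:

        -- (a1*x + b1)(a2*x + b2) with x^2 replaced by x + 1
        mulMod :: (Integer, Integer) -> (Integer, Integer) -> (Integer, Integer)
        mulMod (a1, b1) (a2, b2) = (a1*a2 + a1*b2 + a2*b1, a1*a2 + b1*b2)

        powMod :: (Integer, Integer) -> Integer -> (Integer, Integer)
        powMod _ 0 = (0, 1)                      -- the constant polynomial 1
        powMod p n
          | even n    = mulMod h h
          | otherwise = mulMod p (mulMod h h)
          where h = powMod p (n `div` 2)

        -- x^n = F(n)*x + F(n-1) in Z[x]/(x^2 - x - 1)
        fibViaCharpoly :: Integer -> Integer
        fibViaCharpoly n = fst (powMod (1, 0) n)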

  • @lpsyduck · 23 days ago

    17:27 did you seriously fucking put wysi in the tag because of the 727? I can't escape osu even when I'm trying to learn about algorithms??

    • @Lance0 · 2 days ago

      they did 💻👈

  • @mxunin5074 · 23 days ago

    University course class recordings as UA-cam content... Gen Z might be the most educated generation.

  • @CMan185 · 24 days ago

    5:30 say that again

  • @ferenccseh4037 · 24 days ago

    Now run it in parallel bc that makes *everything* faster. No exceptions.