How Branch Prediction Works in CPUs - Computerphile

  • Published 19 Dec 2024

COMMENTS • 127

  • @llamallama1509
    @llamallama1509 7 months ago +159

    I like the little anecdote at the end about the ray tracer and changing a test to gain a big speed boost.

    • @MattGodbolt
      @MattGodbolt 7 months ago +15

      Thanks! It was a real head scratcher! I've tried to post a link to where I discussed it more but it seems to have been filtered, but... if you look around I've got a talk where I go into more detail.

    • @NoNameAtAll2
      @NoNameAtAll2 7 months ago +1

      @@MattGodbolt can you give the name of the video to search?

    • @MattGodbolt
      @MattGodbolt 7 months ago +4

      @@NoNameAtAll2 I tried that with my first reply, which seemed to get removed, but... "path tracing three ways" should get it. The bit is near the end :)

    • @luketurner314
      @luketurner314 7 months ago +1

      @@MattGodbolt Is it the one with v=HG6c4Kwbv4I (that's a YouTube video ID that goes at the end of the URL)?

    • @3rdalbum
      @3rdalbum 7 months ago

      I don't write any software that's time-critical or where there aren't a million other uncontrollable slowdowns (my stuff runs on shared cloud services) but I found this anecdote really opened up my horizons and my way of thinking about programming.

  • @axelBr1
    @axelBr1 7 months ago +85

    In awe of the people who came up with such simple ideas to do branch prediction. And the people who work out how to build that logic in silicon, so that it can run in one clock tick, are gods!

    • @jeromethiel4323
      @jeromethiel4323 7 months ago +14

      A lot of this type of stuff can be laid at the feet of Cray computers. They invented a lot of this type of tech back in the 70's and 80's. The fact that this is in pretty much ALL modern CPUs is a feat of engineering.
      I may be wrong on details, but I am pretty sure Cray did groundbreaking work on branch prediction, deep pipelining, out-of-order execution, and the like.

    • @olhoTron
      @olhoTron 7 months ago +4

      Coming up with the idea is "easy", actually implementing it is the hard part

    • @monoamiga
      @monoamiga 6 months ago

      @@jeromethiel4323 Cray was an unquestionable genius. Often underappreciated IMHO.

  • @kevincozens6837
    @kevincozens6837 7 months ago +52

    Kudos to the host for tending to ask very good questions about the topic being discussed.

  • @rudiklein
    @rudiklein 7 months ago +38

    Being able to explain a complex technical subject in a way I can understand is an amazing skill.

  • @elirane85
    @elirane85 7 months ago +10

    During my CS degree we had some classes about CPU architecture and pipelines. I was always impressed by how complicated the things we take for granted actually are, and what we studied was very basic stuff, not even close to the magic that is branch prediction.

  • @aaronr.9644
    @aaronr.9644 7 months ago +8

    I've been programming for a very long time but I didn't realise how sophisticated these branch predictors could get. The idea that it can compute a simple hash in a single clock cycle and use that to capture patterns is fascinating. Now that makes me want to go look into the details of some of these open CPU designs :)

  • @JamesHarr
      @JamesHarr 5 months ago +1

    This was an amazing explanation of branch prediction. I've been in tech for more of my life than not and I've known that branch prediction was a thing, but could not fathom how it worked even after some reading online and this made it approachable. Thank you :)

  • @eloniusz
    @eloniusz 7 months ago +7

    Branch prediction is why there are a lot of algorithms that work faster on sorted data, even when the order of elements theoretically doesn't matter to the algorithm.

    • @custard131
      @custard131 7 months ago +1

      I'm sure it helps, but it's not the sole reason.
      One of the big benefits of sorted data is that it allows binary search. The best low-tech example would be something like a phone book or dictionary: you jump to the middle page, see whether the thing you're searching for is earlier or later in the data, discard the other half, then repeat. If you had a sorted list of everyone alive on Earth, it would only take 33 steps to look up anyone, whereas with an unsorted list the worst case would take 8 billion steps. Having the list sorted makes it roughly 200 million times faster, even without any fancy CPU tricks.
      There are also things with nested loops where there can be performance gains from aligning the loop with how the data is stored in memory, e.g. in graphics programming it can make a big difference whether you iterate over pixel data row by row or column by column. I guess branch prediction comes into that, though I thought it was more down to the memory/storage controllers than the CPU pipeline.

  • @baltakatei
    @baltakatei 7 months ago +19

    Side note: Branch prediction is incompatible with keeping memory secret. Disable branch prediction when handling secrets.

    • @pierreabbat6157
      @pierreabbat6157 7 months ago +2

      In goes branch prediction, out comes secret.

  • @570rm-8
    @570rm-8 3 months ago

    love this explanation - plain and simple!

  • @bjryan19
    @bjryan19 7 months ago +9

    As a software developer I'm wondering how you optimize for branch prediction when the CPU is effectively a black box. I guess you can only speculate that you're getting branches wrong, or maybe there's a CPU setting to count branch prediction hits and misses?

    • @thewhitefalcon8539
      @thewhitefalcon8539 7 months ago +17

      Modern Intel CPUs are chock-full of performance-related counters, actually.

    • @Stdvwr
      @Stdvwr 7 months ago

      VTune

    • @MattGodbolt
      @MattGodbolt 7 months ago +10

      On Linux `perf stat -- program goes here` like `perf stat -- ls -l` (or whatever). I had that cued up to demo but the conversation went in a slightly different direction :)

    • @mcg6762
      @mcg6762 7 months ago +1

      As long as your branch has some kind of pattern to it, the CPU will do decent prediction. If the branch is completely random, the CPU will miss half of the time and you are better off trying to rewrite the code to be branchless, for example by using the conditional move instruction. You can often persuade a compiler to produce branchless code by using the C/C++ ? : operator in clever ways.

    • @KydEv
      @KydEv 7 months ago

      In C++ you can add [[likely]] and [[unlikely]] attributes in the code. The compiler is then supposed to do that optimization for you, if it wants (basically).

  • @ArneChristianRosenfeldt
    @ArneChristianRosenfeldt 7 months ago +4

    IBM managed to slow down their mainframe using branch prediction. How often do you have JMP (else branch) ? DSPs just had zero overhead loop instructions similar to the one in 80186 . So at the start of the loop you have an instruction where the immediate value says from where to jump back to here. Only needs a single register, not a table. Works on inner loops.
    And then there is hyper threading, where you fill the pipeline with the lower priority thread instead.
    No need for speculation or attack vectors.
    ARM TDMI in GBA had a flag to instruct it to follow up branches. But how does it know that there is a branch before decoding? So it still needs a cache: 1 bit for each memory address to remember an up branch. At least this is well documented, and the compiler can optimize for it.
    Even with this nice prediction: why not follow both paths with a ratio. One cycle to advance this path, 3 for the other. Stop at Load / Store to avoid hacks or inconsistencies.
    PS3 showed the way: more cores, no CISC like optimization per core. Similar today with GPUs and their RISCV cores.

  • @nefex99
    @nefex99 7 months ago +1

    Very cool - great, understandable explanation!

  • @MateoPlavec
    @MateoPlavec 7 months ago +3

    I'm _predicting_ that the one-character change was from a short-circuit && to a bitwise &. The former might be compiled as two branch instructions, while the latter as only one.

    • @MattGodbolt
      @MattGodbolt 7 months ago +3

      Bingo. Well in this exact case a || to a |. And it wasn't 100% effective; sometimes the compiler still decided it was going to use two branches.

  • @henriquealrs
    @henriquealrs 7 months ago +1

    Anybody else amazed by the fact that Matt wrote the Fibonacci sequence in x86 and just knew the sizes of the instructions?

  • @scaredyfish
    @scaredyfish 7 months ago +2

    What I don’t quite understand, and this is perhaps because the metaphor breaks down, is what is the decoding robot actually doing? It takes a piece of information, and ‘decodes’ it into a different piece of information? But why is this information understood by the next robot where the original information wasn’t?
    I presume this has something to do with determining which physical circuitry actually executes the instruction, but I can’t really visualise how that happens.

    • @TheUglyGnome
      @TheUglyGnome 7 months ago +4

      The decoder can, for example, work out which bits of the instruction are a memory address, a register address, an ALU operation code, etc., and then forward those bits to the right units of the processor. In some other processor implementation, the decoder could just check the operation code and make a microcode jump to the microcode address handling that instruction.

    • @tomrollet154
      @tomrollet154 7 months ago +1

      It's hard to explain because in a modern design it works a bit differently.
      But to keep it simple: the initial piece of information is just a pack of 1s and 0s. The branch predictor is going to predict whether it's a branch, whether it needs to be taken, and where to. It doesn't even have to read the 1s and 0s to know the instruction is a branch; everything is done based on previously seen branches, using tables to track things.
      The later decode stage transforms this pack of bits into a series of actions to do. This sets up what needs to be done to execute the instruction, e.g. for an addition: where to get the two values to add, where to put the result...

    • @MattGodbolt
      @MattGodbolt 7 months ago +9

      It's a fair question. In older chips the decoding was often straightforward, often implemented as a kind of ROM called a PLA that mapped opcodes to sets of things to do, cycle by cycle.
      In modern CPUs like x86s, the instructions are very complex and difficult to decode, and they actually decode to a whole other sequence of smaller operations called "micro-ops". Maybe if we get time we'll go over that in one of these videos! There's complexity there too!

    • @kazedcat
      @kazedcat 7 months ago +4

      Instructions pack as much information into as few bits as possible. Decoders unpack this information. For a simple CPU it does something like converting the binary-coded add instruction into an "on" signal to the execution hardware that performs the add operation. In modern CPUs, instructions are very complex and need multiple steps to execute, so the decoder breaks a complex instruction down into multiple simpler operations called micro-ops. It can also do the reverse: fuse multiple instructions into one micro-op.

  • @clehaxze
    @clehaxze 7 months ago +2

    I realized this is Godbolt!!!!

  • @Illogical.
    @Illogical. 29 days ago

    I don't understand the example at the end. What was the difference between the before and after? I understand C and assembly if that helps explain it.

  • @hyperion6483
    @hyperion6483 7 months ago +1

    If we can decode that an instruction is a branch well ahead of the execution step that decides whether to take it, isn't it possible to build up a second pipeline in parallel as soon as we know the instruction is a branch that could be taken? Then, when we reach the execution step, we only need to decide whether to stay on the current pipeline or switch to the second one built in parallel.

    • @ArneChristianRosenfeldt
      @ArneChristianRosenfeldt 7 months ago +1

      Yeah. Only way to restore a wrong prediction. Anything below this does more harm than benefit.
      Still don’t want to leak speculative LOAD and STORE to the outside. Memory mapped IO?

    • @DemonixTB
      @DemonixTB 7 months ago +3

      Yes, this is called speculative execution. Instead of taking one branch, the CPU executes both and discards the one it wasn't supposed to take. In CPUs today this only happens when the CPU has no other work to do, which can be quite often when waiting for memory operations, or even just waiting for the comparison instruction to finish, which can take a while given how deeply pipelined the CPU is.

  • @SimGunther
    @SimGunther 7 months ago +2

    Wouldn't it be cool to submit in-memory programs to a RAM pipeline much like shader programs can be submitted to a GPU pipeline?
    That might be something we have to do to prevent spectre-like bugs by design.

    • @scaredyfish
      @scaredyfish 7 months ago

      Programmable branch prediction? The idea makes my head spin!

    • @paulsaulpaul
      @paulsaulpaul 7 months ago

      Unroll your loops?

  • @anata.one.1967
    @anata.one.1967 4 months ago

    What happens when the predictor makes the fetcher fetch both branches? If it sees a branch at an address that is not in the table, does that speed up the processor?

  • @vadrif-draco
    @vadrif-draco 7 months ago +1

    Is that Ray Tracing video at the end soon to be released? Can't find it via search by name

  • @felixschwarz4699
    @felixschwarz4699 5 months ago

    Does that mean, the first iteration of any loop is a bit slower than the following iterations?

  • @R.B.
    @R.B. 7 months ago

    Two thoughts, when does it make sense to just add a couple more robots to the middle of the pipeline so that you have two pipelines in effect? In this way, you aren't flushing your cache ever, you are simply deciding which pipeline assembly line to continue processing, so you are throwing away some work, but it doesn't stall the process. Second, at what point will we start to see neural networks used for branch prediction? Seems like you could start using back propagation to apply weights for recognizing patterns for branch prediction.

    • @arghnews
      @arghnews 6 months ago

      AMD's CPUs have used a neural network for branch prediction for a long time now, as you suggest (google it).

  • @uttarandas
    @uttarandas 7 months ago

    Thanks, I needed this.

  • @pmmeurcatpics
    @pmmeurcatpics 7 months ago

    The part where the branch predictor increments/decrements the probability of each branch prediction reminded me of JITs, which were also covered recently on Computerphile. Do I understand correctly that this branch prediction adjustment happens at runtime too? Or could the program be dry-run a couple of times during the compilation process to preconfigure the branch predictor somehow? It's a fascinating piece of technology either way :)

    • @MattGodbolt
      @MattGodbolt 7 months ago +5

      The branch predictor is entirely live, based on the current run and history of the program. Some older Intel chips did let compilers place branch hints, but they have been removed as... to decode the hints you need to have already fetched and decoded the instructions... by which time it's probably too late :)

    • @MattGodbolt
      @MattGodbolt 7 months ago +3

      But the ideas are similar, yes. Just even more micro-level than the tricks JITs pull

    • @pmmeurcatpics
      @pmmeurcatpics 7 months ago

      @@MattGodbolt thank you for taking the time to answer! Have been loving the series :)

  • @Stdvwr
    @Stdvwr 7 months ago

    How do you go from 2 ifs to 1 if in 1 byte?

    • @mss664
      @mss664 7 months ago +1

      && and || operators will short circuit, which means that in expression "foo() && bar()", bar will be called only when foo returns true. Replacing them with bitwise & or | will unconditionally evaluate both sides, removing the branch.
      Compilers can sometimes optimize those for you, if the operations are cheap and evaluating the right-hand side won't affect the program's behavior. For example, a branch in (x > 0 && x < 10) can be optimized out, but a branch in (p != NULL && *p == 42) can't and shouldn't be, because dereferencing a null pointer would crash the program.

  • @prakharrai1090
    @prakharrai1090 7 months ago +2

    Wonderful!

  • @musmuk5350
    @musmuk5350 7 months ago

    Excellent video thank you

  • @Zenas521
    @Zenas521 7 months ago +1

    My take away:
    Branch Prediction: When I see this, I will give you that, noted.

  • @custard131
    @custard131 7 months ago

    I seem to have missed the original, but this guy seems great at explaining CPU stuff.
    Any chance of a further video about how the Spectre class of vulnerabilities fits into this? (My limited understanding is there are a few more things going on in between, but that seems like the extreme example of branch prediction going wrong.)

  • @Roxor128
    @Roxor128 7 months ago

    NOP isn't strictly doing nothing, it does something that _changes_ nothing. On x86, NOP is equivalent to "XCHG AX, AX", which is just swapping register AX with itself. No change, but still doing something. 8 opcodes are used for instructions that swap one of the general-purpose registers with AX, one of which just happens to correspond to using AX as the nominated register, and which gets the name NOP instead of what it actually does.

  • @prettypic444
    @prettypic444 12 hours ago

    “Basement full of nonsense” would be a great band name

  • @tambourine_man
    @tambourine_man 7 months ago

    I wanna know about that black screen in the background showing followers, stock, etc. That looks like a cool project

    • @MattGodbolt
      @MattGodbolt 7 months ago +1

      It's a Tidbyt showing some standard things plus some website stats

  • @kenjinks5465
    @kenjinks5465 7 months ago

    I recall simple memories like this used in artificial life in the 90s to find apples around trees...Animat? MIT Artificial Life publication

  • @OzeCovers
    @OzeCovers 7 months ago +1

    Couldn’t a neural network be implemented for this?
    Edit: Turns out it can be: neural branch prediction

    • @kazedcat
      @kazedcat 7 months ago +2

      Yes it can. AMD is using perceptrons as fast predictors in their Zen processors. But the misprediction rates are high, so they also supplement it with a slower but more accurate predictor.

  • @moritzmayer9436
    @moritzmayer9436 7 months ago +1

    Pipelining is hard stuff, but very well explained. 😊

  • @aidanthompson5053
    @aidanthompson5053 7 months ago

    2:03

  • @photon2724
    @photon2724 7 months ago

    They have basically figured out how to run a reinforcement-learning prediction model in a SINGLE tick!

    • @ArneChristianRosenfeldt
      @ArneChristianRosenfeldt 7 months ago

      But probably this thing again is split up into 3 pipeline stages for some reason. Like, look at MIPS and tell me how register based instructions need more than 3 stages! MIPS says: LOAD needs exactly two cycles and two stages more. This is obviously not correct if cache is involved.

  • @deepak.rocks.
    @deepak.rocks. 7 months ago

    Nice 👍

  • @sophiamarchildon3998
    @sophiamarchildon3998 6 months ago

    11:00 That's currying.

  • @ivonakis
    @ivonakis 7 months ago

    Thank you - it's just a little less dark magic.

  • @Lion_McLionhead
    @Lion_McLionhead 7 months ago

    Figured they always simultaneously executed both branches until something wrote to memory or the branch was fully known.

    • @3rdalbum
      @3rdalbum 7 months ago

      Schrodinger's CPU

    • @MattGodbolt
      @MattGodbolt 7 months ago

      Given there's usually a branch every 4 to 6 instructions and the pipeline can be tens of instructions long, it quickly gets out of hand: each branch would bifurcate again and again...it's better (currently!) to guess and commit to the guess

  • @yuehuang3419
    @yuehuang3419 7 months ago

    It is just as likely to be off to the left, right?

  • @authentic6825
    @authentic6825 7 months ago

    Godbolt!

  • @todayonthebench
    @todayonthebench 7 months ago

    Branch prediction is a thing I have started considering a bit of a relic of its time.
    I suspect it will be gone in the near future, since it isn't actually useful in practice since the introduction of out-of-order execution.
    (I also feel that this comment is exceptionally short, and only people who have thoroughly studied out-of-order execution will catch my drift. Just decode both sides; execution can't keep up with the decoder, so interleaving it for a few tens of cycles is meaningless as far as the instruction queue/buffer is concerned. One won't get bubbles during this process, and if one does, then lengthen the queue/buffer to give execution more scraps to mull over while the decoder gets ahead again.)
    Now, if one doesn't use out-of-order execution and has a lengthy pipeline, then yes, prediction is very useful. (Unless one also cares about constant time, in which case prediction and out-of-order execution are both one's nemesis.)

    • @MattGodbolt
      @MattGodbolt 7 months ago +4

      But out of order execution pretty much relies 100% on accurate branch prediction! I hope to cover that (and indeed the reason I've done BP is to lay the groundwork for future videos that cover OoO)

  • @ryan-heath
    @ryan-heath 7 months ago +2

    I think I missed how it is known the prediction failed.
    Who is keeping tabs on the predictor? 😅

    • @thewhitefalcon8539
      @thewhitefalcon8539 7 months ago +6

      When the execution step at the end of the line processes the branch instruction properly, it compares the proper answer to the prediction. If they don't match then it pulls the horn and dumps the conveyor belt same as before.

    • @ryan-heath
      @ryan-heath 7 months ago +1

      @@thewhitefalcon8539 yes, I have seen the vid, but it still doesn't click.
      The predictor can give the wrong address to go to based on previous behavior. Is code being executed/evaluated before it is really executed? If you catch my drift 😅
      The first 100 times it predicted right. But on the 101st, its prediction is wrong.
      Which address is being executed at that specific time?

    • @MNbenMN
      @MNbenMN 7 months ago +2

      @@ryan-heath I don't think of it as executing an address directly, ever; it's executing whatever is in the pipeline (presuming the pipeline is loaded correctly). The steps are abstracted so the CPU can proceed faster from the cached instructions in the pipeline, not pulling from an addressed memory location, which would take longer to pull than it does to execute, IIRC. It is the jump instruction being executed that reveals whether the pipeline has loaded the correct prediction. In the infinite-loop example it can't be predicted wrong after 100 loops, so that example doesn't directly address this; but if it were a conditional branch operation instead of an unconditional jump, it would be the execution of the conditional branch that reveals whether the prediction is correct.

    • @ryan-heath
      @ryan-heath 7 months ago +1

      @@MNbenMN hmm, I think I get what you are saying.
      So the cache contains the instruction and the address it was loaded from.
      The branch instruction can now check whether the needed address is already loaded in the cache. If it is not, the prediction was wrong.

    • @MNbenMN
      @MNbenMN 7 months ago +1

      @@ryan-heath That sounds about right for the extent of the explanation in this video, as far as I understand it. However, modern implementations of branch prediction and caching are more sophisticated/complex, with parallel threads, to the point of unintentionally introducing Spectre exploit vulnerabilities, and I am no expert on CPU architecture at that level of detail.

  • @zxuiji
    @zxuiji 7 months ago

    That ray trace thing is better done with a collision map though? You're already drawing every object into 3d space, just note the id of a triangle in a collision map for it and have the ray lookup the cells directly. There's no comparing of "is it to the right or left", it's just "What do I load here?" where the default id (0) just loads a value of no effect against the light.

  • @KipIngram
    @KipIngram 7 months ago

    I don't like your example - the predictive robot should be able to recognize an UNCONDITIONAL jump. I feel like that should be within the capabilities of a fetch unit. Unconditional calls as well. I understand calls raise some delicate issues, but after all, the fetch unit is the one that knows what the return address is going to be. The execute unit shouldn't have any awareness at all of where in memory the instructions its executing have come from. In a properly "clean" design that would mean that the fetch unit would "own" the return stack. Modern software strategies make that problematic - just one example of how we haven't followed our best possible path. We really shouldn't be mingling "fetch relevant" and "execution relevant" information in a single data structure.

  • @muhammadsiddiqui2244
    @muhammadsiddiqui2244 1 month ago

    The sound of the markers is "awesome"

  • @MoonCrab00
    @MoonCrab00 7 months ago

    If a human could read the matrix like Neo he would be the closest.

  • @bobhadababyitsaboy5765
    @bobhadababyitsaboy5765 2 months ago

    here after windows update AMD branch prediction optimizations

  • @BooleanDisorder
    @BooleanDisorder 7 months ago +3

    I'm interested in so many odd subjects. 😢

  • @saurabhjha8733
    @saurabhjha8733 2 months ago

    First robot has sharingan

  • @tiagotiagot
    @tiagotiagot 7 months ago

    I wonder how many years until this task is done by a built-in LLM-like predictor that is training in real time or one/few-shotting it....

    • @orlandomoreno6168
      @orlandomoreno6168 7 months ago

      LLM is overkill. You can embed a NN and do backpropagation / Hebb's rule in hardware.

    • @tiagotiagot
      @tiagotiagot 7 months ago

      @@orlandomoreno6168 Next-Token-Prediction seems like the perfect skill for this task; at the speed things have been progressing, it should be a matter of years at most before LLMs can predict CPU operations faster than CPUs can run natively. I forgot which one it was, but recently one of the normal LLMs trained on human language was shown to be able to learn machine code from in-context demonstrations and demonstrated the ability to replicate the behavior of a Turing machine; imagine what one trained specifically on CPU operations running on specialized ASIC might achieve in a few years.
      edit: I found it I think, it was Claude 3 Opus

  • @The_Pariah
    @The_Pariah 7 months ago

    Not a great video...
    I gave up on it before halfway.
    I just don't like how this guy is trying to convey his message.

  • @el_es
    @el_es 7 months ago

    @JamesSharman ;)