Binary to decimal can’t be that hard, right?

  • Published 30 May 2024
  • More 6502: eater.net/6502
    Support these videos on Patreon: / beneater or eater.net/support for other ways to support.
    0:00 Introduction
    2:09 Separating digits by dividing by 10
    3:24 Dividing numbers in binary by hand
    9:26 An algorithm for binary division
    19:06 Implementing the algorithm in 6502 assembly
    34:14 Running the program
    34:45 Reversing the digits
    41:38 It works!
    ------------------
    Social media:
    Website: www.eater.net
    Twitter: / ben_eater
    Patreon: / beneater
    Reddit: / beneater
    Special thanks to these supporters for making this video possible:
    Adrien Friggeri, Alexander Wendland, Andrew Vauter, Anson VanDoren, Armin Brauns, Ben Dyson, Ben Kamens, Ben Williams, Bill Cooksey, Binh Tran, Bouke Groenescheij, Bradley Pirtle, Bryan Brickman, Carlos Ambrozak, Christopher Blackmon, Cole Johnson, Daniel Jeppsson, Daniel Sackett, Daniel Tang, Dave Burley, Dave Walter, David Brown, David Clark, David House, David Sastre Medina, David Turnbull, David Turner, Dean Winger, Dmitry Guyvoronsky, Dušan Dželebdžić, Dzevad Trumic, Emilio Mendoza, Eric Brummer, Eric Busalacchi, Eric Dynowski, Eric Twilegar, Erik Broeders, Eugene Bulkin, fxshlein, George Miroshnykov, Harry McDow, HaykH, Hidde de Jong, Ian Tait, Ingo Eble, Ivan Sorokin, Jason DeStefano, Jason Specland, JavaXP, Jay Binks, Jayne Gabriele, Jeremy A., Jeremy Wise, Joe OConnor, Joe Pregracke, Joel Jakobsson, Joel Messerli, Joel Miller, Johannes Lundberg, John Fenwick, John Meade, Jon Dugan, Joshua King, Kefen, Kenneth Christensen, Kent Collins, Koreo, Lambda GPU Workstations, Larry, London Dobbs, Lucas Nestor, Lukasz Pacholik, Maksym Zavershynskyi, Marcus Classon, Martin Roth, Mats Fredriksson, Matt Alexander, Matthäus Pawelczyk, melvin2001, Michael Burke, Michael Garland, Michael Tedder, Michael Timbrook, Miguel Ríos, Mikel Lindsaar, Nicholas Moresco, Örn Arnarson, Örper Forilan, Paul Pluzhnikov, Paul Randal, Pete Dietl, Philip Hofstetter, Randy True, Ric King, Richard Wells, Rob Bruno, Robert Diaz, Ron Maxwell, sam raza, Sam Rose, Sergey Ten, SonOfSofaman, Stefan Nesinger, Stefanus Du Toit, Stephen Kelley, Stephen Riley, Stephen Smithstone, Steve Jones, Steve Gorman, Steven Pequeno, TheWebMachine, Tom Burns, Vlad Goran, Vladimir Kanazir, Warren Miller, xisente, Yusuke Saito

COMMENTS • 1.4K

  • @andrewjvaughan
    @andrewjvaughan 3 роки тому +2375

    “I was able to print that out pretty easily - well, it took 9 videos” - and this is why we love this channel

    • @minghaoliang4311
      @minghaoliang4311 3 роки тому +113

      It's like a mathematician saying "it's trivial" to save a twenty-page-long proof.

    • @urugulu1656
      @urugulu1656 3 роки тому +12

      and 9 months

    • @krzysztof-ws9og
      @krzysztof-ws9og 3 роки тому +22

      I also love Vulkan for having to write 1000 lines of code to render a single triangle
      Yeah, I know, you can write everything in a single line of C/C++ code

    • @theairaccumulator7144
      @theairaccumulator7144 3 роки тому +8

      @@krzysztof-ws9og well, who cares, people can make (and probably already have made) libraries for easier use. It's still faster anyway.

    • @tberry7348
      @tberry7348 3 роки тому +18

      @@krzysztof-ws9og after these videos machine language makes more sense to me than C probably ever will..

  • @ChrisB...
    @ChrisB... 3 роки тому +1948

    Kudos to every programmer who solved these problems in assembly long before I started programming.

    • @WarrenGarabrandt
      @WarrenGarabrandt 3 роки тому +216

      In the real early days of computers, doing anything at all on them was a long series of hard problems that had to be solved one at a time. As we (humans in general) got better and better at it, we built up libraries of routines that we knew how to call.
      You want to know why Linux and related operating systems have so many small command line programs in them that you string together on the command line to accomplish a larger job? This is why. Every single task was viewed as a problem that needed a solution. You would write a program that did that one task as well as it could possibly be done. Then when you have a big job requiring many different tasks to be strung together, you just call into all the little task programs, passing the inputs and outputs of each task in a big line. That kind of logic can be extended to solve a great many different kinds of jobs, all while using tested and time-honored code to do small tasks, one at a time, as well as possible.
      As computers matured, and different philosophies were developed about how computers should work, new ways of doing things were invented. Linux kept its model of small tasks strung together into big jobs. Other environments like Windows decided to go with a more user-friendly approach.
      In the early days, if you wanted to write a book, you would use a program to type in text, and save it to a file. Then you can run a program on that file to output spelling errors (any word not in a dictionary). Then you run that into a formatting program to paginate, format, etc. your text, and save it as a new file on the computer. Then you can send that formatted file to the printer and it will physically print the document.
      On Windows, you open a program specially written to allow you to enter text, format it as you go, check for spelling errors on the fly, allow pagination and printing, all in one program. The downside? What happens if you want to combine two documents into one? In Linux, you can just concatenate the text files right on the command line with a pre-written tool. In Windows, you have to hope your "does everything" program has this functionality built in, or use a different program to combine them.
      What if you want to print different kinds of files? In Linux, the same program that sends data to the printer can take any pre-formatted input and send it to the printer. No need to reinvent the wheel for printing, just write a utility to convert your existing file into a language the printer understands, and use the already existing print program to finish. In Windows, every program that needs a print function has to implement its own logic for how to handle this. Different programs can end up with different printing features because they implemented it differently.
      In more modern times, converting a file into something the printer can handle is usually accomplished by the printer driver. Your program just converts whatever it needs to print into effectively a big image, and then the print driver turns that into directions for the printer. (This varies, of course.)
      I'm just scratching the surface of an enormous amount of complexity, of course, but hopefully you get a sense of how difficult those early days were, and how different philosophies dictated different platform designs.

    • @rogerfroud300
      @rogerfroud300 3 роки тому +125

      Things were just different back then. Life was simpler and programs less sophisticated. I once had to take the square root of a 96bit binary number which took a bit of research and then a couple of days to program in assembler. Great fun and very satisfying. We never wasted clock cycles because processing power was precious. These days it's squandered in the most appalling way.

    • @WarrenGarabrandt
      @WarrenGarabrandt 3 роки тому +36

      @@rogerfroud300 I think the largest waste of processing power we face today is the myriad of interpreted languages. Since nearly every web site used at least a little JavaScript, basically all of us are affected by it at least a little, but I'm thinking more along the lines of python used for heavy data manipulation, or PHP and all those myriad of interpreted web hosting languages running in data centers.
      Relatively few modern applications are actually compiled down to machine language. Most of them only compile to an intermediate language of some sort, like C# and VB do, and I think that includes Java (but I haven't used it personally). I wouldn't be surprised to find out that huge sections of our operating systems were written this way.
      Those extra clock cycles get turned directly into wasted energy and increased CO2 in our atmosphere. It's not a big deal performance-wise today because our modern computers are thousands of times faster and more efficient than anything we had back then. But still, it seems like a waste.
      As a C# programmer myself, I'm not going out of my way to change, because these levels of abstraction really help us get our work done and hold costs down.

    • @shinyhappyrem8728
      @shinyhappyrem8728 3 роки тому +6

      @@WarrenGarabrandt: JIT

    • @ChrisB...
      @ChrisB... 3 роки тому +36

      @ For the record, I started programming about 5 years before I was first able to dial in to my university's VAX. I learned to program from books and magazines in the early 80's. What amazes me is the early programmers didn't have ANY references, just their brains and the basic tools the hardware provided.

  • @sobertillnoon
    @sobertillnoon 3 роки тому +834

    I was way too delighted when that second long division problem folded out from behind the first. I love your style of presenting.

    • @aserta
      @aserta 3 роки тому +10

      Me too. :)

    • @tberry7348
      @tberry7348 3 роки тому +3

      Me 3

    • @LeoStaley
      @LeoStaley 3 роки тому +4

      Me 2 mod 2

    • @yarik12341
      @yarik12341 3 роки тому

      When he's dividing 10001, he borrows from the first one. But then all the consecutive 0s become 1s? How? What am I confused about?

    • @DiThi
      @DiThi 3 роки тому

      @@yarik12341 That's only when it tries to subtract 1010 from 1000 (not 10001), and since 1000 is smaller than 1010, it ends up with negative numbers. It's all 1s at the beginning because it rolls over, and incidentally that's how negative numbers are stored, by beginning with 1.
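
      A quick Python check of that roll-over, as a sketch only (an 8-bit width is assumed just for the example; the point is the same at any width):

      ```python
      # 0b1000 (8) minus 0b1010 (10) in a fixed-width register, as described above
      a, b = 0b1000, 0b1010
      diff = (a - b) & 0xFF     # keep only 8 bits, like the hardware does
      print(bin(diff))          # 0b11111110
      print(diff - 256)         # -2, the two's-complement interpretation
      ```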

  • @spychicken19
    @spychicken19 3 роки тому +1663

    "Binary to decimal can't be that hard, right?" *sees video is 42 minutes long*

    • @misterhat5823
      @misterhat5823 3 роки тому +56

      It isn't that hard. It's harder to explain it so that someone will understand it.

    • @power-max
      @power-max 3 роки тому +91

      @@misterhat5823 Programming machines = easy. Programming humans = hard

    • @circuitdesolator
      @circuitdesolator 3 роки тому +14

      Let's confuse the arduino kids these days 😁

    • @affapple3214
      @affapple3214 3 роки тому +18

      I just jumped directly onto the video without seeing how long it was. Just found out it was 42 minutes after 35 minutes. Pretty entertaining

    • @ggldmrd5583
      @ggldmrd5583 3 роки тому +2

      @@misterhat5823 fr.wikipedia.org/wiki/Double_dabble

  • @Hacker-at-Large
    @Hacker-at-Large 3 роки тому +527

    I started out on the 6502 almost 40 years ago. Beyond the nostalgia, it’s great to see, in the comments here, new generations experience the joy and wonder as I did so many years ago. Awesome job, Ben.

    • @unbeirrbarde
      @unbeirrbarde 3 роки тому +10

      Me too. I love this series. Have solved all the problems on my own in the 80s. Assembler was a great experience!

    • @shinyhappyrem8728
      @shinyhappyrem8728 3 роки тому +4

      Damn... I'm almost 40 years old, and never went into this stuff until now.

    • @KC9UDX
      @KC9UDX 3 роки тому +6

      6502 isn't just for nostalgia. It's incredibly elegant for what it is, and can be used for countless everyday tasks. Using more complicated systems for simple tasks is asking for trouble

    • @dogwalker666
      @dogwalker666 3 роки тому +3

      I taught myself on the Z80

    • @nontraditionaltech2073
      @nontraditionaltech2073 3 роки тому +11

      To me, this is the most magical part of technology. It’s not AI, Machine Learning or front end web development frameworks. There is no place I’d rather be than circuitry, processors and low level code :-)

  • @skaruts
    @skaruts 3 роки тому +75

    8:20 - When you keep leveling up your computer engineering skills, eventually you get the power to infinitely unfold paper with the next calculations already written on it. :)

  • @EvilSandwich
    @EvilSandwich 3 роки тому +217

    Man, I've been struggling with how to do that without Decimal Mode for months. I'm working on a calculator program on the NES and have been running into that problem again and again. Every time I looked up how to do it on the 6502, they always said "Use Decimal Mode". But the NES's 2A03 doesn't have a decimal mode!
    I could hug you!

    • @joeymurphy2464
      @joeymurphy2464 3 роки тому +86

      Fun fact: the NES got away with using a 6502 without paying for the patent by doing away with decimal mode. Without decimal mode, the 2A03 no longer fell under the patent protection for 6502 so MOS couldn't stop them. The funny part? The silicon for decimal mode is still on the chip, just with the proper wires cut off to prevent you using it!

    • @flatfingertuning727
      @flatfingertuning727 3 роки тому +7

      To do it efficiently, I'd suggest focusing on a routine which, if A is in the range 0 to 9, will perform a div-mod operation with a value whose upper byte is in A and whose lower byte is fetched via (someZP),Y addressing mode, storing the quotient in (someZP),Y and the remainder in A. If after doing that one decrements Y and loops if it's non-negative, one can easily perform a div-mod by ten for any size of binary number.
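
      A rough Python sketch of that byte-at-a-time div-mod idea (not 6502 code; the function name and big-endian byte order here are just illustrative choices):

      ```python
      def divmod10_bytes(value_bytes):
          """Divide a big-endian byte list by 10 in place; return the remainder."""
          remainder = 0                          # plays the role of A
          for i, b in enumerate(value_bytes):    # most significant byte first
              cur = remainder * 256 + b          # "upper byte in A, lower byte fetched"
              value_bytes[i] = cur // 10         # quotient byte written back
              remainder = cur % 10               # remainder carried to the next byte
          return remainder

      n = [0x04, 0xD2]                           # 1234 as two bytes
      print(divmod10_bytes(n), n)                # 4 [0, 123]  (1234 = 123*10 + 4)
      ```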

    • @robertlozyniak3661
      @robertlozyniak3661 3 роки тому +2

      If you google "Jones on BCD arithmetic", you will get a tutorial on how to jerry-rig a binary machine to do decimal arithmetic.

    • @SteveJones172pilot
      @SteveJones172pilot 3 роки тому +10

      And this is why I think he did it this way, vs. using BCD as others suggested.. The concepts here go beyond the 6502. I "grew up" doing 6809 assembly, and was able to follow this, even if the opcodes are different to me. Explaining it with the simplified BCD mode makes it specific to only this processor, and you lose a lot of value IMHO.. This video was awesome!

    • @telbee
      @telbee 3 роки тому +6

      @@flatfingertuning727 Yeah, there are lots of ways to do it. (Using tables can really speed things up!). However I like the way this video elegantly describes the concept, and it's quite impressive that the whole thing only uses a few bytes..

  • @MinecraftTestSquad
    @MinecraftTestSquad 3 роки тому +198

    9:00 you didn’t do the math wrong, a bit just got flipped while transmitting.

    • @tech6hutch
      @tech6hutch 3 роки тому +37

      It was the cosmic rays, man

    • @techleontius9161
      @techleontius9161 3 роки тому +16

      @@tech6hutch that's why you need error check, which he also made a tutorial about

    • @ropersonline
      @ropersonline 3 роки тому +4

      7:52 or 8:35, you mean.

    • @MarkMcDaniel
      @MarkMcDaniel 2 роки тому

      Noise in the system.

  • @tiger12506
    @tiger12506 3 роки тому +93

    I remember when this algorithm was first presented in a computer architecture course I took. I was in awe of how they managed to take such a complex process and turn it into something so elegant. Your thorough explanation of it here is top notch, and if/when I need a refresher, I'll be sure to come back here!

    • @ggldmrd5583
      @ggldmrd5583 3 роки тому +1

      And what about Double Dabble algorithm ? It looks way better to me...fr.wikipedia.org/wiki/Double_dabble

    • @ryanhaart
      @ryanhaart 3 роки тому

      @@ggldmrd5583 I am not sure about that. First of all, how do you defined "better"? Shorter code or faster execution time? In terms of execution time, it of course all depends on the excact number of instructions and the number of cycles each instruction takes. Here is a high level analysis for a 16-bit number: the long division takes shift 16 loops per decimal digit found, so the worst case runtime is 16x5=80 loops. Double Dabble takes 16 shift loops plus 5 loops within each shift loop to check whether each power of 10 digit is >4 and then add 3 to it. So Double Dabble always takes 16x5 loops, whereas long division can finish in 16 loops for a number 4, add 3 (in the correct position) if necessary - sounds like a lot of instructions and cycles to me.

    • @ggldmrd5583
      @ggldmrd5583 3 роки тому

      @@ryanhaart Well, what I defined as "better" was the shortest one in terms of ROM usage (shorter code). To be honest I didn't really check, but I don't think I'm taking too much of a risk saying that double dabble implies shorter code, because the algorithm for the division seems long enough to me for that sort of task - though I might be saying something false there, since I didn't check and it's just an intuition. If you prefer the one with the shortest execution time, then I think it would be more interesting to determine the worst-case complexity depending on the size of the number to convert (instead of the worst-case complexity for a given number of digits). It's not difficult to determine anyway; the question is to know whether it's linear, exponential, quadratic, ... depending on the size of the number.

  • @tiger12506
    @tiger12506 3 роки тому +169

    More complex ISAs that actually have a div instruction _still_ do this algorithm underneath automatically in hardware. That's why, if you check the datasheet for them, the div instruction takes ~16x as many cycles to complete as a normal instruction (for the appropriate word size)

    • @prophetgab
      @prophetgab 3 роки тому +48

      There is a huge literature on the design of divider circuits and they get much more complicated than bitwise long division. The tradeoffs to consider are mainly how often does a user need to divide integers vs the energy/space the architecture is willing to spend on it.

    • @argus456
      @argus456 3 роки тому +28

      @@prophetgab Most compilers are able to optimize away most divisions for this reason as well, replacing them with shifts, for example
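
      Power-of-two divisors become plain shifts; for other constants, compilers typically use a multiply-and-shift instead. A small Python check of the divide-by-10 case (a sketch, not compiler output; the constant 52429 = ceil(2^19 / 10) is the usual 16-bit magic number):

      ```python
      m, shift = 52429, 19
      assert all((n * m) >> shift == n // 10 for n in range(1 << 16))
      print("x // 10 == (x * 52429) >> 19 for every 16-bit x")
      ```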

    • @paulstelian97
      @paulstelian97 3 роки тому +3

      @@argus456 Modern CPUs still manage to do the div in one cycle for some reason.

    • @WheretIB2
      @WheretIB2 3 роки тому +30

      @@paulstelian97 What CPUs are those? Intel Coffee Lake (2019) takes 23-88 cycles for division and AMD Ryzen (2018) takes 13-47

    • @paulstelian97
      @paulstelian97 3 роки тому +6

      @@WheretIB2 Then I must have recalled wrongly?

  • @michaelcobb1024
    @michaelcobb1024 3 роки тому +9

    A few years ago in my first year at university I wrote a 6502 emulator for fun in C#. I only ever got around to testing each instruction individually before I got bored. Your 6502 videos have re-inspired me and now I want to test this program on my emulator!

  • @fir3w4lk3r
    @fir3w4lk3r 3 роки тому +43

    Can't wait for the floating-point arithmetic series!

    • @jgharston
      @jgharston 2 роки тому +1

      Floating point is just shifts-and-adds but with logarithms. Now, who complained at school that they'd never need logs? :)

    • @puppergump4117
      @puppergump4117 2 роки тому

      @@jgharston I don't see what the problem is, obviously just do math on a normal number and throw a dot in between the digits.

    • @gorgikalamernikov3260
      @gorgikalamernikov3260 2 роки тому +2

      @@puppergump4117 how do you put a dot in a register that can only hold 1 or 0?

    • @puppergump4117
      @puppergump4117 2 роки тому

      @@gorgikalamernikov3260 Just reserve like 4 bits to decide the amount it moves from the center and 1 bit to decide if it's left or right. There's probably problems with it but my original comment was just sarcasm.

    • @gorgikalamernikov3260
      @gorgikalamernikov3260 2 роки тому

      @@puppergump4117 Yeah, that's pretty much how it's done actually, there are 8 reserved bits in a float32 to tell where the dot is... But nuances arise, which is what the original commenter was excited about, I guess.
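
      A small Python illustration of that float32 layout, as a sketch only (-6.25 is an arbitrary example value): 1 sign bit, 8 exponent bits saying "where the dot is", and 23 mantissa bits.

      ```python
      import struct

      bits = struct.unpack(">I", struct.pack(">f", -6.25))[0]
      sign     = bits >> 31
      exponent = ((bits >> 23) & 0xFF) - 127   # stored with a bias of 127
      mantissa = bits & 0x7FFFFF
      print(sign, exponent, hex(mantissa))     # 1 2 0x480000  ->  -1.5625 * 2**2
      ```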

  • @K-o-R
    @K-o-R 3 роки тому +51

    I think for me the hard part of binary maths is the cascading carry or borrow.
    The rotating of the number automatically pushing the answer into the "memory" spot was elegant as fuck. Nice.

  • @ghb323
    @ghb323 Рік тому +16

    Another method of extracting the digits is performing repeated subtraction (until it is the last number before going negative) left to right: 255 -> 155 -> 55, this takes 2 subtractions by 100 to get the hundreds digit, then 55 -> 45 -> 35 -> 25 -> 15 -> 5 which takes 5 subtractions by 10 to get the tens, and we now have the ones place which is already there. It's like counting bills but starting at the highest denomination without exceeding the price of a product.
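
    A minimal Python sketch of this repeated-subtraction idea (not code from the video; the function name and the fixed set of powers are chosen just for the 255 example):

    ```python
    def to_decimal_digits(n, powers=(100, 10, 1)):
        digits = []
        for p in powers:
            count = 0
            while n >= p:      # subtract until the next subtraction would go negative
                n -= p
                count += 1
            digits.append(count)
        return digits

    print(to_decimal_digits(255))   # [2, 5, 5]
    ```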

    • @rodrigomeldola6538
      @rodrigomeldola6538 Рік тому +1

      Thank you, I made the algorithm run on my "SAP 8-bit computer" and it's working just fine.

    • @arturpaivads
      @arturpaivads Рік тому

      Yeah, I got the same idea. I would love to know what the advantages of each method are.
      This method seems simpler. But I think that for bigger numbers you may do more iterations.
      The number of opcodes seems very similar. It would highly depend on the language. It doesn't use ROR but does need an increment (to count the number of subtractions) and a conditional jump on borrow. And it looks like you use the same number of registers.

    • @StefanNoack
      @StefanNoack 6 місяців тому

      This method is much faster, because it uses at most 9 subtractions per digit, whereas the division method used as many as there are bits. Also you get the result in the right order. Only downside might be larger code size because you need all the powers of 10 as constants in ROM. For 32-bit numbers that would be an extra 36 bytes.

  • @rudge3speed
    @rudge3speed 3 роки тому +37

    6502 assembly is one of the last things I'll forget in old age. By that time I'll probably have too little RAM left to run any code.

    • @ggldmrd5583
      @ggldmrd5583 3 роки тому +1

      I always have a 6502 in my pocket when I leave home; who knows, it might be useful some day...

    • @AndyGraceMedia
      @AndyGraceMedia Рік тому

      Yes - I'm amazed I can still remember all the mnemonics and hex opcode values. I was only about 15 when I moved out of 6502 asm and into x86 and ARM2 asm.

  • @arsenic1987
    @arsenic1987 Рік тому +15

    Instead of reversing the string with code so it appears "correctly" ordered in RAM, why not just write the values into RAM as you go, and use the Y register to count how many results you get. Then when you're done, just decrement through the Y index and print each character, thereby removing the need for code to iterate through the string to insert a character. Just a suggestion.

    • @penyaev
      @penyaev 4 місяці тому

      Alternatively, push values onto the stack, and then when you pull they'll come out in the reverse order which is exactly what you need. You'll need to know when to stop pulling from the stack, for that you can either use the Y register as suggested above, or just put a special marker value first, which would indicate that the sequence has ended. Given that the only possible values are 0..9, any other value will work. Maybe even better, use zero as a marker value, and put all values onto the stack shifted by one, just don't forget to decrement them back by one when you pull them from the stack.

  • @AntneeUK
    @AntneeUK 3 роки тому +385

    "So we're going to do this rotate left, right?"
    It's like talking to my dad
    "So we go left, right?"
    "Wait, left, then right?"
    "No, left, right?"
    "Right?"
    "No, left"
    "Left?"
    "Right!"
    🤯

    • @melkiorwiseman5234
      @melkiorwiseman5234 3 роки тому +18

      We're talking about insufficient semantic redundancy here. ;)

    • @krzysztof-ws9og
      @krzysztof-ws9og 3 роки тому +14

      At least in Polish there are separate words for "right" meaning "ok" and "right" meaning opposite of left

    • @codeman99-dev
      @codeman99-dev 3 роки тому +9

      @@krzysztof-ws9og There are words in English too. "We are going right, correct?"

    • @Ray-ej3jb
      @Ray-ej3jb 3 роки тому +11

      You forget he's American - English is not his first language

    • @mattiviljanen8109
      @mattiviljanen8109 3 роки тому +6

      "No, my other left!"

  • @MrLjupcekolev
    @MrLjupcekolev 3 роки тому +159

    this is the only guy that would make a whole iOS app just to solve an assembly problem

    • @recompile
      @recompile 3 роки тому +4

      That's not praiseworthy. He's dramatically over-complicated things. This video should have been around 10 minutes. At least I remember why I avoid this channel. Ugh.

    • @Grom1477
      @Grom1477 3 роки тому +70

      @@recompile You are pretty fun at parties, huh?

    • @7n7o
      @7n7o 3 роки тому +75

      @@recompile he goes over everything in great detail so everybody can understand it and it also gives a massively deeper understanding of what he's trying to teach

    • @gabpas1213
      @gabpas1213 3 роки тому +59

      @@recompile You sound like that edgy kid who thinks he's better than everyone else. He makes introductory videos on electronics, and is a good teacher. He's not like those channels who are like "So I coded everything off-camera, here's 500 lines of code, good luck understanding that crap". You must be in a happy relationship with that attitude.

    • @baronofclubs
      @baronofclubs 3 роки тому +41

      @@recompile It's been 4 months and I still don't see your 10 minute video on how to convert binary to decimal using bitwise operations.

  • @boredwithusernames
    @boredwithusernames 3 роки тому +25

    Ben, where were you back in 1982 when I was learning about binary math? This is some very elegant coding and apart from the print routine there is not one sign of a nasty little jmp instruction, everything is relative branches using bne or beq. Your explanation of the math is very clear and my past teachers could learn a lot from your presentation style. Thanks for producing this video, I am sure that a lot of viewers who need to learn the concepts of binary math will benefit greatly from this video ;)

  • @micahgilbertcubing5911
    @micahgilbertcubing5911 3 роки тому +62

    This is gonna be a good one, I can tell. Glad you're back!

  • @csbruce
    @csbruce 3 роки тому +71

    There's a much easier way to convert binary numbers to decimal on the 6502. Essentially, you left-shift the value out of the input 16-bit word (MSB→LSB) and left-shift the carry flag from that into the output 16-bit word. Mathematically, this just copies the value bit-by-bit from one word to another. Except that instead of using ROL:ROL to rotate the bit into the output word, you ADC the value in the output word to itself plus the carry from the first word, and instead of doing a normal binary addition, you do a Decimal-Mode addition (SED). The result will be in Binary Coded Decimal instead of binary, which is easy to convert into ASCII, and the output word will need to be three bytes in size to hold results over 9999.
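
    A Python sketch of that shift-and-decimal-add idea, for illustration only: on the real chip the doubling would be a SED + ADC of the output word to itself plus the shifted-out carry, while here the BCD digits are simulated as a list.

    ```python
    def bcd_double_add(digits, carry_in):
        """Double a little-endian list of decimal digits and add carry_in."""
        carry = carry_in
        for i, d in enumerate(digits):
            total = d * 2 + carry
            digits[i] = total % 10
            carry = total // 10
        if carry:
            digits.append(carry)

    def bin_to_bcd(value, bits=16):
        digits = [0]                           # little-endian BCD result
        for i in range(bits - 1, -1, -1):      # shift the input out MSB-first
            bcd_double_add(digits, (value >> i) & 1)
        return "".join(str(d) for d in reversed(digits))

    print(bin_to_bcd(65535))                   # 65535
    ```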

    • @vaendryl
      @vaendryl 3 роки тому +14

      I didn't understand that but still enjoyed reading it

    • @MichaelStrautz
      @MichaelStrautz 3 роки тому +9

      I really hope @eaterbc sees this
      that was impressive sir ... might be a vid for much later in the series as Ben has not even TOUCHED Decimal mode nor 60% of the ASM calls on the 6502
      but I'd love to see Ben keep going into a 6502 master class type of series .. Not that making vid for YT is easy ... Just wish it was a bit faster
      This is one of my little joys each month so far :D

    • @cjay2
      @cjay2 3 роки тому +27

      Good plan, but perhaps Ben did it the long way to demonstrate the basic method, instead of using the decimal feature on the 6502, which hides/avoids the basics.

    • @csbruce
      @csbruce 3 роки тому +1

      @@cjay2: I'm not sure that implementing long division is teaching "basic" 6502 programming.

    • @cjay2
      @cjay2 3 роки тому +17

      @@csbruce No, Ben's teaching basic methodology, using the 6502 to do it. Optimal 6502 programming would be an entirely other thing.

  • @miege90
    @miege90 3 роки тому +43

    For dividing by 10 you actually don't need two bytes for mod10, as the remainder will always fit into one byte =)

    • @PewnyPL
      @PewnyPL 2 роки тому +2

      Yes, but it's used so both value and mod10 are the same length, otherwise the algorithm wouldn't work correctly.

    • @NatHsu11
      @NatHsu11 Рік тому +6

      @@PewnyPL Actually the algorithm works fine without the second byte for mod10. The carry bit is set correctly after the first sbc #10 from the low byte and nothing is done with the high byte after the sbc #0.

    • @edwardpaulsen1074
      @edwardpaulsen1074 Рік тому

      That is fine if you only ever want to divide by 10... which is great specifically for decimal... but stops working if you need to divide by anything over 255... like, say, 1000. I know the first argument will be that we are not going that high; however, the next steps in decimal are hundreds and then thousands. Working in the opposite direction and jumping to thousandths is where we enter the realm of engineering, where that type of precision becomes a huge necessity. The other variation of that is dealing with fractions, which can get a bit strange as well.

  • @CarthagoMike
    @CarthagoMike 3 роки тому +32

    *_Binary to decimal can't be that hard, right?_*
    Me: *There is probably a little twist to it*
    _sees video length_
    me: *Oh yes, there definitely is a twist to it*

  • @anere5326
    @anere5326 Рік тому +7

    Respect to whoever made high-level programming languages for us

  • @g0z3
    @g0z3 3 роки тому +170

    Using Ramanujan's number... I see...

    • @kenoobe
      @kenoobe 3 роки тому +5

      Haha yeah. This deserves more likes.

    • @assifmirza130
      @assifmirza130 3 роки тому +1

      Very interesting :)

    • @paulrautenbach
      @paulrautenbach 3 роки тому +16

      And the first example was 42 - The Answer to the Ultimate Question of Life, the Universe, and Everything (in The Hitchhiker's Guide to the Galaxy by Douglas Adams).

    • @MatthewChaplain
      @MatthewChaplain 3 роки тому +3

      Subtle cross-over episode with 3b1b, I think ;)

    • @chromosundrift
      @chromosundrift 3 роки тому +5

      huh. And I thought this was just a boring number.

  • @jakec9441
    @jakec9441 3 роки тому +41

    The joy I'm feeling being able to witness a demonstration of how a computer does math at the machine level is indescribable! (At least somewhere between machine level and assembly ;p ) I am a visual learner who, once seeing a demonstration, can picture what a written document is describing. Sort of like needing a Rosetta Stone to understand the syntax of a text. This video series has welcomed me back to an electronics hobby I abandoned some 25 years ago as a teen. Thank you for posting these!

    • @eddieh7962
      @eddieh7962 3 роки тому +3

      YouTube is an amazing thing for visual learners like us!

    • @sino-atrial_node
      @sino-atrial_node 2 роки тому

      ua-cam.com/video/rhgwIhB58PA/v-deo.html

  • @sagnikdas975
    @sagnikdas975 3 роки тому +16

    Being a software developer, I really like your videos for the insights they provide into how things work under the hood. It would be really helpful if you had a playlist for microprocessors and assembly language in general. I can vouch that a lot of software folks would have a blast with such a playlist, and would then find a video like this much more understandable and enticing.

  • @fun3306603
    @fun3306603 3 роки тому +42

    Got so excited when this popped up in my notifications!!

  • @codythomashunsberger
    @codythomashunsberger 3 роки тому +27

    I have a degree in IT so I have a pretty good understanding of how to USE computers and some of the basics on how they work, but this channel is awesome because it really makes the ultra low-level stuff click to explain how they work on a physical level. Never ceases to fascinate me, I really appreciate the effort that goes into sharing this stuff.

    • @DarthZackTheFirstI
      @DarthZackTheFirstI 3 роки тому +6

      so you have a degree in how to use a keyboard? *g*

    • @electronichaircut8801
      @electronichaircut8801 3 роки тому +2

      @@DarthZackTheFirstI and mouse and monitor

    • @codegeek98
      @codegeek98 6 місяців тому

      Yeah, it's always disturbing to use the thing and think "man, I have no idea how they built this"; any crumb of extra knowledge is cool

  • @menotu000
    @menotu000 Рік тому +6

    Ben, thank you. I have been coding in ASM (6502) for YEARS... I always had trouble converting from binary to decimal (understanding it). You explained it so that my non-technical wife would get it. That is quite a feat.

  • @0cgw
    @0cgw Рік тому +24

    Back in the 80s, I'd always try to avoid writing a division algorithm because of the time it takes to run and the time to debug. For printing a 16 bit number, to avoid a division algorithm, I would count the number of times I need to subtract 10000, add back 10000 at the end of the loop, then repeat the process with 1000, 100 and 10 (at the last step you are left with the units). You'll get the digits out in the order we want (with leading zeros unless you suppress them). It all depends on how much effort you want to go to, and how efficient you want your code.

    • @HiHi-ur3on
      @HiHi-ur3on Рік тому +4

      ​@Mickey Farley It's not a general division algorithm is what they meant. It's more specialised for a 5 digit base 10 number, where as eater's algorithm can work for any base and any length just by changing the sub 10 to a different base and adding more bytes to the division "register".

    • @tr1p1ea
      @tr1p1ea Рік тому

      This is also a common method.

  • @ace4x3
    @ace4x3 Рік тому +13

    Very good video! You have no idea how well you explain stuff. I have literally never seen anything related to assembly other than the fact that it's a very low-level machine language, and I was able to follow along!
    The visual representation of the algorithm was excellent. It shows that you really have a passion for teaching, which I wholeheartedly appreciate as a comp-sci student!
    Thank you for making this. I will definitely look into your other stuff :D

  • @monchytales6857
    @monchytales6857 Рік тому +3

    your videos have been a great help in writing my own 6502 emulator

  • @IslandHermit
    @IslandHermit 3 роки тому +35

    A simpler approach would be to push a zero onto the stack at the start, then have the outer loop push each digit of the result onto the stack. At the end you just pop the digits off the stack and print them until you hit that initial zero.
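
    A Python sketch of that sentinel-on-the-stack approach (illustration only; a list stands in for the 6502 stack and None for the initial zero):

    ```python
    def print_decimal(value):
        stack = [None]                    # sentinel (a zero byte on the 6502)
        while True:
            value, digit = divmod(value, 10)
            stack.append(str(digit))      # digits are produced least significant first
            if value == 0:
                break
        while (ch := stack.pop()) is not None:
            print(ch, end="")             # popping reverses the order
        print()

    print_decimal(1729)                   # prints 1729
    ```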

    • @silaspoulson9935
      @silaspoulson9935 3 роки тому +1

      Indeed! Would be interesting to know why stack wasn't used - perhaps Ben just didn't consider it?

    • @IslandHermit
      @IslandHermit 3 роки тому +5

      @@silaspoulson9935 It could be that he's looking ahead to a library of string processing functions which wouldn't work very well with a result on the stack. The stack-based algorithm could always copy the popped result into memory and still be more efficient, but I can see that once you've started down the road of passing strings around in memory it could be difficult to spot those situations where the stack can simplify things.

    • @IslandHermit
      @IslandHermit 3 роки тому +11

      Also, I should note that a stack-based option only works here because we know the result will be quite small, no more than 6 bytes. It's not an appropriate option for more general string handling, particularly on the 6502 with its tiny stack.

    • @banderfargoyl
      @banderfargoyl 3 роки тому +1

      Or he might have used the message buffer as a stack by appending each digit to the end of the string concluding with the null byte.

    • @pv2b
      @pv2b 3 роки тому +1

      I had the exact same idea but you beat me to posting a comment. :-)

  • @StefanNoack
    @StefanNoack 3 роки тому +62

    19:42: "There are not enough CPU registers to store all of this data!"
    me: *laughs in Z80*

    • @talideon
      @talideon 3 роки тому +10

      6502: *shrugs smugly and points at all those sweet, sweet addressing modes, especially zero page addressing*

    • @melkiorwiseman5234
      @melkiorwiseman5234 3 роки тому +10

      @@talideon Z80: *Grins knowingly and points at all those extra clock cycles to access the memory*
      (Edit: I agree that the 6502 has some cool memory addressing modes. I love the PCR address instructions which allow a program to be made completely relocatable)

    • @circuitdesolator
      @circuitdesolator 3 роки тому

      Arduino (inserted): watdaheck they are talking about 🤔

    • @wesleymays1931
      @wesleymays1931 3 роки тому +5

      me: *laughs in modern x86 processor with one register capable of storing both values, most likely already has an instruction to do this, can run at a billion instructions per second*

    • @TheDarkness344
      @TheDarkness344 3 роки тому +5

      Me laughing in my dual core redstone computer with like 1 general purpose register per core lol

  • @ashwanishahrawat4607
    @ashwanishahrawat4607 3 роки тому +3

    Thanks for not skipping things, it really helps to understand the CPU cycles it's gonna take. I have a whole new level of respect for the long division method now.

  • @joacortez3423
    @joacortez3423 3 роки тому +25

    I just wanted to say thanks for explaining a field that seems so intimidating in such an understandable way. You helped me decide what I'm going to do in university

    • @igornoga5362
      @igornoga5362 3 роки тому +1

      If you understand this video you should have no problems with CS courses for a couple of semesters. I recommend watching the linear algebra and calculus series on the 3blue1brown channel and that's basically first year done :D

    • @ggldmrd5583
      @ggldmrd5583 3 роки тому +1

      @@igornoga5362 What you say is completely true. Damn, there are lots of Ben Eater subscribers that are also 3Blue1Brown subs lol, and I'm one of them.

  • @n2n8sda
    @n2n8sda 3 роки тому +13

    I'm envious of your 6502 computer! It knows the meaning of life. Great video BTW, about half way through now but had to stop and write a comment before I forget. I am very familiar with everything you covered already but I really enjoy a refresher and just to listen to your explanation of things.

  • @DantalionNl
    @DantalionNl 3 роки тому +60

    Remember when the decimal to binary conversion took up the majority of the mainframe? People sure managed to find some really clever solutions to the problem especially given the constraints of that time.

    • @forksandpopsticles9183
      @forksandpopsticles9183 3 роки тому +1

      Hold up, when was this?

    • @aserta
      @aserta 3 роки тому +1

      To me, that's easily the most impressive part. Our ability to work around problem when the stones are set in place.

    • @DantalionNl
      @DantalionNl 3 роки тому +4

      @@forksandpopsticles9183 1960, IBM 1401, www.righto.com/2015/10/qui-binary-arithmetic-how-1960s-ibm.html

    • @mrmimeisfunny
      @mrmimeisfunny 3 роки тому

      @@var67 I think you can set a flag and then ADC and SBC turn into BCD instructions.

    • @flatfingertuning727
      @flatfingertuning727 3 роки тому +4

      If speed isn't important, the hardware required to perform integer binary to decimal conversion can be amazingly compact. To convert an N bit binary number to a D digit decimal number in NxD steps requires counters that can count from 0 to D and 0 to N, along with an Nx1 shift register for the input and a Dx4 shift register for the output, and a little bit of circuitry which could fit in a 20x5-bit ROM and a 1-bit register, or a modest number of gates.

  • @marc.lepage
    @marc.lepage 2 роки тому +8

    Alternatives for reversing the string of digits:
    - push to head of string in memory, shuffling remaining bytes along string (as shown in video)
    - just use stack directly instead of memory, to store digits until output (risks stack overflow, but not in this simple case)
    - push digits onto end of string in memory, but then output string from end to start (so reverse as output)
    - push first digit to end of string in memory and build it backwards in memory, then output it forwards from start of string
    - just output digits to LCD from right to left using cursor control, as seen on a calculator (probably the easiest)

    • @ownagery2
      @ownagery2 Рік тому

      The last one defeats the purpose though

  • @DoctorMikeReddy
    @DoctorMikeReddy 3 роки тому +47

    And why not, for this example, just push characters to the stack after a 0, then when done just pull and print until you hit that 0 terminator? Or just store to an incremented memory location, then read off backwards with a decrement loop, avoiding the stack altogether?

    • @gargaj
      @gargaj 3 роки тому +9

      ...or just print it as it is but scroll the display and reset the cursor :D

    • @krallja
      @krallja 3 роки тому +1

      gargaj I think the display module even has a right-to-left mode, it’ll reverse the string for you!

    • @nadie9058
      @nadie9058 3 роки тому +18

      These videos are not meant for "for this example" solutions, they're meant for the solutions that teach more

    • @enochliu8316
      @enochliu8316 3 роки тому +1

      That is a great idea.

    • @thewebmachine
      @thewebmachine 3 роки тому +13

      Yeah, there are a handful of ways to handle it. Me thinks Ben chooses the paths he takes to show as much of the underlying process as possible. When we're "getting back to basics" like this, it's important to show how we get from A to...well, X and Y. haha But yeah, there are more cycle efficient ways to tackle some spots, but he intends to illustrate how the CPU arrives at its solutions. It's up to the student, as an exercise in critical thinking, to come up with different ways to arrive at the same answer. It's the best way to learn and way too many people want the answer just handed to them. Educators like Ben are becoming increasingly rare, as the demand for shortcuts has led many to cave to pressure and not dive as deeply into the ever important 'why.'
      For example, if we used all stack for storage along the way, we risk reducing available stack space that might be needed elsewhere in our program. Sure, it makes not a hill of beans in such simple code like this, but it definitely could in a much larger/complex program, like a game that might need to do a LOT of division all at once. You could easily burn through the stack and lose your pointers.

  • @alklein4660
    @alklein4660 3 роки тому +3

    Maybe, after going through all these videos, some people of the "Oh, you can make just one little change, how long can it take?" variety will realize that even the slightest change can involve "little" unanticipated things that make almost any change to hardware or software not a "little change".
    Thanks for the videos - after 47 years of system development, it brought back some memories (my first EPROM programmer was a 6502 and peripheral chips - then I added RAM and ported CP/M to the 6502).

  • @claudioscola
    @claudioscola 3 роки тому +2

    Nearly 30 years ago I learnt electronics, including most of the digital stuff going on in this channel. I did stuff on the Z80, 6502, 8085 and 8086; I used Karnaugh maps for logic chips and the 555 timer. I don't do this anymore but this channel is such a great trip down memory lane. Thanks so much for it.

  • @lemon3rd800
    @lemon3rd800 3 роки тому +1

    This is why I've subscribed to you long ago: Step-by-step instructions, simple but thorough explanations and no funny background music. Simply brilliant!

  • @munzeralseed
    @munzeralseed 3 роки тому +51

    It's always the best idea to start a video with the almighty answer to the meaning of life; 42

    • @paulmichaelfreedman8334
      @paulmichaelfreedman8334 3 роки тому +1

      Error - Towel missing

    • @hiankun
      @hiankun 3 роки тому

      And its binary is 101010... Another good reason to support 42. :-p

    • @vdubjunkie
      @vdubjunkie 3 роки тому +1

      Spooky. When I read your comment, you had precisely 42 thumbs up! :|

    • @UnknownVir
      @UnknownVir 3 роки тому

      Two people need to unlike this comment (current value is 44)

  • @gregclare
    @gregclare 3 роки тому +3

    Brilliant stuff! Highly educational for all those new to understanding binary arithmetic and machine coding. For the rest of us old-school coders, I think it’s the first time I’ve ever seen someone fully demonstrate long-hand binary division. Certainly the first time I’ve seen someone who appears to have also written an iPad App in order to help demonstrate it? Awesome work Ben! 😊

  • @smrtfasizmu6161
    @smrtfasizmu6161 2 роки тому +1

    I learned this algorithm in school, as well as many others for dealing with binary numbers in a computer-friendly way, but our professor of computer architecture never explained to us why these algorithms are correct. We were just asked to know how to perform these algorithms without knowing where they come from and why they are true. Thank you for this video.

  • @nontraditionaltech2073
    @nontraditionaltech2073 3 роки тому +4

    Thank you Ben and thank you to all the wicked-smart people contributing comments here! I’ve really enjoyed reading them and learning from ppls input and stories. This is pure gold!

  • @MrGeorge1896
    @MrGeorge1896 3 роки тому +10

    When I did this in the 80s I used a table with the binary representation of decimal values 10000, 1000, 100, 10 and 1. (Talking about 16 bits unsigned numbers here)
    Then I compared the number first with 10000 and if it was greater/equal subtracted it until it was smaller than 10000. Counting the subtractions I got the first (most significant) decimal digit. Same for 1000, 100, 10 and 1 to get all five decimal digits.

    • @Shorthouse061
      @Shorthouse061 3 роки тому

      I challenged myself to create a solution before watching this video and this is exactly what I came up with too. But instead of doing comparisons I just kept subtracting and checking the carry flag. When the carry shows an overflow (i.e I've subtracted too much) I add one lot back on and then continue to the next decimal digit.
      Interested to watch the video now and see how Ben solves it.

    • @ovalteen4404
      @ovalteen4404 3 роки тому

      I think it would be fun to create a BCD doubler circuit, then use it to double each digit and add in the next highest bit from the original number, until the entire set has been processed. A generalized BCD adder with carry in/out is pretty complex, but restricting it to doubling the current digit greatly simplifies it and makes it more practical to create a pure discrete circuit to perform the conversion.

    • @possible-realities
      @possible-realities 2 роки тому

      @@ovalteen4404 I think that's what the Double Dabble algorithm does.

    • @possible-realities
      @possible-realities 2 роки тому +1

      You could do it in even fewer steps by trying to subtract in turn
      40000, 20000, 10000 for the leftmost digit,
      8000, 4000, 2000, 1000 for the next,
      800, 400, 200, 100 for the next,
      80, 40, 20, 10 for the next,
      8, 4, 2, 1 for the last
      You still only need a lookup table for 40000, 8000, 800, 80, 8; you can get the rest by shifting those right.
      This would mean 19 trial subtractions, compared to up to 16*5 = 80 with the method in the video. Of course, the method in the video goes faster if the result has fewer digits. If you want that, you can start with e.g. comparing if the input is < 10, 100, 1000, 10000, to see how many decimal digits the result should have.
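
      A Python sketch of that trial-subtraction scheme (illustration only; each accepted subtraction sets one binary-weighted bit of one BCD digit, 19 trials in total for 16 bits):

      ```python
      def trial_subtract_digits(value):
          digits = [0] * 5                              # ten-thousands .. ones
          weights = [(4, 2, 1)] + [(8, 4, 2, 1)] * 4    # leading digit only needs 4, 2, 1
          for pos, ws in enumerate(weights):
              power = 10 ** (4 - pos)
              for w in ws:
                  if value >= w * power:                # one trial subtraction
                      value -= w * power
                      digits[pos] += w
          return digits

      print(trial_subtract_digits(65535))               # [6, 5, 5, 3, 5]
      ```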

    • @ovalteen4404
      @ovalteen4404 2 роки тому +1

      @@possible-realities Looks like it, mostly. My idea was to add 6 on carry rather than add 3 before shift on >=5.
      And an alternative noted in the article I read, a small ROM could be turned into a lookup table to calculate the double with carry.

  • @amansaxena5898
    @amansaxena5898 3 роки тому +96

    The message could have been accumulated in reverse and then reversed back in one go using the stack

    • @dhyanais
      @dhyanais 3 роки тому +6

      That's what I thought

    • @urugulu1656
      @urugulu1656 3 роки тому +3

      do you actually need a stack for that

    • @ShanyGolan
      @ShanyGolan 3 роки тому +1

      Pop. Pop that gem 😁

    • @PieroUlloa
      @PieroUlloa 3 роки тому +29

      Since we never used the stack in the conversion problem, at the start of the subroutine we could have pushed a null character onto the stack, then, instead of printing, pushed each ASCII digit onto the stack. When the bin2dec subroutine ends, instead of using an index register, we could just pop values from the stack into the A register and print them until we read a null.

    • @misterhat5823
      @misterhat5823 3 роки тому +26

      Stack space is often limited. Many micros don't have a user accessible stack. You shouldn't get used to just throwing everything on the stack.

  • @useless.production
    @useless.production Рік тому +2

    "If you're envious of my extremely capable computer and its ability to display numbers..." This made me laugh 😂

  • @lonelyelk
    @lonelyelk 3 роки тому +9

    Hi Ben! Nice work! You don't need the second byte for your mod10 though. Unless you want to divide by anything that is greater than one byte, it does nothing.

  • @edwardj.r340
    @edwardj.r340 3 роки тому +20

    Wait when did you get over 500k subs?? I wasn't aware of it. Good on ya mate!!

  • @realcundo
    @realcundo Рік тому +4

    Brings back some memories (albeit on the Z80)! Another way is to just keep subtracting 10000, then 1000, then 100, and 10, and keep counting how many times we could subtract each value without underflowing.

  • @stephanebessette6471
    @stephanebessette6471 3 роки тому +1

    Now I finally understand why my teachers (way way way back) used to say that divisions were very costly. Thanks for this video, very informative.

  • @martinherbert699
    @martinherbert699 3 роки тому +1

    So pleased to see another video from you Ben!

  • @byronwatkins2565
    @byronwatkins2565 3 роки тому +3

    I would have put the null termination in the last location (message+5) and added characters at message+4, message+3, etc. This would have required a soft pointer, but the 6502's zero page addressing would have made this simple. Also, each time a decimal digit is determined, three more most significant bits are guaranteed to be zero in the dividend and we can use that fact to time-optimize the code. For 16-bits this is not entirely essential, but for 128 or 256 bits, it becomes quite noticeable. These are more advanced methods for viewers to consider and are not intended as criticisms.

  • @johnekare8376
    @johnekare8376 3 роки тому +8

    I just finished the first part of the clock module for the 8-bit computer. This is extra curriculum... btw, register manipulation seems a little bit like playing Towers of Hanoi.

  • @canuckprogressive.3435
    @canuckprogressive.3435 2 роки тому +1

    Thanks! I found a few 16 bit divide routines and wanted to expand them to handle 32 bits. I had no luck. The clear walk thru of this 16 bit divide routine gave me the deeper understanding I needed to easily make it into 32 bits.

  • @chillnsd1482
    @chillnsd1482 3 роки тому +1

    I love how much effort you put into your videos and kits.

  • @bob-ny6kn
    @bob-ny6kn 3 роки тому +5

    I remember my first victory in learning to code was when I needed to display the decimal value of an accumulated number. Computer programming pushed me into the unknown, and it was fun... and hard.

  • @HansLemurson
    @HansLemurson 2 роки тому +3

    There's another way to convert a binary number into Binary Coded Decimal that doesn't use division at all: Shift the number bit by bit into a zeroed register, and in that register, look at the bits in groups of 4. Any time a 4-bit group reaches or exceeds 5, then you add 3 to it. Just keep shifting and applying the conditional adding to the groups until the number has been fully shifted out of the old register. And now you're done! View the number as Hexadecimal, and it will look just like the Decimal value.
    It's basically a clever way of making the digits roll over every Ten instead of every Sixteen. Normally when a 5 gets shifted (doubled) it would just become an A. But if you add 3 before the shift, it becomes an 8 that when shifted becomes a 10, behaving just like decimal numbers should. It's really quite clever!
    I've done it in hardware before, but never tried a software implementation.
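
    A software sketch of that method in Python (illustration only; the sizes are chosen for a 16-bit input with 5 BCD digits):

    ```python
    def double_dabble(value, bits=16, bcd_digits=5):
        bcd = 0
        for i in range(bits - 1, -1, -1):
            for d in range(bcd_digits):                   # fix up every 4-bit group
                if ((bcd >> (4 * d)) & 0xF) >= 5:
                    bcd += 3 << (4 * d)                   # add 3 so the doubled digit rolls over at ten
            bcd = (bcd << 1) | ((value >> i) & 1)         # shift the next input bit in
        return bcd

    print(hex(double_dabble(65244)))                      # 0x65244 -- read as decimal digits
    ```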

  • @BGroothedde
    @BGroothedde 3 роки тому +1

    I appreciate the amount of effort you put into these videos Ben, thank you. Another great video, which makes me appreciate old school computing even more.

  • @daveturner5305
    @daveturner5305 3 роки тому +1

    Man, an assembler! 40 years ago I had to hand code in octal or hex. Does this take me back! Keep it up. back then memory was precious and any byte or processor cycle saved was important. This is why I'm a fan of the current plethora of small microprocessors. Back to basics and efficiency.

  • @thewebmachine
    @thewebmachine 3 роки тому +21

    I had a 'moment' about 13 minutes in..."Where the hell have I seen this before!?" Then, in the last 2 seconds of the video, it all became clear. I'm a relatively new Patreon supporter and saw the draft version a while back. 🤦‍♂️Seeing my name in the credits was my...um...clue. 🤣 Wow...glad I can still surprise MYSELF on occasion. 🙄

  • @silasxaviertheprod.9734
    @silasxaviertheprod.9734 3 роки тому +8

    I thought of an easier way to do this, it's not a new method and you probably mentioned it later on in the video, but I did come up with it on my own when I was designing a calculator in Minecraft.
    Here’s what I did.
    Step 1: I used bcd of course to represent numbers 1 through 9. (I later gave zero an arbitrary value of 10 in binary and gave the calculator rules on what to do when I type a zero) If you are wondering how I tackled number order entry when typing higher digit numbers like (124), I used a sort of 1 bit-snake that uses locking repeaters. It will take a small second of input and save it to a repeater. I had 4 lines of snakes like that (one for each bit) and they were all linked together in order to move at once even when a Line didn’t receive a 1 bit signal thus representing off bits (0s). This allowed me to enter four digit binary numbers one after the other and they would just slide over to the left when a new number was written. So I could type 1-2-4 on a keypad, it would go through an encoder and then be represented as 1, 01, 100 in binary.
    As for displaying the numbers on screen as decimal, I had made 3 copies of logic gate systems that decode the binary input for each digit into a single output representing the values (1-10) and then had those value lines linked to another set of 3 decoders linked to 7-segment display faces that would take the 1-10 signals and divide each one across seven segments. So if the value line representing 5 was triggered for a digit, the digit's decoder would read the 5 and follow the instruction I built for which segments to light up
    The last part was just to add the logic gates for addition, and use a switch to reverse them for subtraction, and then wire it up to only show the answers after I've entered both my numbers and pressed calculate. Then the answer will just be wired back into the same bcd to decimal decoders to show it on screen
    Btw the addition logic for this is not as simple as using just an adder since I have it where there are 3 different adders, one for each bcd calculation. You have to make Rules that include carrying a 1 to the next bcd adder when a calculation is larger than 9 and then the answers from the adders needs to go into subtracters that subtract 10 when you get a number higher than 9. Then you have your true answer
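
    A Python sketch of that per-digit BCD addition with carry (illustration only; digit lists are little-endian, ones digit first):

    ```python
    from itertools import zip_longest

    def bcd_add(a_digits, b_digits):
        result, carry = [], 0
        for a, b in zip_longest(a_digits, b_digits, fillvalue=0):
            s = a + b + carry                  # add the two digits plus any carry
            carry = 1 if s > 9 else 0          # a digit larger than 9 carries into the next place
            result.append(s - 10 if s > 9 else s)
        if carry:
            result.append(carry)
        return result

    print(bcd_add([4, 2, 1], [8, 9]))          # 124 + 98 = 222 -> [2, 2, 2]
    ```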

    • @alexp6013
      @alexp6013 3 роки тому +4

      It works, but I'm kind of afraid you're wasting about 5 values per hex digit, which, for larger applications, can add up to a lot (you also lose a lot of nifty advantages of computing in bare binary)

    • @silasxaviertheprod.9734
      @silasxaviertheprod.9734 3 роки тому +1

      @@alexp6013 I actually agree. I ran into a few problems with my design that made it inaccurate sometimes, so I tried fixing it many times but ended up scrapping the idea. I was hoping it would be simpler than using bare binary, but it turns out it required more just to do what bare binary could. The only reason I liked it was that it was easy to convert to decimal. I still haven't quite figured out how to make a decimal converter that can read and show an 11-digit binary number.

    • @FlatBroke612
      @FlatBroke612 Рік тому

      @@silasxaviertheprod.9734 there’s a joke somewhere here about showing your mother an 11 digit binary number...

  • @robertscott501
    @robertscott501 3 роки тому +1

    Ben, this is an excellent channel. I am aspiring to learn about electronics, and I spend a lot of time browsing videos on the topic: your explanations are clear, you include examples of the relevant circuits, and you provide schematics for those circuits. Few channels tick all of those boxes, so please keep up the good work. It blows my mind how many channels claim to teach electronics yet don't bother with a basic schematic, expecting me to reverse engineer it by looking at parts on a breadboard.

  • @vdubjunkie
    @vdubjunkie 3 роки тому +1

    Others have mentioned it, but I wanted to say just how thankful I am that you take the time. It took me a very long time to find "the right" presentation on how to solve the various cube puzzles and I have definitely found the right person to help me understand circuits and assembly language. It really is all about finding the presentation that works well for you.

  • @r_k_rishabh
    @r_k_rishabh 3 роки тому +4

    Bro🙏🙏, your videos are awesome. You are one of the greatest teachers of all time, and mark my words - if you continue to make these videos regularly you will become the greatest online educator of all time...
    Besides, I have a few suggestions for future videos:
    1) Build a team and upload good videos faster
    2) Make an app to take this online education to the next level
    3) Make videos on all the topics related to electronics, for example radio communication (I desperately want to know more about it), oscillators, FM and AM, GPUs, DC-to-AC conversion, and explanations of typical circuits such as FM receivers, cameras and so on...
    And one thing more: there are millions of students around the world who need teachers like you. So make this teaching your passion, profession or business (whatever you call it), but always be there for us...
    Finally, I want to say that you're a legend bro and we all love you...
    So, keep up the good work, bro...
    Love from India❤️

  • @barmetler
    @barmetler 3 роки тому +4

    38:25 I think it would be better to move the 5th to the 6th, then the 4th to the 5th, ... the 1st to the 2nd, and then store the character from the stack into the 1st spot. That way, in each of the 5 iterations, there is one load and one store, and no stack interaction. Basically start from the right and move to the left.
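    In C, the suggested shuffle would look something like this (a rough sketch; the 6-byte buffer and names are assumptions, not the video's code):

    #include <stdio.h>

    char message[7] = "?ABCDE";            // 6 characters plus a terminator, just for demonstration

    void push_front(char c) {
        // Walk from the right to the left: 5th -> 6th, 4th -> 5th, ..., 1st -> 2nd...
        for (int i = 5; i > 0; i--)
            message[i] = message[i - 1];   // one load and one store per iteration
        message[0] = c;                    // ...then drop the new character into the 1st spot
    }

    int main(void) {
        push_front('X');
        printf("%s\n", message);           // prints X?ABCD
        return 0;
    }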

  • @cuteswan
    @cuteswan 3 роки тому +1

    Your videos are fantastic. Thanks for sharing them with everyone.

  • @JLMoriart
    @JLMoriart 3 роки тому +2

    This stuff is so cool, thank you for making videos like this!

  • @martinh4982
    @martinh4982 3 роки тому +5

    I am so glad that people much smarter than me invented programming languages where you can just use "/" to divide numbers. Very useful to know what's going on behind the scenes, but holy moly, that's just madness!

    • @thorham1346
      @thorham1346 3 роки тому

      More modern CPUs have division instructions.

    • @Mnnvint
      @Mnnvint 3 роки тому +1

      @@thorham1346 They don't even have to be more modern, even the 8088 had a divide instruction... it was apparently slower than doing it this way, though!

    • @thorham1346
      @thorham1346 3 роки тому

      @@Mnnvint True. 68000 has them also.

  • @KennethSorling
    @KennethSorling Рік тому +4

    I never knew the humble 6502 wasn't capable of doing division or multiplication innately. Although thinking it over for a second makes you realize that a little bit of memory is required, something which the processor could assume absolutely nothing about.
    Does this mean that machines like the Commodore 64 had to implement these algorithms in the kernal (sic!)? Or was it deferred even to the BASIC ROM?

  • @greenstonegecko
    @greenstonegecko 3 роки тому

    Dude I recently discovered this channel... I've always wondered what the LOGIC is behind all the stuff in computers. Seems like nobody is able to properly explain it to me!
    Then FINALLY after months of searching I see a vid "sums in binary" and this dude just explains it all with logic gates!! YEESSS FINALLY!!
    I'm hooked on this channel! Definitely subbed!

  • @Martin5599
    @Martin5599 3 роки тому +1

    This is just the best channel... I can understand deep technical things I thought I never would... thank you very much Ben.

  • @Jabbl
    @Jabbl 3 роки тому +7

    Do you plan on showing how floating point numbers work?
    Are they using any of the base functions of a processor, or are they abstracted to another layer?

    • @agentdark64
      @agentdark64 Рік тому

      At the end of the day, the CPU only needs to check whether one floating point number is greater than, less than, or equal to another floating point value. Just like 00000001 in binary might mean the number 1 in decimal, in floating point 00000001 might represent its "lowest unit" (something like 0.000001), and the comparisons work much the same way the CPU checks whether one binary number is higher than another.
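      There is a real kernel to this: for non-negative, non-NaN IEEE 754 floats, comparing the raw bit patterns as unsigned integers gives the same ordering as comparing the values. A small C sketch (float_bits is my own helper, not a standard function):

      #include <stdio.h>
      #include <string.h>
      #include <stdint.h>

      // Reinterpret a float's bits as a 32-bit unsigned integer.
      static uint32_t float_bits(float f) {
          uint32_t u;
          memcpy(&u, &f, sizeof u);
          return u;
      }

      int main(void) {
          float a = 0.000001f, b = 1.5f;
          printf("float compare: %d\n", a < b);                         // 1
          printf("bit compare:   %d\n", float_bits(a) < float_bits(b)); // also 1
          return 0;
      }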

  • @rikschaaf
    @rikschaaf 3 роки тому +5

    First of all, since you never subtract more than 10, I think you can do this with 3 bytes instead of 4. Secondly, since numbers are often right-aligned, you could create an altered version of the print_char subroutine that decrements the counter instead of adding to it. That saves you from having to write the string to memory before printing it.
    Finally I'd like to say: great video! :)

    • @ratgr
      @ratgr 3 роки тому +1

      On a good implementation you would do a division by 2 (a rotate) and then a division by 5 instead of by 10

    • @ratgr
      @ratgr 3 роки тому +1

      @@kallewirsch2263 The thing is, there is a fast divide-by-5 algorithm. I don't remember the name, but there is a fast divider for all primes below 7 (I say this because I don't remember whether there was one for 11, but I wouldn't be surprised if the list is longer).
      And if you only require the mod part, there are even fast modulus algorithms for all Mersenne primes
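      I'm not sure which algorithm is meant here, but one well-known trick for small inputs is multiplying by a scaled reciprocal instead of dividing. A C sketch for dividing by 5 (the constant 205 and the shift of 10 are specific to this divisor; the result is exact for inputs 0..1023, which covers 8-bit values):

      #include <stdio.h>

      // Divide-by-5 without a divide: multiply by 205 and shift right by 10.
      // 205/1024 is just over 1/5, and the floor matches n/5 for all n in 0..1023.
      int main(void) {
          for (unsigned n = 0; n < 256; n++) {
              unsigned q = (n * 205) >> 10;  // quotient n / 5
              unsigned r = n - q * 5;        // remainder n % 5
              if (q != n / 5 || r != n % 5) {
                  printf("mismatch at %u\n", n);
                  return 1;
              }
          }
          printf("all 8-bit values check out\n");
          return 0;
      }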

  • @ollyk22
    @ollyk22 3 роки тому +2

    This is why I love programming in assembly. You have to work out what is going on within the micro itself, which gives you a good insight into the whole process!

  • @UltraAceCombat
    @UltraAceCombat 3 роки тому +2

    I can't believe you made this video when you did. I was just wondering how I was going to do this if I ever got back into electronics personally, and couldn't find a way to frame the question in a way that a search would help. Always love your videos and their uncanny ability to pop up shortly after I have an idea.

  • @abdu1998a
    @abdu1998a 3 роки тому +11

    There are 3 points I want to make:
    1) The left 16 bits were unnecessary, I think; we only used the first 8 bits of them.
    2) There is a more efficient algorithm to convert binary to binary-coded decimal (BCD) that shifts left each time, adding 3 beforehand to any BCD digit that is greater than or equal to 5. An example run: the 4-bit binary 1111, which is 15, needs 2 BCD digits, so 8 bits:
    0000 0000 | 1111
    0000 0001 | 1110
    0000 0011 | 1100
    0000 0111 | 1000
    0000 1010 | 1000 (since 0111 = 7 >= 5, we added 3 to this digit before the shift happens)
    0001 0101 | 0000
    At the end we have 1 and 5, which are the digits we wanted. Shift only by the number of bits in your initial input. To learn more about this algorithm, look up double dabble.
    3) You could have used a stack structure to invert the digits: push all of them onto the stack and pop all of them at the end. But you can't use pha and pla here, since return from subroutine reads the return address from the stack, and if you push something it tries to return to another address. You'd have to implement your own stack in RAM.

    • @unbeirrbarde
      @unbeirrbarde 3 роки тому

      Love it. I thought the same, but didn't know this clever approach until now. Thanks

    • @Mark-px3rq
      @Mark-px3rq 3 роки тому +3

      I don't think you want to skip out on learning a generic integer divide algorithm and jump straight to the BCD edge case. And the stack is available here if you want to use it; no need to try emulating your own.

    • @carelx7029
      @carelx7029 3 роки тому

      Watched this video and compared it with my AVR assembly code for 40-bit conversion. The same and yet different. But that is how it goes with code: you find the algorithm, convert it to assembly, make it work, and after that it has an input and an output. Now I remember I did double dabble. Nice refresher. On to the next thing I forgot.

    • @SimonClarkstone
      @SimonClarkstone 3 роки тому

      The double dabble is quite clever. It is effectively the "double" operation on an extra-wide register that is binary in the bottom and BCD on the top, repeated the exact number of times required to shift all the bits from the bottom register into the top one. It works well in hardware too as the dabbling step is parallelizable.
      I presume it can run backwards to convert BCD to binary too, again parallelizable.

    • @cigmorfil4101
      @cigmorfil4101 3 роки тому

      Doesn't the adjust have to be made to every BCD digit?
      Consider an 8 bit number: 1101 1110 ($DE) to BCD; it requires 3 BCD digits as the result (packed into 2 bytes):
      0000 0000 0000 | 1101 1110
      0000 0000 0001 | 1011 110-
      0000 0000 0011 | 0111 10--
      0000 0000 0110 | 1111 0---
      0000 0000 1001 | 1111 0---
      0000 0001 0011 | 1110 ----
      0000 0010 0111 | 110- ----
      0000 0010 1010 | 110- ----
      0000 0101 0101 | 10-- ----
      0000 0101 1000 | 10-- ----
      0000 1011 0001 | 0--- ---- [oops]
      0001 0110 0010 = 162
      Adjusting all BCD digits:
      ...
      0000 0101 0101 | 10-- ----
      0000 1000 1000 | 10-- ----
      0001 0001 0001 | 0--- ----
      0010 0010 0010 = 222 as it should be.
      The check needs to be made on every BCD digit, which means two checks for each byte containing packed BCD, and that rather wrecks the efficiency of doing the Decimal Adjust before the shift?

  • @samoconnor3633
    @samoconnor3633 Рік тому +5

    For uni I had to build a program in assembly to convert a decimal number to a nibble. You also have to factor in the ASCII value that needs to be converted 🤧

  • @braselectron
    @braselectron 3 роки тому +1

    Dear Ben Eater, I just would like to say that your channel is fantastic! The tutorials are very instructive and I love to watch them because it brings back memories of my days using an Apple ][ and programming in assembly language. Thank you for your great work. Good days I had in the 80's. I still have the Apple manuals, books, a few addon boards, DOS and ProDOS 5 1/4" floppy diskettes.

  • @aaronjamt
    @aaronjamt 3 роки тому +2

    The funny thing is, I watched the whole video when it came out (have you subscribed and turned on notifications yet?) and then, just 3 days later, found myself going back through the video, trying to use it in a project. Just gotta do a little manual 6502 -> Z80 ASM conversion, but it shouldn't be too bad. Love your videos!

  • @_Funtime60
    @_Funtime60 3 роки тому +3

    I was bored, so I went and implemented this in Java since I don't have a 6502. I had to remake subtraction as well, since Java doesn't have anything like an overflow/underflow flag. I had to remake addition too so I could make use of 2's complement subtraction. While I was at it I made multiplication too, all without +, ++, -, or --.
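    Not the commenter's Java, but the general carry-propagation idea looks roughly like this in C:

    #include <stdio.h>

    // Add two numbers using only bitwise operations: XOR gives the sum without carries,
    // AND (shifted left) gives the carries; repeat until no carries remain.
    unsigned add(unsigned a, unsigned b) {
        while (b != 0) {
            unsigned carry = (a & b) << 1;  // bits that generate a carry
            a = a ^ b;                      // sum without the carries
            b = carry;                      // add the carries in on the next pass
        }
        return a;
    }

    int main(void) {
        printf("%u\n", add(123, 456));  // 579
        return 0;
    }

    Subtraction then falls out of two's complement: a - b is add(a, add(~b, 1)).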

  • @RudyGuillan
    @RudyGuillan 3 роки тому +3

    I think you could have made it a bit simpler by having "mod10" be just 1 byte. If you're dividing by 10, the remainder will always fit in 8 bits. When you're subtracting 10, you don't need the highest byte; it will always be "0 - 0". So you'd only need 3 bytes of RAM, and the subtraction process would be simpler because you don't need to be fiddling with the Y register. Love your videos by the way! This is super nitpicky, I know :)

    • @MissNorington
      @MissNorington 6 місяців тому

      I need to divide by 12 for my project, and I might also use just 3 bytes. But still, I wish there were a trick to do faster division and mod

  • @guybellerby8298
    @guybellerby8298 3 роки тому +1

    Another great video ... always well prepared and informative

  • @NDjuggle
    @NDjuggle 3 роки тому +1

    Math was never my thing but man, seeing it used in this way is really fascinating. And just seeing all the stuff that makes computers tick on such a deep, fundamental level is so eye opening. After watching your videos I can really appreciate how powerful computers have become. And it just makes me think how much ingenuity it took to create these modern computers when this is what it takes just to convert binary to decimal. Amazing.

  • @WhateverTechComestoMind
    @WhateverTechComestoMind 3 роки тому +7

    Another amazing video from the Bob Ross of electronics. Looks like I have more work to do on my still unfinished 6502!

  • @guygrotke8059
    @guygrotke8059 Рік тому +2

    This could be done with MUCH less code by using ADC in decimal mode (or no ADC) depending on the bits in the binary number you are converting. The result is a BCD number which can easily be converted to ASCII by ORing each nibble with 0x30.

  • @bluerizlagirl
    @bluerizlagirl 3 роки тому +1

    I'm working on a 6502-based project which has to display decimal numbers. I already had a general-purpose division subroutine, so I used that to divide by 10, taking the remainder each time until the answer is 0 and storing the remainders + 48 (to get the ASCII code for the digit character) in memory; then displaying these ASCII codes in reverse order.
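    Not her 6502 code, obviously, but the same idea expressed as a tiny C sketch (using C's built-in division; one common way to get the "print remainders in reverse" effect is to fill a buffer from the right):

    #include <stdio.h>

    // Convert a 16-bit value to a decimal string by repeatedly dividing by 10 and
    // storing remainder + '0' (ASCII 48), filling the buffer from the right so the
    // digits come out in the correct order.
    int main(void) {
        unsigned n = 1729;
        char buf[6];                      // up to 5 digits for a 16-bit value, plus terminator
        char *p = buf + sizeof buf - 1;
        *p = '\0';

        do {
            *--p = (char)('0' + n % 10);  // remainder is the next (rightmost) digit
            n /= 10;
        } while (n != 0);

        printf("%s\n", p);                // prints 1729
        return 0;
    }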

  • @hipsterbaby
    @hipsterbaby 3 роки тому +2

    I'm always amazed by how perfectly prepared he is!

  • @okuno54
    @okuno54 3 роки тому +5

    Wow, I was really expecting you to initialize `message` to all-nul characters and have an extra piece of memory that points to the start of the message-so-far. That way you wouldn't have to shove all the characters down every time. I'm having a hard time describing it, so here's some C (I'm not familiar enough with the 6502 ISA):
    char *message  = (char *)0x0204;  // 6 bytes
    char *msgStart = (char *)0x02A0;  // 1 byte: index of the first character of the message-so-far
    ...
    *msgStart = 5;
    message[*msgStart] = 0;           // start with an empty, null-terminated string at the end of the buffer
    ...
    void push_char(char c) {
        (*msgStart)--;                // note the parentheses: decrement the index, not the pointer
        message[*msgStart] = c;
    }
    with appropriate changes to `print`

    • @riccardoorlando2262
      @riccardoorlando2262 3 роки тому

      That only works if you know the length of the final message, but if you convert a short number then *message ends up having a lot of null bytes in the beginning, instead of the actual message.
      That, or I misread your C. It's been a couple of years since I last wrote C.

    • @nejaahalcyon
      @nejaahalcyon 3 роки тому

      @@var67 Yes, but that still assumes that the message length is bounded. His way of handling it doesn't really have a set size limit. Even though he considers it to be 6 bytes max, it could go much longer.

  • @finnaustin4002
    @finnaustin4002 3 роки тому +12

    Last time I was this early, computer scientists were still scornful of assembly language

    • @alakani
      @alakani 3 роки тому +6

      Real programmers use butterflies!

  • @gooball2005
    @gooball2005 3 роки тому +1

    Great explanation and visualization as always!

  • @deltakid0
    @deltakid0 3 роки тому +2

    The topics you choose for making videos are just perfect. Division makes sense, since the CPU is able to do just addition, subtraction and multiplication. I imagine the next video could be binary coded decimal operations, or floating/fixed point decimal operations, since computers are used for financial operations too. Thank you.

    • @eDoc2020
      @eDoc2020 3 роки тому

      6502 doesn't have hardware multiply. I imagine software multiply will be coming up soon.

  • @Iroh72
    @Iroh72 3 роки тому +4

    Oh printf() my dear old friend, I see you in a different light now...

    • @amogus7
      @amogus7 3 роки тому

      you talking to null, use printf without a call

  • @canaDavid1
    @canaDavid1 3 роки тому +6

    How does this method compare to the double dabble algorithm?

    • @ggldmrd5583
      @ggldmrd5583 3 роки тому

      Cool, I see that I'm not the only one to bring up this algorithm in the comments. I think Double Dabble is easier to program in practice because it only needs two operations (shift, and adding 3 if the 4 least significant bits are >= 5). I built a sort of CPU using basic components; I think I'll try to program Double Dabble then compile it for my CPU. Anyway, I personally prefer Double Dabble and this video didn't change my opinion on it.

  • @Stoneman06660
    @Stoneman06660 3 роки тому +1

    Another thoroughly enjoyable and interesting video.
    I've got a whole pile of components; really need to see if I can get something like this up and running. Fascinating stuff.

  • @sensiblewheels
    @sensiblewheels 3 роки тому +1

    Looks like the simulation app at 16:00 was also written by you for this video. Quality shows! Wonderful video, as always.