A CPU With Just One Instruction!!!

  • Published 18 Dec 2024

COMMENTS • 939

  • @SyBernot
    @SyBernot 3 роки тому +282

    I bought a CPU on ebay that had only one instruction implemented. I believe it was Halt and Catch Fire.

  •  5 років тому +419

    I've seen a talk where a guy built a compiler which compiled C to just mov instructions. He called it the movfuscator.

    • @suvetar
      @suvetar 5 років тому +4

      I'd love to see a link for that if you (and Gary of course!) don't mind? Thanks!

    • @fss1704
      @fss1704 5 років тому +5

      suvetar just search movfuscator, there's a black hat presentation on it. Adiabatic computing is also interesting.

    •  5 років тому +10

      @@suvetar This is the talk (he mentions it there) ua-cam.com/video/HlUe0TUHOIc/v-deo.html

    • @suvetar
      @suvetar 5 років тому

      @@fss1704 Wow - thank you! That was incredible!

    • @fss1704
      @fss1704 5 років тому

      @@suvetar ua-cam.com/video/F_Riqjdh2oM/v-deo.html
      ua-cam.com/video/vFpNNrt-baE/v-deo.html
      Listen

  • @Viewpoint314
    @Viewpoint314 3 роки тому +64

    I learned this 50 years ago in a computer class. The instruction was subtract and store.

    • @arm-power
      @arm-power 2 роки тому

      Addresses for A, B and the branch target => 3 addresses total => a single address is usually 42 bits (for 4 TB of RAM) => 126-bit encoding for a single instruction.
      For comparison, RISC uses 16-bit (ATMEL MCUs, ARM Thumb) or 32-bit encodings (every high-performance RISC, including the new ARMv9).
      For comparison, CISC x86 uses variable-length encoding ranging from 8 to 120 bits (with an average instruction length of about 30 bits).
      So that's why nobody uses this single-instruction set - it needs about 4x more memory just to encode a single instruction (so it's a no-go for an MCU with tiny program space), and it needs 2 instructions plus one housekeeping instruction = 3 instructions total => roughly 12x more memory overall compared to a standard 32-bit RISC.
      BTW you can add vector computing (or multiple other instructions) by extending it to A, B, BranchAddr, VecA, VecB, etc. You can fold the whole x86 instruction set into one single mega-instruction and get full modern PC functionality this way. It will just result in an instruction several kilobits long.
      So this is a good mental exercise in what you should avoid when designing a good instruction set (or extension). Sad how few people realize this.
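
      For reference, a quick Python check of the arithmetic above (the 42-bit address width and the three-SUBLEQs-per-RISC-op ratio are the commenter's stated assumptions, not fixed facts):

        ADDRESS_BITS = 42                    # enough to byte-address 4 TB
        SUBLEQ_BITS = 3 * ADDRESS_BITS       # A, B and branch target => 126 bits
        RISC_BITS = 32                       # fixed-width RISC encoding

        print(SUBLEQ_BITS / RISC_BITS)       # ~3.9x memory per instruction
        print(3 * SUBLEQ_BITS / RISC_BITS)   # ~11.8x if 3 SUBLEQs replace 1 RISC op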

  • @0xZ0F
    @0xZ0F 5 років тому +395

    MOV is also Turing-complete.

    • @johnrickard8512
      @johnrickard8512 5 років тому +48

      Given the basic assumption that you have addition defined in memory and/or an adder memory-mapped to the CPU, along with a way to jump (there are many ways to do this), that is certainly possible.

    • @platin2148
      @platin2148 5 років тому +42

      xoreaxeax's movfuscator...

    • @fss1704
      @fss1704 5 років тому +12

      Platin 21, brainfuck is way more interesting...

    • @webgpu
      @webgpu 5 років тому +11

      @@platin2148 XOR EA, EA (zero out) MOV (value assignment to memory), OK - but you're still missing one important instruction: branch

    •  5 років тому +38

      @@webgpu That has been proved by Stephen Dolan at the Computer Laboratory, University of Cambridge: www.cl.cam.ac.uk/~sd601/papers/mov.pdf
      And there's a compiler named movfuscator that compiles valid C code into only MOVs (or only XOR/SUB/ADD/XADD/ADC/SBB/AND/OR/PUSH/POP/1-bit shifts/CMPXCHG/XCHG): github.com/xoreaxeaxeax/movfuscator

  • @uncaboat2399
    @uncaboat2399 3 роки тому +39

    I'm reminded of an old saying that, every piece of software has inefficiencies, meaning some instructions can be taken out without changing the functionality. And every piece of software has bugs in it, bits and parts that don't function as required. So by inference one can conclude that every piece of software can ultimately be reduced to a single instruction that doesn't work.

    • @SamualN
      @SamualN 3 роки тому +4

      "anything can be implemented in hardware" is a short version of what you said
      although true, it's not practical

  • @AexisRai
    @AexisRai 5 років тому +347

    The instruction set is bloat
    -Luke Smith, probably

    • @manimax3
      @manimax3 5 років тому +23

      He's probably already in the forest talking about it.

    • @surelock3221
      @surelock3221 5 років тому +12

      "Instruction sets are fuckin' gay bruh" - Albert Hitler

    • @JuusoAlasuutari
      @JuusoAlasuutari 5 років тому +17

      The world needs a HISC, i.e., half instruction set computer.

    • @AndreasDelleske
      @AndreasDelleske 5 років тому +10

      Juuso Alasuutari you mean semicomputers.

    • @JTGaut17
      @JTGaut17 3 роки тому

      - Linus

  • @gordonlawrence4749
    @gordonlawrence4749 5 років тому +40

    The earliest 6800 chips (not the 68000, those came way later) were sort of one-instruction in that if you executed it, it would never execute another instruction again. The assembler mnemonic was HCF: Halt and Catch Fire (and some really sarcastic developer really did put that in their assembler software). Basically there was a screw-up where it set one accumulator, reset the other, and then dumped them both simultaneously onto the internal data bus. All in one clock cycle too. Quite efficient.

    • @toboterxp8155
      @toboterxp8155 3 роки тому +4

      The 6502 also had a few HCF instructions that would activate a debug feature which just cycled the address bus through all possible values repeatedly. And the Pentium had its famous HCF instruction that locked the data bus and then requested an interrupt handle, which of course would never arrive.

  • @MonkeyPunchZPoker
    @MonkeyPunchZPoker 5 років тому +251

    "You had ONE instruction"

    • @larrybeckham6652
      @larrybeckham6652 5 років тому +6

      Fewer transistors, made as fast as possible, less heat, more reliable, more working chips per wafer, lower cost. Or more cores per chip.

    • @Syndikalisten
      @Syndikalisten 4 роки тому +11

      @@larrybeckham6652 but far more complicated compilers to write...

    • @larrybeckham6652
      @larrybeckham6652 4 роки тому +4

      @@Syndikalisten True. But get the compilers working and you have future savings that pay off.

    • @TheDoh007
      @TheDoh007 3 роки тому +14

      @@larrybeckham6652 Except it needs way more instructions to do the same things as a normal CPU, so effectively it's slower

    • @larrybeckham6652
      @larrybeckham6652 3 роки тому +1

      @@TheDoh007 life is full of tradeoffs

  • @DunnickFayuro
    @DunnickFayuro 3 роки тому +98

    This takes me back down memory lane, to when I was writing code *on paper* for educational purposes. I took computer classes and for like a whole semester we didn't even touch a computer!

    • @guym6093
      @guym6093 3 роки тому +16

      In the olden days you programmed punch cards. CPU time was more expensive than human brain power. You triple-checked your work! Often the code worked the first time. Otherwise it was a huge waste of money. My uncle used to program an old DEC VAX. I saw it when I was a kid; I am 56 now. LOL

    • @scene2much
      @scene2much 3 роки тому +10

      @@guym6093 For reference, I'm 62. DEC was a cool computing universe, until the PC changed the game.
      My most primitive experience was writing pseudo-code, converting it to assembler, then using the reference manual to convert it to machine code, then punching the machine code onto paper tape, then giving a command to the operating-system-less single-board 8086 to load the paper tape to an address and run it. Debug, rinse, repeat.
      Thus I became a firmware engineer.
      Thank goodness it wasn't a single-instruction CPU, and that it wasn't snowing and uphill from my dorm to the lab.

    • @raconvid6521
      @raconvid6521 3 роки тому +4

      I remember the good old days of making Minecraft computers. I also wrote binary code *on paper*.

  • @ianworthington2324
    @ianworthington2324 3 роки тому +14

    I remember many years ago someone saying you could implement a cpu with just a subtract instruction. Thanks for finally completing my knowledge! I can now die happy :)

  • @yorgle
    @yorgle 3 роки тому +43

    Next we need to build this CPU with just NAND gates. :D

    • @axelkoster
      @axelkoster 3 роки тому +6

      yes, like the GIGATRON

  • @BrokebackBob
    @BrokebackBob 5 років тому +171

    The 1 instruction is HALT.

    • @GaryExplains
      @GaryExplains  5 років тому +31

      LOL, no. 🙄

    • @mrpuneet1
      @mrpuneet1 5 років тому +1

      HALT or HLT

    • @fss1704
      @fss1704 5 років тому +5

      all you really need is a way to execute a lambda function....

    • @fss1704
      @fss1704 5 років тому

      @@GaryExplains Why does qed exist then? It uses the original universe bootstrap; everything does.

    • @NomoregoodnamesD8
      @NomoregoodnamesD8 5 років тому +7

      HCF

  • @AdvanRossum
    @AdvanRossum 3 роки тому +19

    One instruction with a complex inner instruction set ;)

    • @GaryExplains
      @GaryExplains  3 роки тому +6

      But still one instruction that needs no op codes because the same thing happens every time.

  • @mexykanu
    @mexykanu 5 років тому +41

    Machine code: Wtf
    "On the other hand there is an assembler"...
    Assembler: Wtf

  • @WerIstWieJesus
    @WerIstWieJesus 5 років тому +55

    Aristotle writes in his Physics that we only need three principles to describe the whole world. Here we have one example of how.

    • @JamilKhan-hk1wl
      @JamilKhan-hk1wl 3 роки тому +3

      It is doing 3 things in 1 instruction: loading from memory, subtracting one value from another, and moving to another address.

    • @WerIstWieJesus
      @WerIstWieJesus 3 роки тому

      @@JamilKhan-hk1wl Yes! I thought the same thing when I heard about the one-instruction-code.

    • @JamilKhan-hk1wl
      @JamilKhan-hk1wl 3 роки тому +1

      @@WerIstWieJesus it's like the knight's move in chess: it looks weird (an L shape) but that one move can reach every square

    • @WerIstWieJesus
      @WerIstWieJesus 3 роки тому

      @@JamilKhan-hk1wl You are a great philosopher.

  • @derekkonigsberg2047
    @derekkonigsberg2047 5 років тому +19

    A friend in college actually built a one-instruction computer (not sure if it completely worked) as an electronics project that he did somewhere around late high school (I think). He described that one instruction as "subtract, and branch if negative." Somehow that sounds similar to what I think "SUBLEQ" stands for.

    • @ExEBoss
      @ExEBoss 5 років тому +1

      That would be SUBLT (subtract and branch if less than)

    • @kephalopod3054
      @kephalopod3054 4 роки тому +1

      Apart from SUBLEQ, SUBGEQ, SUBLT, SUBGT, what other single instruction can we have?

    • @kephalopod3054
      @kephalopod3054 4 роки тому +1

      And SUBCMP with two jumping addresses (one for LT, one for GT).

    • @warrenarnold
      @warrenarnold 3 роки тому

      @@kephalopod3054 subLGBT opcodes are gay😅

  • @THE16THPHANTOM
    @THE16THPHANTOM 5 років тому +67

    on the other hand: there is a glimpse of how emulators are created.

    • @RussellTeapot
      @RussellTeapot 5 років тому +2

      What do you mean?

    • @kilrahvp
      @kilrahvp 5 років тому +20

      @@RussellTeapot "Creating equivalent code using different instructions" is what an emulator does and is similar to what's done here..

    • @RussellTeapot
      @RussellTeapot 5 років тому

      @@kilrahvp oh, I see

    • @greenaum
      @greenaum 5 років тому +2

      Only in so far as it's describing machine code, where emulators take the target machine's machine code and read it as data.

  • @stevemaurer8120
    @stevemaurer8120 5 років тому +54

    This only works because the instruction itself is doing multiple things at the same time: loading from two different addresses, storing into another, and branching. It's kind of like saying "this knife can open wine bottles and also provide light" and learning that it's a Swiss Army knife with a bottle opener and a mini-flashlight as part of its base.

    • @leexabyz
      @leexabyz 3 роки тому +12

      I kinda agree, but SUBLEQ isn't an unusual instruction. There are CPU's that have it as part of their instruction set.
      Sorry for necropost

    • @jondo7680
      @jondo7680 3 роки тому +23

      Your comparison doesn't fit. This instruction always does the same thing because it's just a single instruction. With your Swiss Army knife you choose the right tool for the job; this instruction always does the same thing. All inputs are treated equally. Not more, not less. You don't press the flashlight button to open a wine bottle. The Swiss Army knife is an example of a CPU that has different instructions for different situations. This example is more like a hammer: you can use it to drive a nail, to hammer a screw into a wall, to stop a housebreaker, to test a patient's knee reflex or to break the glass in an emergency - it's always the same instruction: hit x with strength y.

    • @pennyandrews3292
      @pennyandrews3292 3 роки тому +4

      @@jondo7680 Even a hammer generally has a claw on the back you can use to remove nails as well as pound them in. It's one tool that can be used in more than one way.

    • @DinoDiniProductions
      @DinoDiniProductions 3 роки тому

      nope

    • @DunnickFayuro
      @DunnickFayuro 3 роки тому +1

      @@leexabyz Well, as long as you get "necro-readers" like me, it's fine ;P

  • @jawtheshark
    @jawtheshark 5 років тому +93

    ... and now build this CPU with only NAND gates ;-)

    • @FunWithBits
      @FunWithBits 3 роки тому +10

      Silly me - I kind of did that but with xor gates. ua-cam.com/video/ISuV82p2vck/v-deo.html

    • @styleisaweapon
      @styleisaweapon 3 роки тому +12

      generally its NOR that is used when only one logic operation will be implemented - not sure why - the earliest integrated cpu designs frequently used just one kind of transistor for the bulk of the die area. Might have something to do with the cost of early NOR gate lithography vs NAND.

    • @FunWithBits
      @FunWithBits 3 роки тому +9

      ​@@styleisaweapon - good catch - I meant nor and not xor. (nor and nand are the two universal gates that can produce any other gate)

    • @drmosfet
      @drmosfet 3 роки тому +1

      It's nice to know that my memory is not playing tricks on me by remembering this useless fact correctly, thanks for the confirmation.🤪

    • @mandarbamane4268
      @mandarbamane4268 3 роки тому

      @@styleisaweapon I guess it's to get a better logic-0 level. Earlier they didn't use CMOS, but NMOS. In NMOS NOR gates the ground voltage is just one transistor away from the output; that's why.
      Now that CMOS is used, NAND is better because the stacking of PMOS transistors (in a NOR) would slow down performance.

  • @suvetar
    @suvetar 5 років тому +10

    I, for one, would love to see more detail about the Assembler, pretty please!

  • @cheaterman49
    @cheaterman49 5 років тому +13

    SUBLEQ is Turing-complete. Awesome!

    • @skilz8098
      @skilz8098 3 роки тому +1

      I tend to think that an arbitrary unit vector is Turing complete... How and why? Any and all unit vectors have all branches of mathematics embedded within its properties. From a unit vector you can construct the unit circle! Once you have the Unit Circle you have algebra, geometry, trigonometry, and calculus... Even the simplest of all mathematical equations 1+1 = 2 satisfies the Pythagorean Theorem... Simply because the equation of the Unit Circle fixed at the origin (0,0) is a single form or specific set to the Pythagorean Theorem. From the Unit Vector you can construct any other value, digit or number, every linear equation and all polynomials, every trigonometric function, and all geometrical shapes from angles between two vectors with infinite area to a triangle all the way to a full circle. Once you have your polynomials, you can then also define your derivatives and integrals thus giving you calculus. Just follow the properties of vector arithmetic... Also vector arithmetic can be defined as a subset of Lambda Calculus which in itself has already been declared as Turing Complete thus making the Unit Vector Turing Complete. Now, as for how many instructions, that's dependent on the end user for which operations - linear and or affine transformations they chose to apply to the unit vector.

  • @zzasdfwas
    @zzasdfwas 5 років тому +23

    Well, you could just consider a group of many instructions to be one instruction where the opcode is considered an argument to the one instruction. Of course, it wouldn't be a very simple instruction.

    • @arm-power
      @arm-power 2 роки тому

      Exactly, dude.
      Addresses for A, B and the branch target => 3 addresses total => a single address is usually 42 bits (for 4 TB of RAM) => 126-bit encoding for a single instruction.
      For comparison, RISC uses 16-bit (ATMEL MCUs, ARM Thumb) or 32-bit encodings (every high-performance RISC, including the new AArch64 ARMv9).
      For comparison, CISC x86 uses variable-length encoding ranging from 8 to 120 bits (with an average instruction length of about 30 bits).
      So that's why nobody uses this single-instruction set - it needs about 4x more memory just to encode a single instruction (so it's a no-go for an MCU with tiny program space), and it needs 2 instructions plus one housekeeping instruction = 3 instructions total => roughly 12x more memory overall compared to a standard 32-bit RISC.
      BTW you can add vector computing (or multiple other instructions) by extending it to A, B, BranchAddr, VecA, VecB, etc. You can fold the whole x86 instruction set into one single mega-instruction and get full modern PC functionality this way. It will just result in an instruction several kilobits long.
      So this is a good mental exercise in what everyone should avoid when designing a good instruction set (or extension). Sad how few people realize this.

  • @arkoprovo1996
    @arkoprovo1996 5 років тому +8

    This is supreme!!! ♥ Waiting for the next one in the series!!!

  • @JacobP81
    @JacobP81 5 років тому +1

    11:20 Doing some figuring on paper leads me to believe that maybe address -1 is the output. This is what I wrote when trying to decode this:
    a=0
    Z=0-p (p=[H])
    a=0-(0-p)
    a:0 (-1)
    If H==100 then
    a=0
    Z=0-100=-100
    a=0-(-100)=100
    a:100 (-1)
    This shows me that (if I'm interpreting this right) the value at address 100, which is the letter H, is being subtracted from the value at address -1. So that leads me to believe that address -1 is reserved for screen output.
    Thanks for taking the 30 seconds to explain that.

  • @mahyar24
    @mahyar24 3 роки тому +4

    I cannot believe the quality of your channel content. You are AWESOME!

  • @JohnJones1987
    @JohnJones1987 5 років тому +8

    Because you always change the sign (due to using subtract), if you need to add you just subtract from 0. This means addition takes 2 operations and 1 extra memory location (to store the 0) compared to also having an addition operator.
    Because you need extra entropy in the form of data storage to pull this off, the subtraction-only computer is actually an addition and subtraction computer, but one of the ops is stored with the data itself (assuming two's complement). You can't pull this off without the second operation being in the data itself, because ultimately negative values don't really exist in binary. Negative numbers are simply positive numbers (or rather vectors, where positive and negative are the direction of the vector) with a yet-to-be-executed operator encoded along with them. -2 is "subtract 2's worth of value". +2 is "add 2's worth of value". 2 is a number. -2 is nearly a number, but convention says if the second value isn't supplied, use 0. Fractions are the same: two numbers yet to be divided. They aren't values in their own right yet. It's a beautiful example of how the line between operations and data is far more blurred than one might think.
    The addition of 1 also needs a "1" to be stored somewhere. This requires another bit of memory to store, and so in totality the subtraction-addition-and-increment machine is as entropy-dense as a computer that has three opcodes: addXY, addXY (but sign-flip Y), and add1. They are the same amount of "complex"; it is only that the former has more constants and the latter more operations.
    TL;DR: It's only a single-operation computer because of how the term "operation" is defined, and really when one says "special subtraction" one means "the default is subtraction, but constants stored in data can also modify its behaviour". I'm not trying to nit-pick or be a dick or anything, I'm just trying to explain how you can't actually get Turing completeness with 1 operation; you have to be a little crafty with the definition of operation.
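
    A minimal Python sketch of that "subtract from 0" trick, ignoring the branch half of SUBLEQ (the memory layout here is made up purely for illustration):

      A, B, Z = 0, 1, 2          # three cells: the two operands and a scratch zero
      mem = [7, 5, 0]

      def subleq_data(mem, src, dst):
          """The data half of one SUBLEQ: mem[dst] -= mem[src]."""
          mem[dst] -= mem[src]

      subleq_data(mem, A, Z)     # Z = 0 - A  (Z now holds -A)
      subleq_data(mem, Z, B)     # B = B - (-A) = B + A
      subleq_data(mem, Z, Z)     # housekeeping: restore Z to 0
      print(mem[B])              # 12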

    • @nathangamble125
      @nathangamble125 5 років тому

      Depends on format. You can just have a +/- bit.

    • @JohnJones1987
      @JohnJones1987 5 років тому

      @@nathangamble125 Yeah, that's two's complement :) The first bit defines the sign and, in this instance, defines whether you're going to be doing addition or subtraction.

    • @styleisaweapon
      @styleisaweapon 3 роки тому +1

      I think it's crazy that _just yesterday_ I added a comment to a library I've been writing which reads exactly "//..because machine words have no sign", and today I am reading your YouTube comment. My library comment is about the efficiency of a function on signed vs unsigned values. Not all programmers may realize that conversion from signed to unsigned is a completely free operation because it's abstract - it may still lead to more or less efficiency because the abstract idea decides which future instructions to use (on AMD64 there are more forms of the signed multiply instruction than there are for unsigned multiply, even a 3-operand form), but that's a whole 'nother kettle of fish.

    • @JohnJones1987
      @JohnJones1987 3 роки тому

      @@styleisaweapon Haha, well, those who forget the past are doomed to refactor code in the future or something :)

    • @salman-11924
      @salman-11924 2 роки тому

      I was foolish enough to believe it literally. So I paused the video and attempted to implement addition using subtraction (easy); multiplication and division then follow. I could implement inverting using subtraction, but I realized it is impossible to implement AND and OR using subtract alone. Thus, no Turing completeness. The fact that it is a single assembly instruction is true, but it is never a single data operation. The LEQ part implements all the branching and looping necessary, which is fundamentally based on the three basic boolean gates.

  • @mbk0mbk
    @mbk0mbk 5 років тому +50

    This reminds me of the NAND gate, one of the universal gates.

    • @tektel
      @tektel 5 років тому +3

      My thoughts exactly

    • @gordonlawrence4749
      @gordonlawrence4749 5 років тому +7

      Both NAND and NOR can be used. I can never remember which is which but one was best for TTL and the other best for NMOS (which has been replaced by CMOS now anyway). You can make a basic 2 input NOR with three resistors and an NPN transistor that more or less works.

    • @zzasdfwas
      @zzasdfwas 5 років тому +2

      Yeah, NAND and NOR are just dual -- isomorphic with respect to swapping high and low.

    • @jeroenstrompf5064
      @jeroenstrompf5064 5 років тому +1

      Indeed. Would there somehow be a link between the two? I forgot the name of the mathematical rule to do stuff with NAND and OR. There was something with a double negator and swapping OR and AND

    • @betta67
      @betta67 5 років тому +4

      @@jeroenstrompf5064 I don't know about the double negation (rule), but the swapping, if I recollect correctly, is given by De Morgan's theorem. There are two De Morgan rules or theorems:
      (1) Two separate terms NORed together are the same as the two terms inverted (complemented) and ANDed, for example: not(A+B) = not A . not B
      (2) Two separate terms NANDed together are the same as the two terms inverted (complemented) and ORed, for example: not(A.B) = not A + not B
      (where + means OR and . means AND)

  • @luisramalho603
    @luisramalho603 3 роки тому

    When I clicked on this and saw you starting to explain the SUBLEQ instruction, I immediately imagined something like what you described.
    Maybe that's because, in my time as an IT engineering student, I learned to program for the URM machine. The Unlimited Register Machine has only 4 instructions. All registers start at zero. Instruction Z(r) resets/zeroes the register, instruction S(r) puts into the register the successor of its value, instruction T(r1, r2) transfers/copies from register r1 to register r2, and J(r1, r2, i) jumps to instruction i if registers r1 and r2 are equal. So one can create programs to add and subtract, and with those we can create multiplication and division, and so on.
    Curiosity: all programs for the URM are enumerable, that is, there is a way to transform any program into a single number, or to transform a number (no matter how big it is) back into a program.
    As for SUBLEQ, you didn't tell us how to set values in memory. Will 7, 5, 1, 25, or 173 magically appear in memory if we have to use them?
    However, this OISC is awesome!
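
    A small Python sketch of such a URM, based only on the four instructions as described above (the program encoding and the way input registers are preloaded are my own assumptions):

      def run_urm(program, regs):
          pc = 0
          while pc < len(program):
              op, *args = program[pc]
              if op == 'Z':   regs[args[0]] = 0                  # Z(r): zero register r
              elif op == 'S': regs[args[0]] += 1                 # S(r): successor
              elif op == 'T': regs[args[1]] = regs[args[0]]      # T(r1, r2): copy r1 into r2
              elif op == 'J':                                    # J(r1, r2, i): jump if equal
                  r1, r2, i = args
                  if regs[r1] == regs[r2]:
                      pc = i
                      continue
              pc += 1
          return regs

      # r2 = r0 + r1: copy r0 into r2, then increment r2 and a counter r3 until r3 equals r1
      add = [('T', 0, 2), ('Z', 3), ('J', 3, 1, 6), ('S', 2), ('S', 3), ('J', 4, 4, 2)]
      print(run_urm(add, [3, 4, 0, 0, 0]))   # [3, 4, 7, 4, 0]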

  • @12Tsurugi
    @12Tsurugi 5 років тому +21

    Now imagine being the guy tasked to write a C compiler to this instruction set

    • @webgpu
      @webgpu 5 років тому +8

      i think it is easy if you just map each assembly instruction to the corresponding long list of SUB's

    • @alerighi
      @alerighi 5 років тому +6

      Well, they wrote a C compiler that only uses MOV instructions for x86 (it turns out that MOV is Turing-complete by itself). It's called the movfuscator.

    • @JuusoAlasuutari
      @JuusoAlasuutari 5 років тому +6

      Check out movfuscator. This is rule 34 for compilers: if you can think it, it probably exists already.

    • @JGunlimited
      @JGunlimited 5 років тому +1

      The link shared in the video description contains a C compiler for SUBLEQ (though only for a subset of the C language)

    • @styleisaweapon
      @styleisaweapon 3 роки тому +2

      writing a compiler isn't hard - I've done it several times in my life - it really really isn't - what's hard is doing it well

  • @edanne3308
    @edanne3308 3 роки тому +1

    Hey Gary! Another neat One Instruction Set Computer design is a Transport-Triggered Architecture, where you only have MOV, but by MOVing data into special registers you can perform arbitrarily defined operations on that data; you might have two registers, ACC and SUB, and to compute a minus b you MOV a ACC, MOV b SUB, where the SUB register takes in data and subtracts it from ACC at every clock cycle.
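
    A toy Python model of that idea, using the ACC/SUB register names from the comment (a sketch of the concept, not any real transport-triggered chip):

      class ToyTTA:
          def __init__(self):
              self.acc = 0

          def mov(self, value, dst):
              # The only operation is a move; arithmetic is a side effect of the destination.
              if dst == 'ACC':
                  self.acc = value
              elif dst == 'SUB':        # writing SUB triggers: ACC = ACC - value
                  self.acc -= value

      m = ToyTTA()
      m.mov(10, 'ACC')   # a
      m.mov(3, 'SUB')    # a - b
      print(m.acc)       # 7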

  • @baneblackguard584
    @baneblackguard584 5 років тому +3

    You should be able to design a CPU with only one instruction and two registers: one register contains the set of bits to be changed, the other register contains a set of bits that defines which bits are to be flipped. It would be a nightmare to design the rest of the computer around the CPU, and I wouldn't want to program for it, but you could do it. It would be up to the programmer to decide which bits needed to flip in any particular situation, making it not fun to program, but you could do it and it would be a very fast CPU. Because it's just the one instruction, it would be relatively simple to design the system for as many bits as you wanted. You could design the machine to use 256-bit registers, 1024-bit registers, whatever. Actually getting it to accomplish anything would be a nightmare, but if someone were willing to tackle making a usable programming language for it, it would be very, very fast.
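
    The data path being described is essentially one wide XOR with a mask register; a tiny Python model (the width and the example values are arbitrary):

      def flip(data, mask, width=8):
          # flip exactly the bits selected by the mask, within a fixed register width
          return (data ^ mask) & ((1 << width) - 1)

      print(bin(flip(0b10100001, 0b00000011)))   # 0b10100010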

    • @baneblackguard584
      @baneblackguard584 5 років тому

      @Richard Vaughn I can see why you would want to add that instruction, but it shouldn't be necessary as long as it's pre-established where the CPU is getting and putting data, and the programmer designs the program accordingly. As long as you know beforehand where the CPU is going to get and put data in a given cycle, you can design the program accordingly and pre-initialize data in the appropriate memory locations. It would be a nightmare to sort out, but it SHOULD be possible.

  • @mandolinic
    @mandolinic 3 роки тому +1

    How to create a one instruction computer: Take ANY instruction set, redefine every instruction as EXEC with options. Job done!

    • @GaryExplains
      @GaryExplains  3 роки тому

      No, because this has no options, it does the same thing every time. Adding qualifiers or options is just disguising multiple instructions with the same op code. That is why MOV on x86 doesn't qualify for this. Notice there is no instruction code in these programs.

    • @mandolinic
      @mandolinic 3 роки тому

      @@GaryExplains Well of course. Couldn't you tell I wasn't being serious?

    • @GaryExplains
      @GaryExplains  3 роки тому +1

      No, sorry I couldn't. My bad. But in my defense: A) There were no emojis to suggest you were being funny. B) I get plenty of stupid comments all day by people who are being serious but they are completely wrong about what they write.

    • @mandolinic
      @mandolinic 3 роки тому

      @@GaryExplains It's my dry sense of humour. Cheers!!

  • @name_here___4070
    @name_here___4070 5 років тому +4

    As it turns out, the MOV instruction is also Turing-complete. I remember someone made a compiler that uses only MOV instructions a few years ago.

    • @styleisaweapon
      @styleisaweapon 3 роки тому +1

      That requires being able to MOV into the instruction pointer, an operation that, while on the surface it looks like any other MOV instruction, is a separate instruction with a separate opcode and only a single operand. I am referring to that compiler you mention. You will find that not all assemblers use syntax as ambiguous as the standard "Intel syntax", which indicates that the syntax does not define the instruction. Even among the 64-bit general-purpose registers, there are 4 register-to-register MOV opcodes in play (legacy to legacy, legacy to r8+, r8+ to legacy, and r8+ to r8+).

    • @leexabyz
      @leexabyz 3 роки тому

      If you are referring only to compiler, ignore the following.
      iirc, some architectures have the instruction pointer in memory. So in that sense, it is still valid.
      The intention here is more towards making a new CPU and instruction set that is Turing complete, and not really to find an existing instruction that makes all existing CPU's Turing complete on its own.

  • @brixomatic
    @brixomatic 5 років тому +2

    Stellar video! Very nice. Loved every bit of it.

  • @BachPhotography
    @BachPhotography 5 років тому +3

    Cool! Reminds me of a textbook I read at university called Nand to Tetris, which went through the steps of building an entire VM with a playable version of Tetris, starting from just a NAND gate.

    • @bcn1gh7h4wk
      @bcn1gh7h4wk 5 років тому +1

      rough size of that build?
      I bet it would be huge.

  • @ethanlee9633
    @ethanlee9633 3 роки тому +2

    All a computer does is essentially add numbers together anyways. It does this billions of times a second. From addition, all other mathematical operations can be formed such as division, multiplication and subtraction. From there, further layers of abstraction can be built upon to form a fully functional computer. Anyone interested in learning more should study discrete mathematics and take an introduction to computing systems.

  • @KuraIthys
    @KuraIthys 5 років тому +18

    That's an impressive idea, given that a textbook turing machine, while only slightly more complex, is NOT a single instruction machine... XD

    • @LoganKearsley
      @LoganKearsley 5 років тому +7

      That depends on how you define "instruction" in the context of the Turing architecture. A Turing machine certainly can be modelled as a single-instruction machine whose single instruction takes the state transition table as input.

  • @hunglekhanh2007
    @hunglekhanh2007 3 роки тому +1

    Subtract, check the value, and conditionally jump, all in a single instruction!
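
    For the curious, a minimal Python interpreter for that one instruction. The halt-on-negative-branch and the "write to address -1 prints a character" output conventions vary between SUBLEQ implementations; these are just common choices, not necessarily the video's exact scheme:

      def run_subleq(mem):
          pc = 0
          while pc >= 0:
              a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]   # one instruction = three addresses
              if b == -1:                                   # convention: "store to -1" prints a character
                  print(chr(mem[a]), end="")
                  pc += 3
                  continue
              mem[b] -= mem[a]                              # the subtract
              pc = c if mem[b] <= 0 else pc + 3             # branch if the result is <= 0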

  • @drake47367
    @drake47367 5 років тому +9

    My brain has only one instruction, and it's NOP.

  • @antonnym214
    @antonnym214 5 років тому

    Excellent video! Wow, I would hate to program that processor, though. Writing a language like Sweet16 would be laborious. I designed a CPU instruction set with only 16 instructions, so that we could keep the opcode to 4 bits. My instruction set is: ADD, AND, NOT, OR, SHR (shift right), SUB, XOR, LDA #, PSH, POP, RDA (read memory into A), STA (store), JC (jump if carry set), JN (jump if negative), JZ (jump if zero)... That's it. I used to program in Z-80 assembly, so this is what I thought I could live with if I had to write a language for it. I'm amazed you have done that! I have subscribed.

  • @nneeerrrd
    @nneeerrrd 5 років тому +9

    SUBLEQ is actually a complex instruction, as it contains a branch. So your computer is actually a CISC machine. Period.
    Proof: if I follow your logic, I can then make a "single-instruction" CPU with an instruction like SUBLEQSINJMPRET... Technically it would be a "single instruction", with a lot of implicit logic behind the scenes.

    • @varkokonyi
      @varkokonyi 5 років тому

      That depends on what we define as an instruction. It always does the subtraction, after all

    • @nneeerrrd
      @nneeerrrd 5 років тому

      @@varkokonyi my instruction also does the subtraction, always

    • @fuckoffpleaseify
      @fuckoffpleaseify 5 років тому +2

      This is exactly what I was thinking. SUBLEQ is neat, but it's a little bit disingenuous to consider it a true single instruction machine code.

    • @PBMS123
      @PBMS123 5 років тому

      @@fuckoffpleaseify Yeah, exactly this. It's not truly single instruction. Everything after the subtract is just more instructions.

  • @ExEBoss
    @ExEBoss 5 років тому +1

    The machine code at 10:40 will keep looping until address 4 underflows into positive numbers, at which point it will do weird things.

    • @GaryExplains
      @GaryExplains  5 років тому +1

      Yes, that is correct. The code is given just as an example of how the machine works. It isn't, as you say, actually useful.

  • @altamiradorable
    @altamiradorable 5 років тому +3

    would be curious to program an FPGA to do just that !

  • @0xAA55_
    @0xAA55_ 3 роки тому

    @Gary Explains at 6:30, why not just set inc1=-1 and do a=a-inc1? You don't need the zero register, so you halve the number of needed instructions.

    • @GaryExplains
      @GaryExplains  3 роки тому

      Yes, that would be more efficient, but the point of the demonstration was to show that if you want to add a number, in this case 1, then you start with the number you want to add and then turn it negative. It would be the same if you wanted to add 7: you want +7, but to do that you need to make it -7. If you were adding a + b then you would need the negative value of b, etc.

  • @gazlink1
    @gazlink1 5 років тому +4

    Nice video, it'd be good to see that CPU implemented in logic..
    And not just VHDL or simulated logic... Using some old fashioned NAND gates.

  • @e-maxwell
    @e-maxwell 5 років тому +1

    I want to see how all that works. Please make more videos about it!

  • @QuantumFluxable
    @QuantumFluxable 5 років тому +6

    Now implement it with NAND-gates!

    • @jakebrodskype
      @jakebrodskype 5 років тому

      Funny that you mention it. I knew a guy who did just that in 1981.

    • @camgere
      @camgere 3 роки тому

      Two levels of NAND gates can express any sum-of-products equation. For you minimalists, any multi-bit computer can be emulated with a 1-bit computer.

  • @batlin
    @batlin 5 років тому +1

    It's kind of funny that the ultimate RISC is also the ultimate CISC -- the main distinguishing feature between RISC/CISC is not really fewer instructions, but restriction of source/destination operand types to exclude memory access in RISC, so they used to be called "load-store" architectures because you had to use instructions like LW and SW to transfer data between memory and registers. Although MOV on x86 is even more bloated than SUBLEQ, allowing for all sorts of crazy operations (like copying a word between two memory locations and incrementing index registers at the same time in one instruction).

  • @sudipdas4596
    @sudipdas4596 5 років тому +34

    What are the advantages of using this type of CPU?

    • @GaryExplains
      @GaryExplains  5 років тому +114

      None

    • @alexandruilea915
      @alexandruilea915 5 років тому +5

      @@GaryExplains So why would anyone spend their time building a language for it? I am the type of guy that says if it is not broken, don't change it.

    • @rivox1009
      @rivox1009 5 років тому +54

      @@alexandruilea915 Research that can lead to innovation. Why would anyone have spent their time researching quantum physics when classical physics worked so well for so long? Consider that without quantum physics the computing revolution wouldn't have happened.
      Research can sometimes be without a clear practical real world advantage as its end, and still be extremely valuable later on.

    • @GaryExplains
      @GaryExplains  5 років тому +23

      @@alexandruilea915 Words fail me.

    • @shikhanshu
      @shikhanshu 5 років тому +15

      extremely simple, and so ultra low power and cheap.. disadvantage is that it will be slow.. (as slow as it gets)

  • @WeslomPo
    @WeslomPo 3 роки тому

    That's a clever, esoteric piece of software. I like it!

  • @WerIstWieJesus
    @WerIstWieJesus 5 років тому +11

    If we were able to transform such code efficiently into the assembler of every CPU, we would have a platform-independent assembler.

  • @zyxzevn
    @zyxzevn 3 роки тому

    I designed a MOV computer in 1990. It has registers only, many of which are specialized and act as ports to functional units. The functional units are the ALU, stack, jumping, memory, etc. Constants could be taken from the next instruction.
    It can be extremely small, or it can do multiple instructions in parallel. I mainly focused on the latter, and it seemed to work well for digital signal processors. Instruction size of 16 bits, combined into words of 32 or 64 bits. DSPs often use wide instructions. Moves can also happen while the functional unit is still working, like memory access. This allows very simple parallelism if organized correctly.
    It never made it to hardware, though. Later I added a few condition opcodes for moves to the same place. This can be used when you want an extremely small instruction size of 8 bits with only 16 registers/ports.

  • @ano1nymus1
    @ano1nymus1 5 років тому +16

    Having multiple instructions is bloat.

  • @AndersJackson
    @AndersJackson 3 роки тому +1

    You can still have different mnemonics for different uses of the SUBLEQ instruction. That is often done in ordinary architectures, like MIPS and ARM.

    • @GaryExplains
      @GaryExplains  3 роки тому

      There are no mnemonics since there is only one instruction with no variations.

  • @voncheeseburger
    @voncheeseburger 3 роки тому +5

    I'm gonna write a Xilinx RTL for this, I think

  • @jeroenstrompf5064
    @jeroenstrompf5064 5 років тому +1

    Wonderful! I heard about the MOVE processor in the '90s. Nice to see the concept now in practice. And although I haven't done machine code since halfway through the '90s, your video feels like slipping on an old glove :)

    • @GaryExplains
      @GaryExplains  5 років тому +2

      It is called the movfuscator, but it uses different types of move (to address, from value, etc.) so technically it is actually 3 or 4 instructions (I haven't checked exactly how many).

  • @bcn1gh7h4wk
    @bcn1gh7h4wk 5 років тому +4

    Position Z is acting as a register: it holds a value, helps in processing, and is returned to 0.
    It's a register, for all intents and purposes.

    • @greenaum
      @greenaum 5 років тому +1

      It's a memory address, that's what all memory does.

  • @bbbl67
    @bbbl67 5 років тому +1

    Amazing, I wouldn't have even thought of this! If I had to guess, I would've thought maybe 10 instructions was the minimum possible, and I wouldn't have even known which of those instructions were necessary. Never would've guessed just 1 instruction, and that's an implied instruction at that, so no real opcode for it. I wonder if people are going to use this 1 instruction instruction-set as a proof of concept for future processors?

  • @lithostheory
    @lithostheory 5 років тому +4

    Terry Davis would have liked this...

  • @benhongh
    @benhongh 3 роки тому

    It’s often forgotten that the x86 is a RISC design with one instruction only. That instruction is PLEASE, which is often omitted because programmers are rather rude. Joke aside, this is an awesome video.

  • @hrnekbezucha
    @hrnekbezucha 5 років тому +5

    mov in x86 is Turing complete. Just saying. There is a movfuscator, a single instruction compiler for x86.

    • @GaryExplains
      @GaryExplains  5 років тому +5

      Indeed it is. It uses lookup tables for arithmetic, which is an interesting solution!
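
      A hedged Python sketch of the lookup-table trick: addition done purely by indexing a precomputed table, i.e. only loads and stores at run time. This illustrates the general idea, not the movfuscator's actual code:

        N = 256
        ADD_TABLE = [[(a + b) % N for b in range(N)] for a in range(N)]   # built ahead of time

        a, b = 200, 77
        row = ADD_TABLE[a]      # "mov" using a as an index
        result = row[b]         # "mov" using b as an index
        print(result)           # 21, i.e. (200 + 77) mod 256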

  • @_general_error
    @_general_error 2 роки тому

    You basically take control of load, store, sub and jlz instructions in one instruction. Nice.

  • @StefanReich
    @StefanReich 5 років тому +3

    I implemented a SUBLEQ machine as my 20% project at Google

    • @BokoMoko65
      @BokoMoko65 5 років тому

      In hardware ? or emulated ?

    • @StefanReich
      @StefanReich 5 років тому

      @@BokoMoko65 Emulated, in Java

    • @laurinneff4304
      @laurinneff4304 5 років тому

      @@StefanReich I did an emulated version today in JavaScript, and it was pretty easy

  • @JGunlimited
    @JGunlimited 5 років тому +1

    Would love the follow up video mentioned!

  • @mrpuneet1
    @mrpuneet1 5 років тому +4

    2:19 And I was thinking who is Link in description.
    lol

    • @nir8924
      @nir8924 5 років тому

      If it hasn't been added, it would just be OCD :)

  • @dontneedtoknow5836
    @dontneedtoknow5836 3 роки тому

    Glad that we figured out that to multiply or divide by 2 you just shift the bits once.
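
    For example, in any language with shift operators:

      x = 22
      print(x << 1)   # 44: shifting left by one bit multiplies by 2
      print(x >> 1)   # 11: shifting right by one bit divides by 2 (dropping the remainder)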

  • @1MarkKeller
    @1MarkKeller 5 років тому +11

    *GARY!!!*
    *Good Morning Professor!*
    *Good Morning Fellow Classmates!*

  • @schnueffelnase
    @schnueffelnase 3 роки тому

    That reminds me of the "Computer '74", a project of the Dutch magazin "elektuur". The name not only stands for the year, but also for the piles of 74xx chips used to build that monster. It was programmed in octal, it had no opcodes but every command consisted of two addresses: from and to. It also had a hardware multiplier and a hardware divider!! It has not only memory addresses, but also special addresses for the adder, divider, and of course an address for the program counter. Funfact: It already had a diode matrix for the microcode.

    • @schnueffelnase
      @schnueffelnase 3 роки тому

      Sorry, forgot the link: archive.org/details/Computer74

  • @blacksheepshepherd
    @blacksheepshepherd 5 років тому +3

    Simple; Power On, Power Off.

  • @WerIstWieJesus
    @WerIstWieJesus 5 років тому

    In this way the execution of a program can be visualized in the square address x address as a temporal progression from one point to another. An instruction is a point in the space address x address x address. The program itself is a discrete sequence of points in this space.

  • @nesnioreh
    @nesnioreh 5 років тому +4

    Great video. There is also a really nice talk here on UA-cam by Christopher Domas on how you can actually get away with only using the MOV instruction on x86. Branching is the biggest issue there, so you basically have to loop through all your code for every branch. But it's really neat, and a cool obfuscation method :) Look up "the movfuscator" on youtube.

    • @fss1704
      @fss1704 5 років тому

      Good lord, don't let me fall into the hands of these kinds of motherfuckers. Imagine if they polymorphize the shit out of it.

  • @MatthewPegg
    @MatthewPegg 5 років тому

    All boolean operations can be made with just a series of NAND gates. For example, tie the inputs of a NAND gate together and you get a NOT gate. Put that NOT gate on the output of another NAND gate and you now have an AND gate. Arrange NAND and AND in parallel and you can create OR/NOR; with a bit more you can create XOR. You now have the 6 basic boolean operators from which you can design any digital circuit, including a CPU. A NAND gate is the simplest gate to make as it only requires two transistors.
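
    A quick executable version of that claim in Python, using one standard set of constructions from NAND (the exact gate arrangements differ slightly from the wording above):

      def NAND(a, b): return 1 - (a & b)

      def NOT(a):    return NAND(a, a)             # tie both inputs together
      def AND(a, b): return NOT(NAND(a, b))        # NAND followed by NOT
      def OR(a, b):  return NAND(NOT(a), NOT(b))   # De Morgan: a OR b = NOT(NOT a AND NOT b)
      def NOR(a, b): return NOT(OR(a, b))
      def XOR(a, b): return AND(OR(a, b), NAND(a, b))

      print([XOR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]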

  • @emilemil1
    @emilemil1 5 років тому +2

    Why not copy a to b like this:
    b = z - a
    b = z - b
    Assuming z is always 0, this should work just fine but take half as many operations.
    Edit: Looked it up, and it doesn't work because those instructions can't be represented in SUBLEQ. All instructions must be precisely of the form: x = x - y
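
    For reference, the usual SUBLEQ copy that does respect the x = x - y form takes four steps through a scratch cell kept at zero (sketched here in Python, ignoring the branch half of the instruction):

      def subleq_data(mem, src, dst):
          mem[dst] -= mem[src]      # the data half of one SUBLEQ

      A, B, Z = 0, 1, 2
      mem = [7, 99, 0]
      subleq_data(mem, B, B)        # B = 0
      subleq_data(mem, A, Z)        # Z = -A
      subleq_data(mem, Z, B)        # B = 0 - (-A) = A
      subleq_data(mem, Z, Z)        # Z back to 0
      print(mem)                    # [7, 7, 0]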

    • @emilemil1
      @emilemil1 5 років тому

      @Richard Vaughn z is a constant 0, similar to inc1. It only has to be set once, it's not part of the copy operation.

    • @emilemil1
      @emilemil1 5 років тому

      @Richard Vaughn I'm referring to z in the way the video uses inc1, that's it. It assumes inc1 is 1, I assume z is 0. If you have an issue with that then would you say that the add example in the video is incorrect?

    • @emilemil1
      @emilemil1 5 років тому

      ​@Richard Vaughn Yes you can assume it's zero, it's done in the following example, it's done on the wiki page.

  • @kasuha
    @kasuha 3 роки тому

    A concept we discussed at school way back was a single-instruction CPU whose only instruction was MOVE: take data from one location in memory and write it to another location. Now, it was cheating somewhat, since it relied on the existence of coprocessors which would watch certain memory locations, do arithmetic operations on them, and put the result into another location. But while the SUBLEQ CPU is essentially an academic toy, as pretty much any calculation is extremely inefficient, the MOVE CPU is much closer to how a real microcontroller works. And it was still a single-instruction CPU in the sense that all instructions were move instructions.

  • @_god183
    @_god183 5 років тому +3

    It must have been bad at listening to instructions. An extra stupid CPU.

  • @helmutzollner5496
    @helmutzollner5496 4 роки тому

    Hi Gary!
    Interesting, but I am not sure the increase in the number of instructions really gives a massive improvement in performance, even if we eliminate the instruction decoding.
    I read about a single-instruction-set CPU developed at Leiden University that I thought was very interesting, because it only has a load instruction, with a very large number of registers that act as inputs or outputs for arithmetic or logical units.
    That allows the compiler to organize the code into multiple instruction pipelines, so the parallel performance can be scaled by adding additional arithmetic/logical registers to the CPU.
    I thought that was a very interesting feature of that design.
    As for SUBLEQ, it really only has utility in illustrating feasibility, similar to the original Turing machine. Sure, you can write universal code with it, but it will be a colossal pain and is probably not very fast.

  • @bogywankenobi3959
    @bogywankenobi3959 5 років тому +9

    What you are describing is NOT a RISC machine. It may well be a one-instruction machine - but it is a complex instruction.

    • @insoft_uk
      @insoft_uk 5 років тому

      Bogy Wan Kenobi, RISC means reduced instruction set computer, so I'd say Gary's use of the term RISC is correct.

    • @bogywankenobi3959
      @bogywankenobi3959 5 років тому +6

      @@insoft_uk The word "reduced" does not refer to the number of instructions. It refers to the complexity of the instructions. The instruction he describes, once it gets the full instruction from the instruction memory, is then required to go out to memory, again, twice, to get the two operands and then decide where to go next. That by definition is a complex instruction. The fact that there is only one of them is irrelevant.

    • @110110010
      @110110010 5 років тому +3

      I always read the "Reduced instruction set" in RISC as a reduced set of instructions, not a set of reduced instructions. Do you have a quote or a link that would substantiate your interpretation?

    • @TheFakeVIP
      @TheFakeVIP 5 років тому

      Bogy Wan Kenobi Surely "Reduced," refers to the instruction set, not the instruction. So, by definition, a simple instruction set is one with few instructions.

    • @bogywankenobi3959
      @bogywankenobi3959 5 років тому +1

      @@110110010 Sure. Just do a Google search for ARM or, for that matter, a YouTube search for the same thing. It is not an interpretation; it is the definition. The fact that you don't know this is testimony to either your youth or your lack of professionalism. In fact, google "arm vs. intel". As for me... I have been a computer hardware engineer for 35 years.

  • @jensmandreasen2230
    @jensmandreasen2230 5 років тому +1

    That "special subtract", given the implicit move and branch, is a complex instruction

    • @GaryExplains
      @GaryExplains  5 років тому

      Indeed it is, but nonetheless it is one instruction.

  • @loric.23
    @loric.23 3 роки тому

    At 5:56 you assume there is a 1 in memory, which cannot result from the subtract instruction itself. You need at least a second instruction to set a value in memory.

    • @GaryExplains
      @GaryExplains  3 роки тому

      Those values are loaded into the memory, just like the program itself is loaded into the memory. You don't need a second instruction.

  • @canoozie
    @canoozie 5 років тому

    There are other one-instruction sets as well. I've built a couple in FPGAs. My latest is a transport-triggered architecture, where the one instruction is a move from one register to another. All functional units (think your ALU, load/store unit, etc.) have known addresses on the bus, and the instruction might also specify a sub-address which allows the functional unit to perform multiple operations. One register connected to the functional unit will be the "trigger" register, such that whenever you schedule an instruction and it's decoded and executed, if that instruction moves data into a functional unit's trigger register, the specified operation will be performed. I.e., if I say to move the value `5` into the ALU's trigger register and specify the ALU's operation as addition, then it'll use the other input as the current value on the data bus, and the execution of the CPU happens as a side effect of moving data between registers. These are also known as exposed-datapath computers. Really neat, and they solve the VLIW register file pressure problem.

  • @ed.puckett
    @ed.puckett 3 роки тому

    Thank you, you are always thought-provoking!

  • @kalimbodelsolgiuseppeespos8695
    @kalimbodelsolgiuseppeespos8695 5 років тому +1

    Yes, but you need software cycles to simulate other instructions; in other words, you need resources to generate "more complex instructions". This is how a big instruction set, when used correctly, can run software faster and reduce its memory usage at the same clock speed.
    This is one of the reasons (along with the licensing story vs. the x86 world) you are using an ARM SoC in your phone.
    But sometimes this thing gets out of control.

    • @GaryExplains
      @GaryExplains  5 років тому +2

      Absolutely, nobody said this was efficient, this is an exercise in computer theory.

  • @webgpu
    @webgpu 5 років тому +2

    Haaa!!! That's *two* instructions in one. Not only a SUB, but also a BRANCH instruction fused into one. It is kind of a misleading video... *yet* I found the information in it AMAZING!!! (I just edited this comment to lowercase it :)

    • @GaryExplains
      @GaryExplains  5 років тому +2

      How do you define what "one instruction" means?

    • @webgpu
      @webgpu 5 років тому

      @@GaryExplains Ok, you could treat this question with two answers: Using a broad, abstract definition, it is "whatever you program the logic gates to do". In this case you could even build a "whole" program into One instruction, and give it a suitable name, like "CalculateTodaysWorldsMeanTemperature" passing as the only operand, a pointer to a table of relevant values to the calculation. The second answer could be, "one processing unit chosen from the minimal instruction set possible: , , , " these could be regarded as "one instruction" since they perform just one single, indivisible operation (explicitly given by its name). Regarding the second answer, the SUB instruction does , , and

    • @Waouben
      @Waouben 5 років тому

      The point of it is to build a general purpose processor using only one instruction. You can't add two numbers together with yours.

    • @webgpu
      @webgpu 5 років тому

      @@Waouben i don't know if i understood your question correctly, so i'll answer this way:
      I certainly can add two numbers using my four types of instructions ,,, .
      In this case, only three types of instructions are needed ,,.
      First number is in addr, second number in addr+1, result in addr+2, in z80-style:
      LD a,(addr); // load
      LD b, a; // load (reg.)
      LD a,(addr+1); // load
      ADD a, b; // logic op
      LD (addr+2), a // store

    • @Waouben
      @Waouben 5 років тому

      @@webgpu I'm talking about the CalculateTodaysWorldsMeanTemperature instruction. My point is that while you can design an instruction that can do anything you want, be it an entire program, most such instructions don't allow you to build a general-purpose processor on their own. Which means that however complex SUBLEQ is, it's still one instruction.

  • @JohnDlugosz
    @JohnDlugosz 5 років тому

    5:30 I'm interested in seeing how you can get a "1" from scratch. All memory locations contain random bits at the start. Generate the standard Fibonacci sequence in consecutive memory locations.
    How few primitive operations can you get away with, if SUBTRACT is not sufficient?

    • @GaryExplains
      @GaryExplains  5 років тому

      The constants are loaded into memory as part of the program. Look at the example with the "7 7 7" at address 3 to 5.

    • @JohnDlugosz
      @JohnDlugosz 5 років тому

      @@GaryExplains I know. But I'm interested in figuring out the minimum instructions needed when memory is randomized at startup. Having a pre-loaded constant as part of the program is no different than having another kind of instruction - you are not, in fact, providing only a list of one kind of thing. Relying on numbers being part of the encoding of other instructions, I think, is cheating; or at least it is another "thing" that you are counting on. Try it with a Harvard architecture, or an encrypted instruction stream, so you can't depend on some instruction containing a specific number within its encoding.
      The idea of "minimal" is a bit more slippery and subtle than "one instruction" leads us to believe. That one instruction does different things, conditionally. Imagine a VLIW control word - that's one instruction, a uniform format for everything the CPU does. But it is far from minimal.
      Your "one instruction" combines the actions of three simpler instructions, so it is a chameleon if you ignore the effects other than the one you want. Why not an instruction that does 20 different things, all of which can be easily ignored? That is, the fact that there is only one instruction is not a true measure of austerity.
      It also relies on self-modifying code, and on the ability to read constants as well as abstract instructions.

    • @GaryExplains
      @GaryExplains  5 років тому

      @@JohnDlugosz If you allow 1 to be available at a hardcoded memory address (which would be a simple bit of electronics and nothing to do with RAM etc) then the answer is still 1 subleq instruction because you can create every other number by incrementing from 0.

  • @thegeniusfool
    @thegeniusfool 3 роки тому

    This is both a RISC and a CISC architecture, since the latter is about the complexity of the individual operations, and I consider this minus-one to be quite complex (involving two addresses and a conditional jump).
    Additionally, it runs counter to the idea behind RISC - to only have simple instructions - but since the formal definition of RISC deals primarily with the cardinality of the instruction set, it is indeed RISC.

  • @ForboJack
    @ForboJack 5 років тому +2

    My first thought was to use add, but subtract is far better I guess :D

  • @TropicalCoder
    @TropicalCoder 3 роки тому +1

    Well, that's hardcore RISC. I didn't know it was possible to have just one instruction. I heard in my youth that the simplest machine required a NAND, a NOT, and a XOR, or something like that. I don't remember now. You should have demonstrated how you can do bit manipulations such as these. Absolutely necessary!

    • @michaelbauers8800
      @michaelbauers8800 2 роки тому +1

      They may have been explaining that all logic can be built from a small number of logic gates. Apparently, you can build all digital logic from NAND, or NOR gates. In other words, choose one of the two, and then use those gates to build your logic. Sounds inefficient though.

  • @profounddevices
    @profounddevices 2 роки тому

    I really like this! It can be very efficient if pipelined and the jumps are better managed.

  • @karlscheel3500
    @karlscheel3500 5 років тому

    Binary subtraction is actually a form of *addition.* It works like this:
    1. Each bit of the _subtrahend_ (i.e., the number that is to be subtracted from the other number, called the _minuend_) is _inverted_ (i.e., if it's a one, it's changed to a zero, and vice versa), giving the _ones' complement_ of the subtrahend.
    2. The ones' complement is then incremented by one to give the _two's complement_ of the subtrahend.
    3. The two's complement is then *added* to the minuend to give the result, which includes a carry (i.e., an extra one to the left of the result, which is always discarded).
    At the hardware level, computers always add; they *never* subtract.
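
    The three steps above, worked through in Python for 8-bit values (a numeric illustration, not a hardware description):

      def sub_via_add(minuend, subtrahend, bits=8):
          mask = (1 << bits) - 1
          ones_comp = subtrahend ^ mask             # step 1: invert every bit
          twos_comp = (ones_comp + 1) & mask        # step 2: add one
          return (minuend + twos_comp) & mask       # step 3: add, then discard the carry

      print(sub_via_add(200, 77))   # 123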

    • @GaryExplains
      @GaryExplains  5 років тому

      There is also a version based on addition, called Addleq, but according to Oleg Mazonka it is much harder to program in Addleq than in Subleq.

    • @karlscheel3500
      @karlscheel3500 5 років тому

      @@GaryExplains We are talking about two different things. You are talking about microcode, and I am talking about what happens at the hardware level (i.e., logic gates). Computers _never_ subtract at this level; there's no such thing as a binary subtracter circuit. An adder circuit is used for subtraction. And performing the operation of negation on any positive binary number actually means taking the two's complement of that number.

  • @relic985
    @relic985 3 роки тому +1

    I'm guessing there's no silicon out there that is actually an OISC, right?

  • @danthe1st
    @danthe1st 3 роки тому

    I would guess... NAND, where you specify the operand locations, the destination and the bit length

  • @alenkruth
    @alenkruth 5 років тому

    It's been a while since I've watched Gary's videos. Good to be back 😌😌

  • @tonifasth
    @tonifasth 5 років тому

    This was very cool! Thanks for sharing.

  • @lottievixen
    @lottievixen 3 роки тому

    I thought this might be BBJ (byte byte jump; it's listed among the esoteric languages) but I've realised I might be misremembering it.
    Thank you for a tech video without binary; the abstraction is nice

  • @_dot_
    @_dot_ 3 роки тому +1

    You can add one by storing -1 instead of 1 in your "inc1" memory address, taking away two steps
    a = a - inc1
    Which is equivalent to
    a = a - (-1) = a + 1
    Let me know if this is slower than the example in the video for some reason

    • @WilliamLDeRieuxIV
      @WilliamLDeRieuxIV 3 роки тому +1

      6:01 -- This is what is being done:
      (1) *inc1 = 1*
      (2) *z = z - inc1 ; z = 0 - 1 = -1*
      (3) *a = a - z ; a = 7 - -1 = 8*
      In (1), if inc1 is set to -1, it allows skipping (2), where z is being set to -1.
      However, given the way this system works (with so many steps needed because of the lack of proper instructions):
      if you did this 100 times, you would do 200 sub-steps instead of 300 - a saving of 100 sub-steps.
      E.g. 3N - N = 2N; you would save at most N sub-steps.
      Performance-wise this would still be O(N), even though using inc=-1 instead of inc=1 will save you N steps.
      It is RISC-style, so clearly all the optimization (if any) would be done by the compiler.
      However, the compiler doesn't really have much to work with in terms of optimization (normally a register set along with load/store instructions, for starters).
      Because of that, the boost in performance from the above will be largely insignificant.

    • @GaryExplains
      @GaryExplains  3 роки тому

      Yes, that would be more efficient, but the point of the demonstration was to show that if you want to add a number, in this case 1, then you start with the number you want to add and then turn it negative. It would be the same if you wanted to add 7: you want +7, but to do that you need to make it -7. If you were adding a + b then you would need the negative value of b, etc.

    • @_dot_
      @_dot_ 3 роки тому

      @@GaryExplains I see. I guess it was the fact that you referred to it as a constant that threw me off. Thanks for the answer though and awesome video!

  • @nimrodlevy
    @nimrodlevy 5 років тому +1

    Damn! Thanks for this video, amazing video, like always, thanks for the time and effort!

  • @sadikwahid5554
    @sadikwahid5554 5 років тому

    Gary... You are awesome. Good explanation.

  • @timothycarpenter9947
    @timothycarpenter9947 3 роки тому

    This SUBLEQ machine is more of a one-instruction CISC computer. RISC architecture is more about getting things done in one or the fewest possible clock cycles with optimized hardware.

  • @SouravTechLabs
    @SouravTechLabs 5 років тому +1

    Amazing video!

  • @ivocanevo
    @ivocanevo 3 роки тому

    Okay, now set the initial conditions for this code to generate the entire universe.