Testing The Train Intersection - Tesla FSD 12.3.3 Supervised

  • Published 15 Sep 2024
  • Testing the Train Intersection With Tesla FSD 12.3.3 (Supervised) - Model Y Long Range HW4

COMMENTS • 12

  • @nickmcconnell1291 5 months ago +6

    Hi! Unlike FSD 11.x, v12.x is all neural net, so the "brains" of the car can't be displayed on the screen. The display only shows data the cameras pick up that has previously been tokenized so it can be rendered correctly. In other words, the screen image may have no direct correlation to what the car's brain is actually seeing or thinking.
    But don't worry: FSD is seeing and responding to far more than its creators or you know. More than likely FSD already can, or will soon learn to, read human facial expressions and body posture and associate them with that person's intent. You may find that FSD v12 doesn't freak out around pedestrians even when they are close to the car, the way v11 did. Why? Because it can guess (predict) what the human will do next with astounding accuracy.
    How can it do this, and what is going on inside its brain? By analogy, the AI code watches videos as training material and sticks all kinds of numbers and letters into millions of cells in a huge Excel spreadsheet. These numbers mean something to the AI but are not human-readable.
    Then, based on whether it makes right or wrong decisions after its training, the trainers adjust weighting algorithms and other factors and retrain it. The AI code updates the cells of the spreadsheet with no rhyme or reason that we can know, but eventually it starts behaving correctly. It is truly a "black box": all we know are its inputs (e.g., a train crossing signal) and its outputs (e.g., whether it behaved correctly).
    So the screen really doesn't display exactly what the car's brain is seeing and responding to. You can see this quickly when the car slows down for a speed bump or avoids a pool of water, neither of which shows up on the screen.
    So the question becomes: "Can we really trust it if we don't know why it's doing what it's doing?" In response I would say, "Ye shall know them by their fruits."
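The adjust-and-retrain loop the comment describes can be sketched as a toy perceptron: opaque weights are nudged whenever the output is wrong, until behavior is correct, and the weights themselves are not human-readable. This is only an illustration of the analogy; all names and values are hypothetical, not Tesla's actual code.

```python
import random

def predict(weights, signal):
    # Weighted sum squashed to a yes/no "should I stop?" decision.
    score = sum(w * x for w, x in zip(weights, signal))
    return 1 if score > 0 else 0

def train(examples, epochs=50, lr=0.1):
    # Start from meaningless random numbers; only behavior is observable.
    weights = [random.uniform(-1, 1) for _ in range(len(examples[0][0]))]
    for _ in range(epochs):
        for signal, correct in examples:
            error = correct - predict(weights, signal)
            # Nudge each weight toward the correct behavior; the resulting
            # numbers mean nothing to a human reader.
            weights = [w + lr * error * x for w, x in zip(weights, signal)]
    return weights

# Inputs: [crossing_light_on, gate_down]; output: 1 = stop, 0 = go.
examples = [([1, 1], 1), ([1, 0], 1), ([0, 1], 1), ([0, 0], 0)]
weights = train(examples)
print([predict(weights, s) for s, _ in examples])  # stops whenever light or gate is active
```

We can only judge the trained "black box" by its inputs and outputs, exactly as the comment says: feed it a crossing signal, check whether it behaved right.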

  • @didierpuzenat7280 5 months ago

    12:08 My car was doing regenerative braking 30 years ago. It was a "Peugeot 106 Electric" (produced from 1995 to 2003) with a NiCd battery and 75 km of range. I had that EV for many years, until the Model 3 arrived in France in 2018. The French automakers (Renault, Peugeot and Citroen) made a dozen electric models in the 90s; I think they are the real precursors of EVs.

  • @GreylanderTV 5 months ago +3

    7:15 It's not learning; the cars already going through this time are enough of a visual cue for it to realize it can go through.

  • @kingkeenangaming756 4 months ago

    The semi-truck animations as the freight train, lol

  • @XeLu62 5 months ago +1

    Greetings from my 2023 Model 3 LR in Cadavedo, Asturias 🇪🇸 😊

  • @videoartsproductions1 5 months ago +1

    It can't recognize objects at times because of camera hardware limitations and environmental factors that affect the camera's ability to pick up subtle details, such as poor lighting conditions like sun glare. There is no self-learning function; everything it recognizes is preprogrammed, despite what people think. That's an internet myth. It mostly uses motion and object detection and adjusts according to those established parameters. If an object is too small for the camera to detect, it won't recognize it, like small animals crossing the road; if it can't detect them, they don't exist. Also, if areas are obscured and have no defined path, it's bound to fail, like parking lots in strip malls; I've witnessed that many times. If you take the same route in the evening closer to dusk, when lighting is more ideal, you might see certain scenarios that were not as good actually improve. Night FSD driving might even work better in certain situations. Despite the drawbacks, it's a nice novelty to have, but nothing anyone should fully trust.
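The "too small to detect" point above amounts to a minimum-size filter: anything whose image footprint is below some pixel threshold is simply invisible to the pipeline. A minimal sketch of that idea, with entirely hypothetical names and threshold values (not Tesla's actual parameters):

```python
# Hypothetical cutoff: detections smaller than ~20x20 px are ignored.
MIN_PIXEL_AREA = 400

def visible_objects(detections):
    """Keep only detections large enough for the camera pipeline to trust."""
    return [d for d in detections if d["w"] * d["h"] >= MIN_PIXEL_AREA]

frame = [
    {"label": "truck", "w": 180, "h": 120},
    {"label": "pedestrian", "w": 40, "h": 90},
    {"label": "small animal", "w": 12, "h": 9},  # 108 px^2, below threshold
]
print([d["label"] for d in visible_objects(frame)])  # the small animal is dropped
```

To the rest of the stack, the filtered-out animal never existed, which is the commenter's point.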

  • @Kaipiso 5 months ago +1

    Maybe there is a spot where you could temporarily park and time it, so you wouldn't need to do so many laps. Good test, though hard to replicate. Interesting, because FSD often squeezes into tight spots. And ignores lanes as well.

  • @nickmcconnell1291 5 months ago

    As to why the car stopped one time at the railroad crossing (6:20) even when the lane ahead was clear: there was no one in the left lane. The very next time, in this same scenario, there was a truck in the left-hand turn lane. FSD could see there was room for the truck, so there must be room for it after crossing the railroad track; keying off that truck, it went ahead across the track. When there was no car there, it couldn't size up how large the gap was visually, so it played it safe and didn't cross the tracks.
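The commenter's hypothesis, that a vehicle already occupying the far gap serves as a visual yardstick, and that with no reference the safe default is to wait, can be sketched as a toy rule. All names and lengths here are hypothetical illustrations of the comment, not Tesla's actual logic:

```python
OWN_LENGTH_M = 4.8  # a Model Y is roughly this long

def should_cross(reference_vehicle_length_m):
    """Cross only if a visible reference vehicle proves the gap is big enough."""
    if reference_vehicle_length_m is None:
        # No visual yardstick in the far lane: play it safe and wait.
        return False
    # If a vehicle at least our length fits in the gap, we fit too.
    return reference_vehicle_length_m >= OWN_LENGTH_M

print(should_cross(None))  # empty lane, no way to size the gap: wait
print(should_cross(6.5))   # a truck fits beyond the tracks, so we fit: go
```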

  • @oothewind 5 months ago

    Very nice, keep it up!

  • @cafl9844 5 months ago +1

    I still don't believe Tesla will take on full responsibility!

    • @jessestone117 5 months ago +1

      The robotaxi (to be revealed on 8/8) will have no steering wheel. Surely then they will need to take liability, no?

    • @TheSpartan3669 3 months ago

      @jessestone117 It will be revealed but never implemented. You don't need to take liability for something that doesn't exist.