Lecture 7: CS217 Perceptron Training Algorithm & Neural Foundations | AI-ML | IIT Bombay | 2025

  • Published 31 Jan 2025
  • Welcome to Lecture 7 of the CS217: AI-ML course by IIT Bombay, delivered by Prof. Pushpak Bhattacharyya. This lecture introduces the fundamental concepts of neural computation through perceptrons and gives a rigorous mathematical proof of convergence for the Perceptron Training Algorithm.
    🔎 Topics Covered:
    Hardware-Software Correspondence in Brain Computing
    Maslow's Hierarchy and Its Relevance to AI Systems
    Historical Evolution: Symbolic AI vs Connectionist Approaches
    Perceptron Model: Basic Structure and Computation
    Boolean Function Computation using Perceptrons
    Threshold Functions and Their Limitations
    XOR Problem and Non-linear Separability
    Perceptron Training Algorithm (PTA): Step-by-Step Implementation (a minimal code sketch follows the description below)
    Convergence Theorem: Detailed Mathematical Proof
    Geometric and Algebraic Understanding of PTA
    This lecture provides essential foundations in neural computation, combining theoretical rigor with practical understanding.
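    Since the description highlights a step-by-step PTA implementation, here is a minimal sketch of the training loop on the Boolean AND function (AND, being linearly separable, is a standard choice; the lecture's specific examples, variable names, unit learning rate, and epoch cap here are illustrative assumptions, not the lecture's exact formulation):

    import numpy as np

    # Minimal Perceptron Training Algorithm (PTA) sketch on Boolean AND.
    # Inputs are augmented with a constant 1 so the threshold (bias) is
    # learned as an ordinary weight component.
    X = np.array([[1, 0, 0],
                  [1, 0, 1],
                  [1, 1, 0],
                  [1, 1, 1]])          # column 0 is the bias input
    y = np.array([0, 0, 0, 1])         # AND truth table

    w = np.zeros(3)                    # initial weight vector
    for epoch in range(100):           # epoch cap; PTA terminates early
        errors = 0                     # on linearly separable data
        for x_i, t in zip(X, y):
            pred = int(np.dot(w, x_i) > 0)   # step (threshold) activation
            if pred != t:
                # Classic PTA update: add the input on a missed positive,
                # subtract it on a false positive.
                w += (t - pred) * x_i
                errors += 1
        if errors == 0:                # every example classified correctly
            break

    print("learned weights:", w)

    Because AND is linearly separable, Novikoff's convergence theorem guarantees that this loop terminates: the number of weight updates is bounded by (R/γ)², where R bounds the input norms and γ is the separation margin. XOR, by contrast, admits no separating hyperplane, so the same loop would cycle forever.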
    #artificialintelligence #machinelearning #iitbombay #neuralnetworks #perceptron #pta #ai #ml #computerscience #cs217 #aicourse #symbolicai #iitb #perceptrontraining #linearseparability #maslowhierarchy #braincomputing
