HK Lam
  • 76
  • 124 376

Videos

Particle Swarm Optimization - Part 4: Velocity Components
278 views · 1 year ago
This video is about Particle Swarm Optimization - Part 4: Velocity Components
Particle Swarm Optimization - Part 3: Local Best PSO
619 views · 1 year ago
This video is about Particle Swarm Optimization - Part 3: Local Best PSO
Particle Swarm Optimization - Part 2: Global Best PSO
829 views · 1 year ago
This video is about Particle Swarm Optimization - Part 2: Global Best PSO
Particle Swarm Optimization (PSO) - Part 1: Introduction
928 views · 1 year ago
This video is about Particle Swarm Optimization (PSO) - Part 1: Introduction
Clustering for Unknown Number of Clusters - Unsupervised Learning and Clustering
124 views · 1 year ago
This video is about Clustering for Unknown Number of Clusters - Unsupervised Learning and Clustering
Competitive Learning - Unsupervised Learning and Clustering
320 views · 1 year ago
This video is about Competitive Learning - Unsupervised Learning and Clustering
Hierarchical Clustering - Unsupervised Learning and Clustering
75 views · 1 year ago
This video is about Hierarchical Clustering - Unsupervised Learning and Clustering
Iterative Optimisation - Unsupervised Learning and Clustering
301 views · 1 year ago
This video is about Iterative Optimisation - Unsupervised Learning and Clustering
Fuzzy K-Means Clustering - Unsupervised Learning and Clustering
3.6K views · 1 year ago
This video is about Fuzzy K-Means Clustering - Unsupervised Learning and Clustering
K-Means Clustering - Unsupervised Learning and Clustering
137 views · 1 year ago
This video is about K-Means Clustering - Unsupervised Learning and Clustering
Unsupervised Learning and Clustering - Introduction
67 views · 1 year ago
This video is about Unsupervised Learning and Clustering - Introduction
Intelligent Control and Machine Learning - Concepts and Applications from Engineering Mind by HK Lam
295 views · 2 years ago
This video is about Intelligent Control and Machine Learning - Concepts and Applications from Engineering Mind by HK Lam. This was a presentation given at the Interdisciplinary Workshop on Emerging New Approach Methodologies (NAMs), held on 24th March. #machinelearning #controltheory #fuzzylogic
Ant Colony Optimization - Part 6: Ant Colony System (ACS)
3K views · 2 years ago
This video is about Ant Colony Optimization - Part 6: Ant Colony System (ACS)
Ant Colony Optimization - Part 5: Example - Traveling Salesman Problem (TSP)
26K views · 2 years ago
This video is about Ant Colony Optimization - Part 5: Example - Traveling Salesman Problem (TSP)
Ant Colony Optimization - Part 4: Ant System (AS)
2K views · 2 years ago
Ant Colony Optimization - Part 4: Ant System (AS)
Ant Colony Optimization - Part 3.3: Simple Ant Colony Optimization (SACO) - Detailed Explanation
1.9K views · 2 years ago
Ant Colony Optimization - Part 3.3: Simple Ant Colony Optimization (SACO) - Detailed Explanation
Ant Colony Optimization - Part 3.2: Simple Ant Colony Optimization (SACO) - Example
1.6K views · 2 years ago
Ant Colony Optimization - Part 3.2: Simple Ant Colony Optimization (SACO) - Example
Ant Colony Optimization - Part 3.1: Simple Ant Colony Optimization (SACO)
3.8K views · 2 years ago
Ant Colony Optimization - Part 3.1: Simple Ant Colony Optimization (SACO)
Ant Colony Optimization - Part 2: Stigmergy and Artificial Pheromone
1.6K views · 2 years ago
Ant Colony Optimization - Part 2: Stigmergy and Artificial Pheromone
Ant Colony Optimization - Part 1: Introduction
3K views · 2 years ago
Ant Colony Optimization - Part 1: Introduction
Evolution Strategy - Part 5 - Crossover Operators
553 views · 2 years ago
Evolution Strategy - Part 5 - Crossover Operators
Evolution Strategy (ES) - Part 4 - Selection Strategy
600 views · 2 years ago
Evolution Strategy (ES) - Part 4 - Selection Strategy
Evolution Strategy (ES) - Part 3 - (μ+1)-ES
828 views · 2 years ago
Evolution Strategy (ES) - Part 3 - (μ+1)-ES
Evolution Strategy (ES) - Part 2 - (1+1)-ES
1.3K views · 2 years ago
Evolution Strategy (ES) - Part 2 - (1+1)-ES
Evolution Strategy (ES) - Part 1 - Introduction to Evolution Strategy
1.9K views · 2 years ago
Evolution Strategy (ES) - Part 1 - Introduction to Evolution Strategy
Continuous Genetic Algorithm - Part 2
896 views · 2 years ago
Continuous Genetic Algorithm - Part 2
Continuous Genetic Algorithm - Part 1
1.2K views · 2 years ago
Continuous Genetic Algorithm - Part 1
Central Limit Theorem: Verification using Exponential Distribution with mu = 5
206 views · 2 years ago
Central Limit Theorem: Verification using Exponential Distribution with mu = 5
Central Limit Theorem: Verification using Geometric Distribution with p = 0.8
147 views · 2 years ago
Central Limit Theorem: Verification using Geometric Distribution with p = 0.8

COMMENTS

  • @r0cketRacoon
    @r0cketRacoon 20 days ago

    0vO, with the same votes, can we just pick the one that is farthest from the hyperplane?

  • @benson4225721
    @benson4225721 2 months ago

    Good explanation, thank you !!😀

  • @sagnikdash9060
    @sagnikdash9060 2 months ago

    Thank you sir

  • @HM-wm7xk
    @HM-wm7xk 2 months ago

    Hi, I can see this Travelling Salesman Problem is symmetric, i.e., the distance matrix is symmetrical: the distance travelled between two points is the same in both directions. In the same way, can I propose that the pheromone between two points in this problem be the same, i.e., that the pheromone matrix is also symmetrical in the Travelling Salesman Problem? Would this affect the solution?

  • @PivotStickmanAnimations
    @PivotStickmanAnimations 3 months ago

    always nice to learn a thing or two from elon musk.

  • @ilhamramadhan540
    @ilhamramadhan540 5 months ago

    Good video. Where are your sources from? So I can counter my prof the next time his ass asks where I get my algorithm lol

  • @hawaiicashew3237
    @hawaiicashew3237 5 months ago

    Wow, that really clarifies a lot of things in one place. Very grateful!

    • @hklam2368
      @hklam2368 5 months ago

      Thank you. 😀

  • @augusto712
    @augusto712 7 months ago

    Amazing video, thanks for sharing this valuable content. What are the advantages of the encoding/decoding process? Performance?

  • @shivambhushan5080
    @shivambhushan5080 8 months ago

    Sir, please explain whether the original path chosen for each ant at the beginning is determined through transition probabilities (and if so, how), or whether it is random.

  • @suleymanzerguine9920
    @suleymanzerguine9920 9 months ago

    Hello sir, how can we define the optimal theta?

  • @yixingzhang2462
    @yixingzhang2462 9 months ago

    Many thanks for the six-part video series. Great job!!! But I have a question: could you tell me what the difference is between the Part 6 (ACS) and Part 5 (ACO) videos?

    • @yassenredwan8297
      @yassenredwan8297 5 months ago

      Based on my understanding, ACO uses the pheromone-bias method, which includes alpha and beta, so ants are biased towards paths that other ants used. We can still control that: with a larger alpha we favour old experience (exploitation), while with a larger beta we bias towards exploration. We also have three types of updates: 1) pheromone placement, 2) pheromone evaporation (global update), and 3) a path-quality update Q/L. In ACS there is no alpha parameter to control the pheromone from our end, but we have an additional type of pheromone update, a local update. The local update is applied to every ant after visiting nodes A, B and reduces the amount of pheromone on that edge to encourage other ants to explore another path; you can think of it as ants saying "Ay YO, we tried this path, try a different one". So basically we can say that in ACS the pheromone evaporates twice each iteration (local update and global update), whereas in ACO it evaporates once (global update). Hope this helps.
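
A minimal Python sketch of the two ACS-style pheromone updates described above (local and global); the parameter names phi, rho, tau0, Q and the list-based tour representation are illustrative assumptions, not code from the videos:

def local_update(tau, edge, phi=0.1, tau0=0.01):
    # ACS local update: applied right after an ant crosses an edge; the pheromone
    # is pulled back towards tau0, encouraging other ants to explore elsewhere.
    i, j = edge
    tau[i][j] = (1 - phi) * tau[i][j] + phi * tau0
    tau[j][i] = tau[i][j]  # assuming a symmetric TSP

def global_update(tau, best_tour, best_length, rho=0.1, Q=1.0):
    # ACS global update: once per iteration, the edges of the best tour
    # evaporate and receive a deposit proportional to tour quality Q/L.
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i][j] = (1 - rho) * tau[i][j] + rho * Q / best_length
        tau[j][i] = tau[i][j]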

    • @yassenredwan8297
      @yassenredwan8297 5 months ago

      @yixingzhang2462

  • @amirhosseinhaja1601
    @amirhosseinhaja1601 9 months ago

    Hi, it's great. How can we access your notes?

  • @securityK
    @securityK 10 months ago

    Support!

  • @aminuabdulsalami4325
    @aminuabdulsalami4325 10 months ago

    Awesome content !!! Thank you

  • @m0elj0n0
    @m0elj0n0 11 months ago

    Prof Lam, at 11:50 for the AS method: why are the contributions from ants 1, 2 and 5 equal to 1/15? Would you please elaborate on this? Should they be the same as in SACO (1/35, 1/55, 1/40)? Thank you.

    • @hklam2368
      @hklam2368 11 months ago

      Thanks for your comment. The contributions from ants 1, 2 and 5 are 1/15 because they follow the ant-quantity AS contribution rule given at the bottom right-hand side of the slide at 11:50. The update rule is Q/d_{ij}(t).
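
For illustration, a minimal sketch of the two deposit rules being contrasted here, assuming Q = 1 and that 15 is the edge length behind the 1/15 contribution (the SACO values 1/35, 1/55, 1/40 would then correspond to whole tour lengths of 35, 55 and 40):

def ant_quantity_deposit(d_ij, Q=1.0):
    # Ant-quantity AS rule: the deposit depends only on the edge length d_ij(t),
    # so every ant crossing that edge contributes the same amount.
    return Q / d_ij

def saco_deposit(L_k, Q=1.0):
    # SACO-style rule: the deposit depends on the ant's whole tour length L^k,
    # so each ant's contribution differs.
    return Q / L_k

print(ant_quantity_deposit(15))                  # 0.0666... = 1/15
print([saco_deposit(L) for L in (35, 55, 40)])   # [1/35, 1/55, 1/40]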

  • @dylanmortimer5815
    @dylanmortimer5815 1 year ago

    Super helpful video, thanks heaps : )

  • @iscariot2506
    @iscariot2506 1 year ago

    I did not understand: are the hidden-layer input weights "c" and the centroids we choose "c" the same "c"?

  • @ahmadaskar3360
    @ahmadaskar3360 1 year ago

    How did you manage to put on ads while you only have 665 subscribers?

  • @lakshminarayanarompicharla6693

    Professor, please help with stability analysis of a closed-loop T-S fuzzy system. Please provide your email ID.

  • @Mango-ej9ij
    @Mango-ej9ij 1 year ago

    Dear Professor, could you please teach something about T-S fuzzy systems?😍

  • @anonymousvevo8697
    @anonymousvevo8697 1 year ago

    Thank you professor

    • @hklam2368
      @hklam2368 1 year ago

      Thank you.

    • @anonymousvevo8697
      @anonymousvevo8697 1 year ago

      @@hklam2368 Can I contact you regarding this presentation? There is a point I didn't understand and I'm working on an AI project. Thanks

  • @tiigahwonbod8167
    @tiigahwonbod8167 1 year ago

    How can you check the convergence of this method?

    • @hklam2368
      @hklam2368 1 year ago

      I didn't check the convergence. When applying the algorithm, if it diverges, you could stop the algorithm and restart from another initial position.

  • @tiigahwonbod8167
    @tiigahwonbod8167 1 year ago

    What are the stopping criteria of this method?

    • @hklam2368
      @hklam2368 1 year ago

      The stopping criteria could be: 1) the maximum number of iterations is met, 2) the solutions at the vertices are very close to each other, 3) the best vertex has not changed much over the past n iterations.
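
A minimal sketch of such checks for a simplex-based search such as Nelder-Mead; the tolerance names, window size and data layout are illustrative assumptions, not the video's implementation:

def should_stop(simplex_values, best_history, iteration,
                max_iter=200, f_tol=1e-6, best_tol=1e-6, window=5):
    # simplex_values: objective values at the current simplex vertices.
    # best_history: best objective value recorded at each iteration so far.
    if iteration >= max_iter:                              # criterion 1
        return True
    if max(simplex_values) - min(simplex_values) < f_tol:  # criterion 2
        return True
    if len(best_history) > window and \
            abs(best_history[-1] - best_history[-1 - window]) < best_tol:  # criterion 3
        return True
    return False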

  • @ayoubsbai6339
    @ayoubsbai6339 1 year ago

    May God protect you, my friend

  • @adilfarooq1430
    @adilfarooq1430 1 year ago

    best explanation of multiclass classification using SVM, thanks Prof. Lam.

  • @munshifahimuzzaman7275
    @munshifahimuzzaman7275 1 year ago

    Looks like not a lot of people study these though

  • @munshifahimuzzaman7275
    @munshifahimuzzaman7275 1 year ago

    Amazing resource, really!

  • @258lan8
    @258lan8 1 year ago

    Great video, thank you for sharing it

  • @MSCMSUGANDHAPRIYA
    @MSCMSUGANDHAPRIYA 1 year ago

    What is the end of this method? How do I get to know the answer?

    • @hklam2368
      @hklam2368 1 year ago

      There should be a stopping criterion, e.g., the average change of the best vertices in the past 5 iterations is within a user-specified bound. If this criterion is met, the best vertex obtained in the last iteration is taken as the answer (solution).

  • @maulberto3
    @maulberto3 1 year ago

    Hi, you have to do NES and CMA-ES, that'd be perfect.

  • @maulberto3
    @maulberto3 1 year ago

    beautifully explained, thanks.

  • @maulberto3
    @maulberto3 1 year ago

    Hi, awesome lectures, going through them, quick and dirty nicely explained, thanks.

  • @martinh9099
    @martinh9099 1 year ago

    Excellent video. One question though: on the mobile robot, doesn't the error angle have a polarity as well as a magnitude?

  • @triton62674
    @triton62674 1 year ago

    Thanks for the video, well explained!

  • @Darklaki1
    @Darklaki1 1 year ago

    thank you sir

  • @juanconstantine9741
    @juanconstantine9741 1 year ago

    Your Nelder-Mead algorithm works very well. Which sources (papers, books) did you use for this video? Thank you

  • @jds1943
    @jds1943 1 year ago

    Could you upload the slides for the module if possible, please?

  • @jds1943
    @jds1943 1 year ago

    Would it be possible for you to upload the tutorial questions and solution pdfs for this module in a Google drive folder and share the link?

  • @johnjohnson5857
    @johnjohnson5857 1 year ago

    This is very well done. Thank you for the clear explanation.

  • @brandonteller5360
    @brandonteller5360 1 year ago

    This is awesome, but I have a question: on the slide at 3:50, what does lambda[i] represent?

    • @hklam2368
      @hklam2368 1 year ago

      It is the Lagrange multiplier for the i-th sample (or support vector).
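
For context, a minimal sketch of where the Lagrange multipliers appear at prediction time in a kernel SVM; the RBF kernel choice and the variable names are illustrative assumptions, not the notation from the slide:

import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    # Gaussian (RBF) kernel between two feature vectors.
    return np.exp(-gamma * np.linalg.norm(np.asarray(x) - np.asarray(z)) ** 2)

def svm_decision(x, X_sv, y_sv, lam, b, kernel=rbf_kernel):
    # f(x) = sum_i lam_i * y_i * K(x_i, x) + b.  Only samples with lam_i > 0
    # (the support vectors) contribute, which is why lambda[i] is attached to
    # the i-th training sample.
    return sum(l_i * y_i * kernel(x_i, x)
               for l_i, y_i, x_i in zip(lam, y_sv, X_sv)) + b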

    • @brandonteller5360
      @brandonteller5360 1 year ago

      @@hklam2368 Thank you, I am taking Machine Learning at Cal State University and the material is much less dense than yours, so I was struggling to follow, but now I see! I appreciate such a high-level lesson!

    • @hklam2368
      @hklam2368 1 year ago

      @@brandonteller5360 Thank you, 😀

  • @alirad8996
    @alirad8996 1 year ago

    thank you

  • @mohamedhabas7391
    @mohamedhabas7391 1 year ago

  • @zaramlslamalislam4614
    @zaramlslamalislam4614 1 year ago

    Hi sir, how can I download the slides? The link isn't active.

    • @tsuslt
      @tsuslt 7 months ago

      nms.kcl.ac.uk/hk.lam/HKLam/images/HKLam/presentations/Interval%20Type-2%20Fuzzy%20System%20and%20its%20Applications.pdf

  • @hildur7168
    @hildur7168 1 year ago

    Very helpful!

  • @Lux1431996
    @Lux1431996 1 year ago

    Hi from TU Kaiserslautern, Germany. Writing on my diploma thesis. Very thankful for your videos! Pretty much some of the best explanations to actually be able to write code for yourself at home in any language. Thank you!

  • @arunbali7480
    @arunbali7480 1 year ago

    Why do we use an RBFNN to approximate unknown functions in nonlinear systems?

  • @sharmilavelu1713
    @sharmilavelu1713 1 year ago

    thank you sir for giving a clear explanation

  • @gustavorolim5706
    @gustavorolim5706 1 year ago

    Great video. It helped a lot to implement the algorithm in the context of machine scheduling.

  • @aliyousif9319
    @aliyousif9319 2 years ago

    Thanks