Doctor, you really explain things excellently. Thank you for this video and for coming up with this algorithm.
Thank you sir for updating the channel
No worries. Sorry for the long wait :)
Thanks a lot! Best regards, Dr.
Clear explanation, cool stuff. Thank you, sir.
genuine regards from Iran
Thanks a lot. I'm now using this in part of my research.
Thank you, sir, for this... we need a mathematical model for the binary grey wolf optimization algorithm.
Great job!
Thanks. How can one understand the strengths and weaknesses of such algorithms?
Thank you very much for the brief explanation. How can we use GWO to solve the shortest path problem?
Hey, have you worked on UAVs?
Professor, thank you. I want to know how to add two variables to the main program of the grey wolf optimization method.
Is there any way to get working sample MATLAB code for GWO?
Thank you so much for coming back 😍
My email does work ❤
Thanks Kalim. Appreciate your kind message.
@@thealimirjalili You're welcome, sir ❤
@@kalim8bp Bro, can you please share sir's email ID?
Thank you, Sir. It's very interesting!! What about a fault-tolerant version of the GWO algorithm?
Thank you so much, sir, for making it simple. The slides are very insightful. If you don't mind, could you please share them?
Thank you for this presentation. We would like to know if you can do the same for the Whale Optimization Algorithm (WOA).
This is coming soon. Stay tuned
We are waiting, sir.
Thank you, sir. What bio-inspired optimization methods are used to solve the shortest path problem?
Shouldn't A be in the interval [-a,a] instead of [-2a,2a]?
Hello sir, what does the dot mean in the A and C formulas? Is it still pairwise (element-wise) multiplication or just regular multiplication?
Thank you very much sir
Thank you, sir, for your work. Can you please explain how to do the hybridization of PSO and GWO?
Thank you for the explanation, sir, but I still don't understand how the omega wolves or the other search agents update their positions. Thank you.
Hello sir, can you please guide me on how to apply GWO to a model in Python? I have a model and data split into train, test, and validation sets.
Shouldn't the acceleration vector A be in the range [-a, a] rather than [-2a, 2a], since the random value belongs to [0, 1]?
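(For anyone else wondering the same thing: in the original GWO paper the coefficients are defined as below, where r1 and r2 are random vectors in [0, 1] and T is the maximum number of iterations, so A does indeed stay within [-a, a]:)

a(t) = 2 - 2*t/T        (decreases linearly from 2 to 0)
A = 2*a*r1 - a,  with r1 in [0, 1],  so A lies in [-a, a]
C = 2*r2,        with r2 in [0, 1],  so C lies in [0, 2]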
Thanks for the good explanation. I have a question about the position-updating formula: why do you use the prey position, i.e. X(t+1) = X_prey(t) - A*D, and not X(t+1) = X(t) - A*D?
The second is a generic form of the first.
@@thealimirjalili Sir, I don't understand it either. I've just enrolled in your Udemy course and noticed this difference between the mathematical expression you explain in the video and the code you write to visualize it. Could you please be more specific about it?
What I was trying to say is that X(t+1) = X(t) - A*D is a generic position-updating equation. X(t) can be the previous position of a solution, the position of any other wolf, or even the prey. In the theory video I cover the generic formula, but the coding is the actual implementation of GWO. A lot of people have explored other position updates as well. I hope this helps.
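For anyone above wondering how the omega wolves (the remaining search agents) actually move: in the published GWO algorithm every agent is pulled toward the alpha, beta, and delta wolves and takes the average of the three resulting positions. A minimal Python/NumPy sketch of that update follows; the variable names, population size, and the sphere test function are my own choices, not taken from the video or its code.

import numpy as np

def gwo_position_update(X, X_alpha, X_beta, X_delta, a):
    # Move one search agent toward the alpha, beta and delta wolves.
    # X: current position (1-D array); a: control parameter, decreased
    # linearly from 2 to 0 over the iterations.
    dim = X.shape[0]
    moves = []
    for leader in (X_alpha, X_beta, X_delta):
        r1, r2 = np.random.rand(dim), np.random.rand(dim)
        A = 2 * a * r1 - a           # A lies in [-a, a]
        C = 2 * r2                   # C lies in [0, 2]
        D = np.abs(C * leader - X)   # element-wise distance to this leader
        moves.append(leader - A * D)
    return sum(moves) / 3.0          # average of the three leader-guided moves

# Tiny usage example on the sphere function f(x) = sum(x_i^2)
sphere = lambda x: float(np.sum(x ** 2))
pop = np.random.uniform(-10, 10, size=(20, 5))   # 20 wolves in 5 dimensions
T = 100
for t in range(T):
    a = 2 - 2 * t / T                                        # from 2 down toward 0
    best3 = pop[np.argsort([sphere(w) for w in pop])[:3]]    # alpha, beta, delta
    pop = np.array([gwo_position_update(w, *best3, a) for w in pop])
print("best solution found:", pop[np.argmin([sphere(w) for w in pop])])

Swap in your own objective function in place of sphere; the update itself does not change.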
Hi Sir, thank you
My problem is that I don't know how to model my problem.
nice tutorial
Thank you; I have a question about the algorithm: how do you calculate the position of the prey, Xp?
Sir, I want to ask: I have certain data which I want to feed into GWO, and I want to use that data with ANFIS-GWO. Sir, where can I get these codes?
I like the video as you explained the concepts rather well, but you used the same notation for the vectors and scalars, which was confusing.
Please, could you make a course about whale optimization?
I am working on it
@@thealimirjalili Thank you
@@thealimirjalili Thank you very much it will help me a lot
It's a really good explanation of the algorithm... Could you please suggest a binary GWO, as there are many variants of binary GWO in the literature? I want to find the optimal combination of base learners for a stacking ensemble.
Sir, can we consider the fourth-best vector in GWO?
Please explain the MATLAB code of this algorithm, thank you.
Hi Abdul. The coding and more videos are available on Udemy. Udemy does not allow me to upload more than this on YouTube.
@@thealimirjalili Sir, can I get the code if I join your course on Udemy?
@@abdulmatinhublikar3050 you can find a lot of resources including the source code in my Udemy courses. But if you just want the code, it is freely available here: seyedalimirjalili.com/gwo
Using the Grey Wolf Optimizer, I want to find optimal values of two hyperparameters, context window size and embedding size (vector dimensions), for word2vec skip-gram with negative sampling. (I want to use this model to find the top 25 similar words of a token.) Any idea how to do that? Thanks in advance.
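Not an answer from the video, just a sketch of one way this is commonly wired up: treat the two hyperparameters as a 2-D continuous search space, round and clip each candidate inside the fitness function, and let GWO (for example the code linked above) minimize a score you define. The evaluate_embedding function and the bounds below are placeholders I made up for illustration; in practice it would train the skip-gram model with negative sampling and score the top-25 neighbour lists.

import numpy as np

def evaluate_embedding(window, dim):
    # Placeholder: train word2vec (skip-gram + negative sampling) with these
    # settings and return a score where lower means better top-25 neighbours.
    return abs(window - 5) + abs(dim - 200) / 100.0   # dummy value for the sketch

def fitness(position):
    # position is the 2-D real-valued vector proposed by GWO (or any optimizer)
    window = int(round(np.clip(position[0], 2, 15)))    # assumed bounds
    dim    = int(round(np.clip(position[1], 50, 400)))  # assumed bounds
    return evaluate_embedding(window, dim)

print(fitness(np.array([7.3, 180.6])))   # evaluates window=7, dim=181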
Hello. Is this same video available in Persian as well?
Excellent, Engineer! My thesis topic is also the optimization of steel moment frames using your algorithm.
Sir, can you please tell me the laptop model?!
Great job!