"Do not forsake wisdom, and she will protect you; love her, and she will watch over you. Wisdom is supreme; therefore get wisdom. Though it cost all you have, get understanding." - Proverbs 4:6-7. Thank you Stanford.
Interesting. Lost of concern about bias, but then the root vulnerability of bias is found in modeling. If you want a specific outcome (a bias on equity versus equality, for example), model it and everything will be based on that. GIGO.
Can't we just find the max lenght of the two strings,in this case the max length will be of string 2 which is "The Cats" then use the LCS Algo using the DP(recursion) which returns the longest common subsequence and then substract it from the max length of the string. Can we approach this way someone please look into this!
I think doing this by LCS would be easy,First we find the max length of the two strings: int max(str1,str2){ s1=sizeof(str1); s2=sizeof(str2); if(s1>s2){ max=s1; } else{ max=s2; } return(max); } int LCS(m,n){ if(m==0) return(n); if(n==0) return(m); else{ if(s[m]==t[n]) return(1+LCS(m-1,n-1)); else a=min(LCS(m-1,n),LCS(m,n-1)); return(a); } } Finally return(max-LCS(m,n)) This way we can find out the minimum edit distance between the two strings. NOTE -> We have not consirdered the space while calculating the max! Please do correct if I am wrong anyone??
You check the cache FIRST before running all the computation "if (m,n) in cache => return cache(m,n)" lines at the top before everything else. So basically if the result is already in the cache then there is no need to run 3 computations again, just return the result
Such an amazing session. But i cant understand as to why eta is used in generating new value of w that too without conditions. Can someone clear this up. Would be much help
That was a bit quick right. 😅 If my math serves me right, eta is the value by which you jump after each iteration. Almost the same as the learning rate in which is in alot of ai stuff. I'm probably butchering the explanation. But all you need to know is that it is a parameter you play around with in these types of models and the lower it is the longer it takes for the model to reach the minimum and vice versa
Here the problem is relatively simpler i mean the graph is simple. There is just one minimum. In case where we have functions where there are more than 1 minimum, the slope is flat or there is a narrow pit in a graph, it becomes essential that we control the step size by which we decrease the gradient after each iteration otherwise we might miss the minimum. If we decrease the starting point everytime with a larger value we are decending down the graph too fast and at some point it will skip the minimum point and would never converge. Also if there at any point in graph a plateau then a very small step size would believe that to be minimum as it would never be able to cross it in such small iterations. So we play around with this value to get desired result and to reduce the error in order to have better predictions.
It is amazing that we are getting this knowledge for free!
It'll be even more amazing when we have our own private AI teachers in the near future
"Do not forsake wisdom, and she will protect you; love her, and she will watch over you. Wisdom is supreme; therefore get wisdom. Though it cost all you have, get understanding." - Proverbs 4:6-7. Thank you Stanford.
Participating in Stanford classes for free! Thank you so much.
seeing this in 2023 is quite interesting. yay baby
I finished the first class today!
I'm trying to get into a university for ai engineering and this course is just what I needed!
Hey, I haven't watched this course yet. Does it require prior knowledge of CS or coding?
@@oanhhoang7047 It's good (and recommended) to know some coding, but you can get through without prior CS knowledge
@@oanhhoang7047 not really
The real course begins at 4:52, with the origins of AI.
Excellent teacher! Enjoyable to listen to.
Ikr I’d be a Harvard graduate if all my teachers taught like him.
Thanks a lot indeed for sharing all this knowledge!
By learning all these videos, can we become artificial intelligence engineers?
Lecture begins at 2:45
This is amazing thank you!!! So refreshing and so unique.
It's actually hard for a beginner, but it's amazing
Agreed man, it took me a few hours to get this code straight
Wow, this is gold!!!❤
Interesting. Lots of concern about bias, but the root vulnerability to bias is found in the modeling. If you want a specific outcome (a bias toward equity versus equality, for example), model it and everything will be based on that. GIGO.
Thanks for sharing... I loved these classes and the teachers' way of teaching
Rewarding Content!!
24:48 I thought the dead silence after professors giving a question won't happen in stanford courses 🤣
Excellent lecturer
Thank you!
Glad this is difficult to learn. Means there will be few that get into it. Which means more 💲💵. At least for a decent period of time
Thank you for your excellent course, but how can I access the homework? I want to do it myself for practice and better learning
dorsa ❤
Writing the code for a demo live in class is baller.
Woah! This is so interesting
Can't we just find the max length of the two strings (in this case the max length is that of string 2, "The Cats"), then use the LCS algorithm with DP (recursion), which returns the longest common subsequence, and then subtract it from the max length of the string? Can we approach it this way? Someone please look into this!
I think doing this via LCS would be easy. First we find the max length of the two strings:

#include <string.h>   /* for strlen */

int maxLen(const char *str1, const char *str2) {
    /* strlen, not sizeof: sizeof on a pointer gives the pointer size, not the string length */
    int s1 = strlen(str1);
    int s2 = strlen(str2);
    return (s1 > s2) ? s1 : s2;
}

/* longest common subsequence of str1[0..m-1] and str2[0..n-1] */
int LCS(const char *s, const char *t, int m, int n) {
    if (m == 0 || n == 0)
        return 0;   /* base case: an empty prefix has nothing in common */
    if (s[m - 1] == t[n - 1])
        return 1 + LCS(s, t, m - 1, n - 1);
    int a = LCS(s, t, m - 1, n);
    int b = LCS(s, t, m, n - 1);
    return (a > b) ? a : b;   /* keep the LONGER subsequence, so max rather than min */
}

Finally, return maxLen(str1, str2) - LCS(str1, str2, strlen(str1), strlen(str2)).
This way we can find the minimum edit distance between the two strings.
NOTE -> We have not considered the space while calculating the max!
Please correct me if I am wrong, anyone?
Amazing
What are the two views?
On which platform does the code get executed?
yo thanks man for ya knowledge
I didn't understand how the cache works. Can someone explain please?
1:14:47
We use the cache after we do all the computing (after "result = min(subCost, delCost, insCost)"), so how does it benefit us?
You check the cache FIRST, before running all the computation: the "if (m, n) in cache => return cache[(m, n)]" lines go at the top, before everything else. So if the result is already in the cache, there is no need to run the 3 computations again; just return the cached result.
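To make that concrete, here is a rough sketch of the memoized version in Python, reconstructed from the names quoted in this thread (not the lecture's exact code, and the test strings are just a guess):

def editDistance(s, t):
    cache = {}  # maps (m, n) to the edit distance between s[:m] and t[:n]

    def recurse(m, n):
        if (m, n) in cache:        # check the cache FIRST...
            return cache[(m, n)]   # ...so each subproblem is computed only once
        if m == 0:
            result = n             # insert the remaining n characters
        elif n == 0:
            result = m             # delete the remaining m characters
        elif s[m - 1] == t[n - 1]:
            result = recurse(m - 1, n - 1)   # last characters match, no cost
        else:
            subCost = 1 + recurse(m - 1, n - 1)
            delCost = 1 + recurse(m - 1, n)
            insCost = 1 + recurse(m, n - 1)
            result = min(subCost, delCost, insCost)
        cache[(m, n)] = result     # store AFTER computing, check BEFORE
        return result

    return recurse(len(s), len(t))

print(editDistance("The Cat", "The Cats"))  # -> 1 (one insertion)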
Sir, where do I get a school email for the Piazza platform?
How can I get these lectures 😢
My timestamp 01:06:20
30:00
little difficult
It's hard to understand the lecture, any suggestions?
Do you know Python? Maybe it's hard because you don't know Python syntax.
@@monkmode9138 okay..👍
Thank you
23:32
18:30 Kinda cool how we can recreate intelligence, it's called making babies.
Hey, anyone in 2025? 👇 I think I'm too late to learn AI and ML to get my dream job 😢.
UGH, UNBEARABLE BLATHER
UGH, NOT OKAY. SO DISGUSTING, BLAH BLAH BLAAAAAH
Such an amazing session. But I can't understand why eta is used in generating the new value of w, and without any conditions at that. Can someone clear this up? It would be a big help
That was a bit quick, right? 😅
If my math serves me right, eta is the amount by which you jump after each iteration.
It's almost the same as the learning rate that shows up in a lot of AI stuff. I'm probably butchering the explanation.
But all you need to know is that it's a parameter you play around with in these types of models: the lower it is, the longer it takes for the model to reach the minimum, and vice versa
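If it helps, here is a minimal sketch of that update rule in Python on a made-up quadratic loss (not the lecture's example; eta just scales how far w moves against the gradient each step):

# Gradient descent on the toy loss F(w) = (w - 4)**2, whose minimum is at w = 4.
def gradient(w):
    return 2 * (w - 4)   # dF/dw

w = 0.0
eta = 0.1                # the step size / learning rate being discussed
for t in range(100):
    w = w - eta * gradient(w)   # move AGAINST the gradient, scaled by eta
print(w)                 # ends up very close to 4.0, the minimum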
Here the problem is relatively simple; I mean, the graph is simple. There is just one minimum. For functions where there is more than one minimum, where the slope is flat, or where there is a narrow pit in the graph, it becomes essential to control the step size we take down the gradient after each iteration; otherwise we might miss the minimum. If we move by a large value every time, we descend the graph too fast, and at some point we will skip over the minimum and never converge. Conversely, if there is a plateau anywhere in the graph, a very small step size would mistake it for the minimum, since such small steps would never get across it. So we play around with this value to get the desired result and to reduce the error, in order to make better predictions.
@@rolandduplessis5132 you mean the step size
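A quick illustration of that trade-off, using the same made-up quadratic loss as the sketch above (the eta values are arbitrary, chosen only to show the three behaviours described):

# Step-size trade-off on F(w) = (w - 4)**2, starting from w = 0.
def step(w, eta):
    return w - eta * 2 * (w - 4)   # one gradient descent update

for eta in (0.01, 0.1, 1.1):
    w = 0.0
    for t in range(50):
        w = step(w, eta)
    print(eta, w)
# eta = 0.01 -> w is still far from 4 after 50 steps (too slow)
# eta = 0.1  -> w is essentially 4 (converged)
# eta = 1.1  -> w blows up: every step overshoots the minimum and diverges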
WHY DON'T YOU EXPLAIN WHERE THE WORD "ALGORITHM" COMES FROM? IT'S THE SOUL OF AI!
😂😂 I am glad we have people like you in the world.
it is not relevant.