00:46 last lecture review
05:55 proof of relaxation correctness
17:36 DAG example with neg. edges (using topological sort)
27:43 Dijkstra demo ("greedy gravity")
33:29 Dijkstra pseudo-code
41:10 example
44:50 time complexity
27:41 Dijkstra's Algorithm
Naineel Shah Balls!!!
Literally...
balls hanging.
eyyy that's why I'm not in MIT.
Naineel Shah thank you
thank you man
I'd still recommend watching the whole video to understand how the algorithm actually works, instead of just writing code; if all you want is code, Stack Overflow is a better option.
Had the same lecture at my state university; it is very evident that MIT has the best professors. He was clear and very understandable, and made a complex topic very easy to pick up.
The mechanical demonstration is exactly the kind of thing you go to school to learn. Lovely demo. Such a cool way to think about the algorithm
I was working on my project on Dijkstra's algorithm and suddenly found this. Amazing content from MIT. Thank you MIT, I will explore and try to learn as much as I can ☺️
Love the mechanical demonstration!
Headline news for Facebook: Researchers propose a new way of calculating shortest distance but their database literally has 6B marbles :)
Can you imagine a classroom lecture being broadcast to an audience of 85k? These are the times we live in.
TOTALLY!
Corona says it can very well imagine that
Welcome to 2020
I see a lot of folks like numbers.
250k and increasing!
A greedy algorithm is an algorithmic paradigm that follows the problem solving heuristic of making the locally optimal choice at each stage[1] with the hope of finding a global optimum. In many problems, a greedy strategy does not in general produce an optimal solution, but nonetheless a greedy heuristic may yield locally optimal solutions that approximate a global optimal solution in a reasonable time.
For example, a greedy strategy for the traveling salesman problem (which is of a high computational complexity) is the following heuristic: "At each stage visit an unvisited city nearest to the current city". This heuristic need not find a best solution, but terminates in a reasonable number of steps; finding an optimal solution typically requires unreasonably many steps. In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
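The nearest-neighbor heuristic described in that quote fits in a few lines of Python. This is a minimal sketch; the distance matrix below is made up purely for illustration:

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP tour: repeatedly visit the closest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        current = tour[-1]
        # Locally optimal choice: the nearest unvisited city.
        nxt = min(unvisited, key=lambda c: dist[current][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# 4 cities; dist[i][j] = distance from city i to city j (symmetric here).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(nearest_neighbor_tour(dist))  # [0, 1, 3, 2]
```

As the quote says, the tour it returns is fast to compute but not necessarily optimal.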
Again, great lecture, but too tight with the camera angles: we generally need to see two backboards at once, not a closeup of the back of his head.
Just open the lecture notes pdf from the website
@@junweima where plz
@@junweima But they ask for a username and password
@@dipakraut6058 ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/lecture-notes/
@@junweima Thanks for the suggestion, it helped me !
This guy is a sheer genius. He puts in so much effort to make students understand what the thing is and the motivation behind it. Really amazing!
At 17:50: algorithm for DAGs (which can have negative edges): first a topological sort, then one pass of relaxation from the source node.
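That two-step recipe (topological sort, then a single relaxation pass) might be sketched like this; the edge list and the Kahn's-algorithm topological sort are my own illustrative choices, not the lecture's code:

```python
from collections import defaultdict

def dag_shortest_paths(edges, source):
    """Shortest paths in a DAG: topological sort + one relaxation pass.
    Handles negative edge weights (a DAG has no cycles at all).
    edges: list of (u, v, weight) tuples."""
    graph = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm for a topological order.
    order = []
    stack = [n for n in nodes if indeg[n] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # Relax every outgoing edge exactly once, in topological order.
    d = {n: float('inf') for n in nodes}
    d[source] = 0
    for u in order:
        for v, w in graph[u]:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

# Toy DAG with a negative edge: s->a->b is cheaper than s->b directly.
edges = [('s', 'a', 2), ('s', 'b', 6), ('a', 'b', -3), ('b', 't', 1)]
print(dag_shortest_paths(edges, 's'))  # {'s': 0, 'a': 2, 'b': -1, 't': 0}
```

Each edge is relaxed once, so the whole thing runs in O(V + E).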
For people who are brushing up their algorithm courses and have seen Dijkstra already, I recommend the following:
1. Subtitles on
2. 2x playback speed
No practice needed, as you'll mostly read the subtitles and just quickly confirm whether what he said was the same thing.
15:38 Relaxation is safe but this professor is dangerous.
This was a wonderful lecture. I finally understand Dijkstra's algorithm fully.
Analysis of different data structures for implementation of Dijkstra's queue:
44:52
saving for notes
Wow, the moment dr. Devadas said "alright let's work on an example of this to be clearer," I almost jumped out of the chair full of joy
LOL, "for posterity" at 28:49. Thank you professor!
Respect
He knows the stuff very well.
Very simple and effective to describe
I wonder if he has any more cushions...
00:01 Dijkstra's algorithm is a concrete algorithm for finding shortest paths.
03:14 Reconstructing the shortest path using predecessor and pi values.
09:16 The shortest path algorithm involves picking and relaxing edges until optimal delta values are obtained.
12:39 Triangle inequality and its application in shortest paths
17:36 DAGs (Directed Acyclic Graphs) are useful for finding shortest paths without negative cycles.
20:22 Shortest paths in a DAG (a special case) take O(V + E) time via topological sort and one relaxation pass.
25:26 Final values obtained using Dijkstra's algorithm for a graph without cycles.
27:35 Dijkstra's algorithm is a greedy algorithm that incrementally builds the shortest paths.
31:35 Dijkstra algorithm greedily constructs shortest paths
33:26 Dijkstra algorithm initializes the graph, sets, and vertices for finding the shortest paths.
39:04 Read the formal proof for Dijkstra
41:04 Dijkstra algorithm execution steps
46:42 Dijkstra implemented with an array structure has a complexity of Θ(V²)
49:18 Dijkstra with a Fibonacci heap runs in Θ(V log V + E) time.
Crafted by Merlin AI.
I literally don't get the triangle inequality or why it's being used. The weights of edges don't have to bear any resemblance to actual geometry, and literally in his first example he had a triangle where 3 > 1 + 1, which breaks that inequality. What?
It's not the weight of the edges ... it is the shortest path that is being compared ... helpful in writing proofs of correctness
dijkstra's algorithm 27:41
Why is the term "relax" used? I'm guessing infinity is too stressful?
Nice joke!
Good question! This is addressed in CLRS: "It may seem strange that the term 'relaxation' is used for an operation that tightens an upper bound. The use of the term is historical. The outcome of a relaxation step can be viewed as a relaxation of the constraint v.d
At 48:20 he writes O(V*V + E*1) = O(V^2), and says: "because E is order V^2".
Then at 51:00 - he writes just O(V lgV + E). But he said earlier that E is order V^2, so it means that O(V lgV + E) in worst case becomes O(V lgV + V^2), and this is again order O(V^2). So, in worst case Fibonacci heap does not help.
hmmm good observation...
I guess just like in many things in CS, we just have to learn it even though we already know something better/just as good and easier.
If the graph is such that every vertex is connected to every other vertex, then E = O(V²). In that case the array implementation and the Fibonacci heap implementation will both be O(V²). But if that's not the case, the array implementation will still be O(V²) while the Fibonacci heap's complexity becomes O(V log V + E). That's what I understood.
You're right, but the worst case isn't the only interesting case there is.
Like, if we had a succession of graphs where E grows linearly with V for some reason, then the Fibonacci heap would be O(V log V), which would be better than O(V²).
That's why we'd lose a bit of information if we just said the Fibonacci implementation is O(V²). On the other hand, dropping the E in O(V² + E) is fine, because V² > E
@@dorsal937 >=*
It depends on the graph. For graphs with a lot of edges, sure. But not all graphs are like that, so it's still useful to find all sorts of optimisations, because for some graphs they still lower the execution time substantially in practice.
Great lecture. 39:30, Yeah I have a question. You don't actually show when you put the nodes into the queue. I was assuming that it was when you first visit the node, but in practice, this didn't seem to work. I had to reinsert them back into the queue if I changed d[u]. So, does that mean that this is not actually O(V+E)? Could the worst case be O(V*E)?
2:52 Binary tensor factorization should be useful (at some point), especially if the graph has structure.
28:30 -- incredible, intuitive model of Dijkstra
Excellent Tutor. Typical MIT style
The functions d and delta describe the minimum weight paths, NOT the shortest paths. This is an important distinction that gets blurred a few times here
Great lecture. The mechanical demonstration is really nice. Thanks.
awesome course lessons. really good stuff here
Sped up at 2x, it's easier for me to focus on the key points. No need to pause and write down everything, which would slow down my understanding.
after that demo there is no way I forget again how to do Dijkstra
Interesting another lecture about paths without using the term "hops" or that routers use these algorithms.
The Dijkstra's algorithm example ending at 44:56 is not complete...
Two more steps are pending, which should give the following result in Q -> {0, 7, 3, 9, 5}.
The shortest path to D from the source is 9, not 11.
If I'm not mistaken, the shortest paths are as follows:
A->B = 7, A->C = 3, A->D = 9, A->E = 5.
Had he completed the next two steps, this would be the answer. Anyone like to back me up on this, or maybe correct me if I'm wrong please?
Professors voice and personality reminds me of Dr Sheldon Cooper
Maneesh G lol
The induction used at 10:00 seems pretty unclear to me. It says quite vaguely that induction is used over the number of steps?
Absolutely magnificent!
I do not understand the proof.
Why is O(E) = O(V^2)?
Because every vertex can be connected to every other, therefore binomial(|V|,2) = O(|V|^2)
Blaz Kelbl Thanks! :)
simon mikalsen
I think binomial(|V|, 2) = |V|(|V|-1)/2 actually counts the edges of an undirected graph.
For a directed graph, assume two vertices v1 and v2: with self-loops allowed, the possible edges are v1->v2, v2->v1, v1->v1 and v2->v2, which add up to 4 = 2². In general a directed graph has up to |V|² edges. Either way, E = O(|V|²).
think of a completed graph
Here the teachers do the work. In my school we had to prepare the projects ourselves, and if we asked even remotely valid questions about things the teacher hadn't explained, we were humiliated and told to go figure it out. It's the teachers that make this university, I swear.
Takeaway of this lecture: Gravity is greedy and causes balls to hang....
:P
I also thought that Dijkstra can compute equal-cost paths, and unequal-cost paths for load balancing; that's not mentioned. How do you detect pathing/routing loops? Why doesn't he mention these items?
Dijkstra Algorithm 27:12
I believe there is a mistake in the pseudo-code for Dijkstra: relaxation should not be done for every vertex v adjacent to u, but only for those that are still in Q. Otherwise, you end up relaxing a vertex already extracted from Q and placed into S, further lowering its d; against the assumption that its d is final (equals delta) at the time you extracted it from the priority queue.
You couldn't do that unless you had negative weight edges. Dijkstra assumes non-negative weight edges, and it can be shown by induction that a vertex v is removed from Q only when d(s, v) = delta(s, v).
Relaxing such an edge won't lower d for a vertex already in S. It might be unnecessary to perform this relaxation, but it won't break the algorithm; it's not a mistake, technically.
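For what it's worth, a common way to implement this in Python is "lazy deletion": stale heap entries are simply skipped on pop, and edges into S are never re-relaxed. A minimal sketch, with a made-up example graph (not the lecture's code):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra with a binary heap and lazy deletion.
    graph: {u: [(v, weight), ...]}; non-negative weights assumed.
    Once a vertex is popped (moved to S), any stale heap entries
    for it are skipped, so vertices in S are never relaxed again."""
    d = {source: 0}
    done = set()              # the set S from the lecture
    pq = [(0, source)]
    while pq:
        du, u = heapq.heappop(pq)
        if u in done:         # stale entry: u was already finalized
            continue
        done.add(u)
        for v, w in graph.get(u, []):
            if v not in done and du + w < d.get(v, float('inf')):
                d[v] = du + w               # relax edge (u, v)
                heapq.heappush(pq, (d[v], v))
    return d

graph = {
    'a': [('b', 7), ('c', 3)],
    'c': [('b', 1), ('d', 2)],
    'b': [('d', 2)],
}
print(dijkstra(graph, 'a'))  # {'a': 0, 'c': 3, 'b': 4, 'd': 5}
```

The `v not in done` guard makes the "only relax vertices still in Q" reading explicit, even though, as noted above, dropping it wouldn't break correctness with non-negative weights.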
Holy sh*t! That demo was beautiful! I loved it!
Dijkstra is my homie.
best lecture!
amazing content!
I am so confused by the symbolic statements, actually. Is there any way I can practise reading and writing algorithms in symbolic notation?
I think he used those symbols in a way that seems more difficult than it actually is!
Dijkstra starts at 27:42
Can someone help me with the code for this approach.
Why doesn't Dijkstra's algorithm work if the graph has negative weight edges?
Because it would always go back and forth between the two nodes of that edge, as the distance decreases every time.
Dijkstra prioritizes paths with small weights and might totally ignore paths with larger weights, even though those paths' weights might decrease in the future.
for ex. if I want the shortest path from A to Z and I have two paths such as
path 1 weights respectively : 5, 100, -105
path 2 weights respectively: 8, 2, 5
Dijkstra would walk like:
path 1: 5
path 2: 8
path 1: 100
path 2: 2
path 2: 5
and present path 2 as the shortest path
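You can watch exactly this failure happen with a toy run. The sketch below is a plain binary-heap Dijkstra (my own illustrative code, not the lecture's) on the two paths from the comment above:

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra; correctness relies on non-negative weights."""
    d = {source: 0}
    done = set()
    pq = [(0, source)]
    while pq:
        du, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, []):
            if v not in done and du + w < d.get(v, float('inf')):
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

# The two paths from the comment:
# path 1: A->B->C->Z with weights 5, 100, -105 (true total: 0)
# path 2: A->D->E->Z with weights 8, 2, 5      (total: 15)
graph = {
    'A': [('B', 5), ('D', 8)],
    'B': [('C', 100)],
    'C': [('Z', -105)],
    'D': [('E', 2)],
    'E': [('Z', 5)],
}
print(dijkstra(graph, 'A')['Z'])  # 15, but the true shortest path costs 0
```

Z is finalized at distance 15 (via path 2) before the negative edge C->Z is ever examined, so the cheaper path 1 is never found.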
When do we call an algorithm safe?
Guessing from the lecturer, it would be safe iff it takes a finite number of steps to solve the problem and it solves the problem for all of its instances.
@@nanidachamman2645 now. Your answer has just made it interesting.
Why is decrease-key Θ(1) with an array but not Θ(n)? More keys should mean longer time, no?
Because with an array implementation you can access elements in constant time: array[0] = array[0] - 1, done.
With a min-heap, however, decreasing a vertex's key can violate the min-heap property with respect to its parent (a parent must be no larger than its children), so the entry has to bubble up toward the root to restore the property, and that takes O(log V) time.
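To make the O(log V) concrete: in a binary min-heap, decrease-key bubbles the changed entry up toward the root until the heap property holds again. A rough sketch over a 0-indexed array heap (hypothetical helper, not from the lecture):

```python
def decrease_key(heap, i, new_val):
    """Decrease heap[i] to new_val in a 0-indexed binary min-heap.
    Bubbling the entry up costs O(log n) swaps in the worst case;
    in a plain unsorted array there is no invariant to restore,
    so the same update would be O(1)."""
    assert new_val <= heap[i], "decrease-key must not increase the key"
    heap[i] = new_val
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break                       # heap property restored
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

h = [1, 3, 5, 7, 9, 6]   # a valid min-heap
decrease_key(h, 4, 0)     # decrease the 9 down to 0
print(h[0])               # 0 -- the new minimum bubbled up to the root
```

The chain of parent swaps is at most the height of the heap, hence the log factor in the binary-heap Dijkstra bound.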
The camerawork on this one is awful. So zoomed in.
@11:00: That is not a proof by induction.
+Chris Lee It definitely is.
He showed a base case and showed how relaxation works on (u,v). You assumed the principle holds for u and he proved it holds for v too and therefore generalizing it to every edge in the graph
I had no idea ikea made algorithms
Wonderful. Thankyou.
For notes geeksforgeeks helps for Small revision.
He is more like presenting the Chapters in CLRS, rather than actually teaching.
i love these videos!
Why does his voice sound similar to Sundar Pichai's?
Ya like DAGs? Dijkstra does.
I really would value more context as to why/how these things are important to know. Even if the answer to that is that I might need to be able to read a diagram some day.
Have you used Google Maps or any other kind of journey planner?
Isn’t it obvious why the problem of finding the shortest path is practical?
43:15
I've missed the blackboard.
Tupolev Tu-95
lolll Professor Devadas you are hilarious
press ctr + w to get question while watching the video
Could someone explain to me why, for every implementation of the data structure Q, the time complexity of Dijkstra's algorithm is O(|E| × cost of decrease-key + |V| × cost of extract-min)? Thanks!
The algorithm extracts the min at most |V| times and relaxes each edge only once.
This man speaks like Christopher Walken :P
I feel bad to be dumb
You can be dumb at some things once in a while, but not dumb at all things all the time. Hope abounds.
This presentation about Dijkstra's algorithm is a joke-like showing something to kids. Look for an explanation given by Erik Demaine. He will treat you like an adult.
why does he need a paper with him for the lecture? He is just reading off it and sounds like a robot. He should be smart enough to not rely on a paper telling him what to say.
stutters too much, choppy delivery too
+ElphaTutorials Maybe it's just a case of not having enough mathematical background; start with Calculus Revisited by H. Gross.
attend my school :P
Get tenured and teach man
OSPF represent
Taking a mathematics course and now referencing MLB's Lenny Dykstra is so immature #twofoldreference. You should be fired for wasting resources that could have helped me create real effective change in communities, but you're a corrupt man who went behind my back with my aunt (who would rather see me fail and not be joyful).
I don’t like sewing donuts
That is twistirs
he sounds like Rick from Rick and Morty... great lecture though.
Not explained very well. Hard to understand.
Karesh Arunakirinathan for you.
Compared to my class, this was explained extremely clearly.
@@evelyngalvan771 Bitch don't act so smart. This was a shitty explanation to a simple algorithm and you can't admit that.
@@daddydangerous20 ehh it was pretty good. He didn't rush it
@@daddydangerous20 This lecture is not for people who just know the name and the steps of an algorithm. Go watch a 5-minute video on this topic at 2x and you'll get what you're looking for. This lecture is meant for the students who create new algorithms; no one expects dumb people like you to solve a difficult problem, you just follow things blindly.
like for posterity haha