It's amazing how you made it look like so simple. Accurate and precise explanation. Thanks a lot professor.
Glad it was helpful!
This is by far the best video I watched on this topic, thank you!
Thank you. I'm glad you found it helpful.
Really Well Explained!
Glad it was helpful!
The channel deserves more subscribers!!
Thanks for the kind words.
Great explanation. Thank you Professor!
I think it's worth mentioning that the reason that all the examples you mentioned are directed and acyclic is that *time* is the common factor between them. I struggle to think of a practical application of DAGs that isn't specifically a DAG _because_ of time.
That is a fair point. We're generally talking about dependencies, and a dependency forces one thing to be done before another, which is about time.
@@maryelainecaliff How about order? as opposed to time??
@@temitopehardhekheyhe7359 Do you have an example of an ordering that isn't time-based?
thanks, professor!
Great video, thanks!
Thanks ma'am, learning from India.
Glad you found it helpful.
really good! thanks
Glad you like it.
thank you so much, you saved me! :D
I'm glad that you found the video helpful.
what if we have 3 vertices having 0 in degree in the initial step?
We add all three to the queue.
I don't understand, this is Depth First Search.. Why all the complication with In-Degree arrays ??
Actually, it's a breadth-first search, deliberately limited so that a node is not taken until all of the nodes it depends on are taken. The in-degree array is how we ensure that we don't take a node until all of its dependencies have been taken, which is the point of a topological ordering or topological sort.
You could also do a topological ordering depth-first, but the breadth-first approach is generally agreed to produce the more desirable result.
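To make the breadth-first approach described above concrete, here is a minimal Python sketch of a Kahn-style topological sort. The adjacency-list representation (a dict mapping each vertex to the vertices it points to) and all names are my own assumptions, not taken from the video:

```python
from collections import deque

def topological_sort(graph):
    """Breadth-first (Kahn-style) topological sort.
    graph: dict mapping each vertex to a list of vertices it points to."""
    # Count incoming edges for every vertex.
    in_degree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            in_degree[w] += 1

    # Start with every vertex that has no dependencies
    # (so three vertices of in-degree 0 just means three initial entries).
    queue = deque(v for v in graph if in_degree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        # "Remove" v's outgoing edges; a neighbor whose in-degree
        # drops to 0 now has all its dependencies satisfied.
        for w in graph[v]:
            in_degree[w] -= 1
            if in_degree[w] == 0:
                queue.append(w)

    if len(order) != len(graph):
        raise ValueError("graph has a cycle; no topological order exists")
    return order

# Example: A must precede B and C, which must both precede D.
print(topological_sort({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}))
# → ['A', 'B', 'C', 'D']
```

The in-degree array is exactly the guard discussed above: a vertex is only enqueued once every vertex it depends on has already been taken.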
@@maryelainecaliff thanks, but I did some tests with DFS and a hashmap that stores visited nodes and it's yielding the same result, so what's the difference?
@@exe.m1dn1ght I can't tell you without knowing more precisely what you did.
Namaste 🙏 Teacher
Thank you.
I just can't help but think that this particular implementation might be kinda expensive?? O(N^2)??
The time complexity of the algorithm is O(V+E), where V is the number of vertices and E is the number of edges. We don't typically think in terms of N when discussing the complexity of graph algorithms, because there are two different relevant sizes: the number of vertices (V) and the number of edges (E).
Setting up the initial array of indegrees may be either V^2 or V+E, depending on your choice of graph representation (adjacency matrix or adjacency list).
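As a hypothetical illustration of that representation difference (the graphs and function names here are my own, not from the video): with an adjacency matrix, finding each vertex's in-degree means scanning a full column, Θ(V) work per vertex and Θ(V²) total, while an adjacency list touches each vertex and each edge only once, Θ(V+E):

```python
def in_degrees_from_matrix(matrix):
    """matrix[i][j] == 1 means an edge i -> j.
    Scans every matrix entry: Theta(V^2)."""
    n = len(matrix)
    return [sum(matrix[i][j] for i in range(n)) for j in range(n)]

def in_degrees_from_lists(adj):
    """adj[i] lists the targets of vertex i's outgoing edges.
    Visits each vertex and each edge once: Theta(V + E)."""
    indeg = [0] * len(adj)
    for targets in adj:
        for j in targets:
            indeg[j] += 1
    return indeg

# The same 3-vertex graph (0 -> 1, 0 -> 2, 1 -> 2) in both representations:
matrix = [[0, 1, 1],
          [0, 0, 1],
          [0, 0, 0]]
adj = [[1, 2], [2], []]
print(in_degrees_from_matrix(matrix))  # → [0, 1, 2]
print(in_degrees_from_lists(adj))      # → [0, 1, 2]
```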
@@maryelainecaliff Oops sorry😄, my bad🤦♂, am addicted to performance and fast code and that pseudocode implementation just kinda flew right in my face, but yeah, understand what you mean 🙂... gracias for replying!😃
@@temitopehardhekheyhe7359 It's always important to remember that performance is a relative thing. There are problems for which O(n^2) is ridiculously slow and others where it is the minimum performance required. Computational complexity must always be considered in context.
Kahn's algorithm, if I'm not mistaken. en.wikipedia.org/wiki/Topological_sorting
wow...