Mastering Dynamic Programming - How to solve any interview problem (Part 1)

  • Published 21 Nov 2024

COMMENTS • 445

  • @dave6012
    @dave6012 Рік тому +339

    Wow, “recursion + memoization” is the most eloquent way I’ve ever heard it defined. Total crystallizing moment for me, even though I’ve used dynamic programming solutions before.
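    A minimal sketch of that "recursion + memoization" idea, using Fibonacci and assuming the common convention fib(1) = fib(2) = 1 (the video's exact code may differ):
    def fib(n, memo=None):
        if memo is None:
            memo = {}
        if n <= 2:
            return 1
        if n not in memo:
            # each subproblem is computed once and then reused
            memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
        return memo[n]

    print(fib(50))  # 12586269025, returns instantly instead of taking minutes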

    • @TechWithNikola
      @TechWithNikola  Рік тому +12

      Thank you for the comment. I'm glad that you've found that reasoning useful :)

    • @Pseudo___
      @Pseudo___ Рік тому +13

      How was that not obvious? Given you’ve done it before

    • @tantalus_complex
      @tantalus_complex Рік тому +90

      ​@@Pseudo___ What are you expecting them to say?
      You're just insulting them while showing off your own ignorance about how learning works. Way to be a pointless buzzkill about someone's a-ha moment.
      Many explanations _don't_ reduce DP to such a simple, concise rule of thumb.
      So if a learner of DP didn't get lucky enough to benefit from one or more accessible explanations of the subject by educators, then _of course_ they will require time and experience before they'll be able to synthesize such an explanation for themselves.
      And by far, the most likely time for a simple explanation to finally "click" with any learner is _after_ they have already successfully developed a good _intuition_ for the subject.
      This person vocalized the joy of that final "click" - which is the glory of education itself. That's just awesome. It certainly does not warrant your unsolicited negativity.

    • @felipao2134
      @felipao2134 Рік тому

      @@tantalus_complex nice comment

    • @Vastaway
      @Vastaway 9 місяців тому +1

      I think the main problem is that recursion is too slow.
      Tabulation combines memoization and greedy very well, and although it runs in the same time complexity as recursive + memo solutions, it generally takes less time and memory, since recursive calls are expensive both time-wise and on the call stack.
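      For reference, a sketch of the tabulation (bottom-up) version being described, again for Fibonacci; same O(n) work as memoized recursion, but with no call-stack overhead:
      def fib_tab(n):
          if n <= 2:
              return 1
          table = [0] * (n + 1)          # table[i] will hold fib(i)
          table[1] = table[2] = 1
          for i in range(3, n + 1):
              table[i] = table[i - 1] + table[i - 2]
          return table[n]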

  • @miguelrosa6394
    @miguelrosa6394 9 місяців тому +31

    Dynamic programming is recursion + memoization. Thank you so much, I will never forget that connection; it made the concept click for me.

    • @TechWithNikola
      @TechWithNikola  8 місяців тому +2

      You’re welcome! Glad you’ve found it useful. That’s the key realization, and interestingly it took me a long time before I started phrasing it like that.

  • @JulenSanAlejoGonzalez
    @JulenSanAlejoGonzalez 11 місяців тому +311

    Dynamic programming -> Dont do what is already done

    • @coopergates9680
      @coopergates9680 Місяць тому +1

      This. I wish we didn't get asked to do so many coding 'exercises' that are already very well solved problems.
      If you're referring to not computing the same answer over and over, then true. Lol

  • @patrickstival6179
    @patrickstival6179 Рік тому +108

    I think I understand it after rewatching at least 20 times. Putting that knowledge into code is a whole other story. I'm going to search for some more dynamic programming problems

    • @TechWithNikola
      @TechWithNikola  Рік тому +28

      I think that’s normal if you really want to understand it. I think it took me many many attempts as well, and the only way to truly get good at it is to solve many problems.

    • @DrZainabYassin
      @DrZainabYassin 11 місяців тому

      Can I ask something

    • @wbrehaut
      @wbrehaut 11 місяців тому +11

      @@DrZainabYassin You don't need permission to ask something. Just go ahead and ask!

    • @jst8922
      @jst8922 10 місяців тому +19

      Here are some classic dynamic programming problems:
      part I:
      Fibonacci Numbers:
      Problem: Compute the nth Fibonacci number.
      Dynamic Programming Approach: Use memoization or bottom-up tabulation to store and reuse previously computed Fibonacci numbers.
      Longest Common Subsequence (LCS):
      Problem: Given two sequences, find the length of the longest subsequence present in both of them.
      Dynamic Programming Approach: Build a table to store the lengths of LCS for different subproblems.
      Longest Increasing Subsequence (LIS):
      Problem: Given an unsorted array of integers, find the length of the longest increasing subsequence.
      Dynamic Programming Approach: Build a table to store the lengths of LIS for different subproblems.
      Knapsack Problem:
      Problem: Given a set of items, each with a weight and a value, determine the maximum value that can be obtained by selecting a subset of items with a total weight not exceeding a given limit.
      Dynamic Programming Approach: Create a table to store the maximum value for different subproblems.
      Coin Change Problem:
      Problem: Given a set of coin denominations and a total amount, find the number of ways to make the amount using any combination of coins.
      Dynamic Programming Approach: Build a table to store the number of ways to make change for different subproblems.
      Edit Distance:
      Problem: Given two strings, find the minimum number of operations (insertion, deletion, and substitution) required to convert one string into another.
      Dynamic Programming Approach: Build a table to store the minimum edit distance for different subproblems.
      Matrix Chain Multiplication:
      Problem: Given a sequence of matrices, find the most efficient way to multiply them.
      Dynamic Programming Approach: Build a table to store the minimum number of scalar multiplications needed for different subproblems.
      Subset Sum:
      Problem: Given a set of non-negative integers, determine if there is a subset that sums to a given target.
      Dynamic Programming Approach: Build a table to store whether a subset sum is possible for different subproblems.
      Rod Cutting Problem:
      Problem: Given a rod of length n and a table of prices for rod pieces of various lengths, find the maximum value obtainable by cutting the rod and selling the pieces.
      Dynamic Programming Approach: Build a table to store the maximum value for different subproblems.
      Maximum Subarray Sum:
      Problem: Given an array of integers, find the contiguous subarray with the largest sum.
      Dynamic Programming Approach: Keep track of the maximum subarray sum ending at each position in the array.
      part II:
      Palindrome Partitioning:
      Problem: Given a string, partition it into palindromic substrings using the minimum number of cuts.
      Dynamic Programming Approach: Build a table to store the minimum number of cuts needed to partition substrings into palindromes.
      Word Break Problem:
      Problem: Given a string and a dictionary of words, determine if the string can be segmented into a space-separated sequence of dictionary words.
      Dynamic Programming Approach: Build a table to store whether a substring can be segmented using the given dictionary.
      Longest Palindromic Substring:
      Problem: Given a string, find the longest palindromic substring.
      Dynamic Programming Approach: Build a table to store whether substrings are palindromic.
      Count Distinct Subsequences:
      Problem: Given a string, count the number of distinct subsequences of it.
      Dynamic Programming Approach: Build a table to store the count of distinct subsequences for different subproblems.
      Maximum Sum Increasing Subsequence:
      Problem: Given an array of integers, find the maximum sum of increasing subsequence.
      Dynamic Programming Approach: Build a table to store the maximum sum of increasing subsequences for different subproblems.
      Largest Sum Rectangle in a 2D Matrix:
      Problem: Given a 2D matrix of integers, find the largest sum rectangle.
      Dynamic Programming Approach: Fix a pair of boundary columns, collapse each row between them into a sum, and reduce the problem to finding the largest sum subarray of that 1D array.
      Egg Dropping Problem:
      Problem: You are given k eggs and a building with n floors. Find the minimum number of drops needed to determine the critical floor from which eggs start to break.
      Dynamic Programming Approach: Build a table to store the minimum number of drops for different subproblems.
      Counting Paths in a Grid:
      Problem: Given a grid, find the number of unique paths from the top-left corner to the bottom-right corner.
      Dynamic Programming Approach: Build a table to store the number of paths for different positions in the grid.
      Wildcard Pattern Matching:
      Problem: Given a text and a wildcard pattern, implement wildcard pattern matching with '*' and '?'.
      Dynamic Programming Approach: Build a table to store whether substrings match for different subproblems.
      Minimum Cost Path in a Matrix:
      Problem: Given a 2D matrix with non-negative integers, find the minimum cost path from the top-left corner to the bottom-right corner.
      Dynamic Programming Approach: Build a table to store the minimum cost for different subproblems.
      part III:
      Distinct Paths in a Grid:
      Problem: Given a grid of m x n, find the number of unique paths from the top-left corner to the bottom-right corner, where movement is allowed only down or to the right.
      Dynamic Programming Approach: Build a table to store the number of unique paths for different positions in the grid.
      Count Palindromic Subsequences:
      Problem: Given a string, count the number of palindromic subsequences.
      Dynamic Programming Approach: Build a table to store the count of palindromic subsequences for different subproblems.
      Maximum Length Chain of Pairs:
      Problem: Given pairs of integers, find the length of the longest chain of pairs such that the second element of the pair is greater than the first element.
      Dynamic Programming Approach: Sort the pairs and apply a dynamic programming approach to find the longest chain.
      Longest Bitonic Subsequence:
      Problem: Given an array of integers, find the length of the longest bitonic subsequence. A bitonic subsequence is one that first increases and then decreases.
      Dynamic Programming Approach: Build tables to store the length of increasing and decreasing subsequences for different subproblems.
      Partition Equal Subset Sum:
      Problem: Given an array of positive integers, determine if it can be partitioned into two subsets with equal sum.
      Dynamic Programming Approach: Build a table to store whether a subset with a particular sum is possible.
      Maximum Product Subarray:
      Problem: Given an array of integers, find the contiguous subarray with the largest product.
      Dynamic Programming Approach: Keep track of both the maximum and minimum product subarrays at each position.
      Decode Ways:
      Problem: A message containing letters from A-Z can be encoded into numbers. Given a string, determine the number of ways it can be decoded.
      Dynamic Programming Approach: Build a table to store the number of ways to decode substrings for different subproblems.
      Shortest Common Supersequence:
      Problem: Given two strings, find the shortest string that has both strings as subsequences.
      Dynamic Programming Approach: Build a table to store the length of the shortest common supersequence for different subproblems.
      Maximum Profit in Stock Market:
      Problem: Given an array representing stock prices on different days, find the maximum profit that can be obtained by buying and selling stocks.
      Dynamic Programming Approach: Keep track of the minimum stock price and maximum profit at each position.
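      As a concrete illustration of one entry in the list above, here is a generic tabulation sketch for Longest Common Subsequence (a standard formulation, not tied to any particular source):
      def lcs_length(a, b):
          n, m = len(a), len(b)
          dp = [[0] * (m + 1) for _ in range(n + 1)]   # dp[i][j] = LCS length of a[:i], b[:j]
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  if a[i - 1] == b[j - 1]:
                      dp[i][j] = dp[i - 1][j - 1] + 1
                  else:
                      dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
          return dp[n][m]

      print(lcs_length("ABCBDAB", "BDCABA"))  # 4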

    • @ggmaddr
      @ggmaddr 9 місяців тому +5

      Huge respect for your perseverance. Learning complex topics are frustrating, just put hours, be calm and watch everything again

  • @AusReddit
    @AusReddit 6 місяців тому +10

    Fun fact: for the maze problem, you can also solve this with combinatorics. The key thing to note is that no matter which path you take, the number of right moves and the number of down moves are always the same. For the 18 x 6 grid example, you'd need to move 17 down and 5 right in total.
    Therefore, for 22 total moves, we pick 17 spaces for the down moves and leave the remaining for the right moves. C(22, 17) = 26334
    This is equivalent to picking 5 spaces for the right moves and leaving the remaining for the down moves. C(22, 5) = 26334
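    These numbers can be checked directly with Python's standard library:
    import math
    print(math.comb(22, 17))  # 26334
    print(math.comb(22, 5))   # 26334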

  • @bolafahmi
    @bolafahmi Місяць тому +2

    You just helped me understand something I was really afraid of and thought was black magic and a buzzword. Your explanation was really simple and made me enjoy learning it. Thanks a lot.

  • @asj9469
    @asj9469 9 місяців тому +6

    I appreciate this content so much. I personally find recursive functions less intuitive and more of a headache because I constantly have to keep track of what I just did. If I lose it? then yea I have to start over. Maybe there's a smarter way to approach them, but I haven't come across any intuitive explanations for the past 3 years I've been into coding.
    Your bottom-up approach makes so much more sense though. I just love how we are storing the subproblems for later use. It makes accessing previous information very intuitive and less of a headache for me. I can see the pattern there, and I believe I could apply it to other problems too if I could frame them right.
    Thank you so much!

    • @TechWithNikola
      @TechWithNikola  8 місяців тому +1

      Hey, thank you for the comment. Yeah, recursion can be tricky but it becomes easy over time. I will consider making some videos about recursion, and try to share my thought process.
      Bottom-up approach is a good way to improve, but be careful as it can’t replace recursion in many non-DP problems.

  • @driden1987
    @driden1987 Рік тому +13

    I remember solving the knapsack problem in C++ back when I was studying algorithms at the University.
    I found it so hard at first, but once it clicks it’s awesome

    • @TechWithNikola
      @TechWithNikola  Рік тому +4

      Indeed. I had the same experience a long time ago. I think knapsack was one of the first problems I learned about.

  • @divy1211
    @divy1211 Рік тому +233

    This has to be one of the best dynamic programming videos out there. Props!
    Something that I feel could have been mentioned though is that a lot of the time it is also useful to reframe the problem, which can make the solution a lot more intuitive as well.
    Whilst I understand that the point of the last maze problem is to teach DP effectively (it is a good and intuitive example for how a bigger problem can be reframed in terms of sub problems!), a useful observation is that the distance which the rabbit needs to move in an N×M grid is always going to be N-1 rights and M-1 downs. The number of ways is just the total possible arrangements of that many rights and downs, which is given by (N+M-2)! / [(N-1)!(M-1)!]. DP is thus not required for the last problem at all!

    • @TechWithNikola
      @TechWithNikola  Рік тому +46

      Thanks a lot for the comment. Yeah, you are totally right. I was having back-and-forth thoughts about whether I should mention this, and decided against it because my focus was on explaining the intuition behind DP. Similarly, it is possible to compute N-th Fibonacci number in O(log(N)) time complexity and O(1) memory complexity. In retrospect, I should have mentioned both without affecting the explanation of the DP approach. I will remember this for future videos.
      Thanks again for the feedback. These types of comments are very useful.

    • @jonaskoelker
      @jonaskoelker Рік тому +20

      I guess the maze example could be made interesting by adding obstacles, such that counting the solutions without enumerating them becomes a lot harder.
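      A sketch of that obstacle variant (hypothetical layout: True marks a blocked cell); the closed-form formula no longer applies, but the "paths to a cell = paths from above + paths from the left" recurrence still does:
      def count_paths_with_obstacles(blocked):
          n, m = len(blocked), len(blocked[0])
          dp = [[0] * m for _ in range(n)]
          dp[0][0] = 0 if blocked[0][0] else 1
          for i in range(n):
              for j in range(m):
                  if blocked[i][j]:
                      continue               # no paths go through an obstacle
                  if i > 0:
                      dp[i][j] += dp[i - 1][j]
                  if j > 0:
                      dp[i][j] += dp[i][j - 1]
          return dp[n - 1][m - 1]

      grid = [[False, False, False],
              [False, True,  False],
              [False, False, False]]
      print(count_paths_with_obstacles(grid))  # 2 (out of 6 unobstructed paths)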

    • @saveriov.p.7725
      @saveriov.p.7725 10 місяців тому

      Can you explain where you got this formula?

    • @arturosalas5399
      @arturosalas5399 9 місяців тому

      ​@@saveriov.p.7725
      The # of combinations of r unordered elements out of a total of n is:
      n! / (r! * (n-r)!)

    • @bizw
      @bizw 5 місяців тому

      @@saveriov.p.7725 Since you need m+n-2 moves total to reach the end, and there are m-1 rights and n-1 downs, you just need to calculate the number of ways to place the m-1 rights into the m+n-2 slots (the down moves will naturally take the slots not taken by right moves). This is m+n-2 choose m-1, which is his formula.

  • @jimeshka-sr7ep
    @jimeshka-sr7ep Місяць тому +1

    This is the first video of your channel I'm watching and I am blown away by the quality of your explanation. Thank you so much Nikola!

  • @levelplusplus
    @levelplusplus Рік тому +202

    Tech with Nikola = Tequila 🍸🎉

  • @sophiophile
    @sophiophile 9 місяців тому +1

    Great video, the only thing I might recommend is adding a diagram explaining how `for i in range(1, m +1):` builds up the memo dict from the bottom up.
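    In lieu of a diagram, a sketch along those lines (not the video's exact code; target 13 and coins {1, 4, 5} are assumed here) with a print added so you can watch the table grow one entry per iteration:
    def min_coins(m, coins):
        memo = {0: 0}
        for i in range(1, m + 1):        # solve targets 1, 2, ..., m in order
            options = [memo[i - c] for c in coins if i - c >= 0 and memo[i - c] is not None]
            memo[i] = min(options) + 1 if options else None   # None = unreachable
            print(i, memo[i])
        return memo[m]

    min_coins(13, [1, 4, 5])  # final entry is 3 (e.g. 4 + 4 + 5)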

    • @TechWithNikola
      @TechWithNikola  9 місяців тому

      Thanks. Yeah, that would have been useful.

  • @SALOway
    @SALOway Рік тому +3

    I don't know how, but your video came across as timely as ever!
    I am a student and I was given a recursion problem. It may not have been necessary, but I'm sure glad that I was able to optimize the recursion (the naive version is so much slower than the iterative approach that already for the 50th element you have to wait more than 30 seconds, while the iterative approach manages in less than a second).

    • @TechWithNikola
      @TechWithNikola  Рік тому

      Thank you for the comment. I’m glad you’ve found it useful and that the timing was perfect! 😀

  • @treelibrarian7618
    @treelibrarian7618 Рік тому +5

    wonderful video! so well explained! I had a weird distorted view of what dynamic programming meant before this.
    While thinking about the maze problem it occurred to me that it's really a 2D Fibonacci series.
    Also, only a 1D array is needed, reducing memory complexity and cache usage. You can even keep a running total for each column, initialized to 1, and forgo the initial setup (except for zeroing the array memory), halving the memory reads.
    In C it looks like:
    #include <string.h>

    int paths(int n, int m) {
        int t;
        if (m < n) { t = m; m = n; n = t; } // make n (rows) the smaller dimension
        --n; // reduce n by one since the first row is handled by t = 1 below
        int memo[n]; // a VLA can't take an initializer, so zero it explicitly
        memset(memo, 0, sizeof memo);
        for (int i = 0; i < m; ++i) { // column counter
            t = 1; // first row always 1
            for (int j = 0; j < n; ++j) { // row counter
                t = (memo[j] += t); // do the work
            }
        }
        return t; // we know what the final value was already
    }
    /// asm version (Linux calling convention: n in rdi, m in rsi)
    paths:
        cmp rdi, rsi
        cmovl rax, rsi
        cmovl rsi, rdi
        cmovl rdi, rax ;; conditional swap so rsi holds the smaller dimension
        dec rsi
        mov rcx, rsi
        mov rdx, rsp ;; using stack space directly for the array
        xor rax, rax
    .zloop:
        mov [rdx], rax ;; memory clearing (using int64's here)
        sub rdx, 8
        sub rcx, 1
        jnz .zloop
    .iloop:
        mov rax, 1
        mov rcx, rsi
        mov rdx, rsp ;; reset running total, array address pointer and row count each column
    .jloop:
        add rax, [rdx] ;; do the work
        mov [rdx], rax
        sub rdx, 8 ;; iterate
        sub rcx, 1
        jnz .jloop ;; inner loop should execute once per clock cycle
        sub rdi, 1
        jnz .iloop
        ret ;; result is already in rax, no registers needed to be spilled to stack

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thank you! I’m so glad to hear that this video cleared some things up.
      Your observations are correct. This is why I like bottom up approach more because it allows you to do these kinds of optimizations.
      I like the ASM solution 😀

  • @systemloc
    @systemloc 10 місяців тому +5

    This video is epic. The way everything is explained makes it very easy to understand. The audio is super well recorded. The visuals are well done. If that's not enough, he explains the solution in code in both a recursive and not recursive fashion. I don't use recursion much, and I didn't really understand how to switch between recursive and non-recursive, and now I understand that better as a bonus. Instant subscribe. I hope you make lots of videos.

    • @TechWithNikola
      @TechWithNikola  9 місяців тому +3

      Thanks a lot for the comment. I'm very happy when I hear from people that find it useful and informative. I will definitely make lots more videos, and I hope I'll have more time in the future to upload more regularly.

  • @tkingless
    @tkingless Рік тому +2

    I knew the examples but didn't know what dynamic programming was until watching this nice explanation. It is like connecting broken pieces into a larger whole. Looking forward to the next part.

    • @TechWithNikola
      @TechWithNikola  Рік тому

      Yeah, it's exactly like connecting broken pieces into a large one. I'm glad you've found the video useful :)

  • @Antonia-e8l4w
    @Antonia-e8l4w 11 місяців тому +5

    All the great performers I have worked with are fuelled by a personal dream.

  • @sazh1
    @sazh1 Рік тому +5

    There's a slight issue for Coin Problem: How many ways
    The given solution does not account for duplicates such as 1+1+2 and 1+2+1 (which together should count as 1 possible solution instead of 2)
    The example 5, [1,4,5] should have an output of 3:
    1+1+1+1+1
    1+4
    5
    But instead is 4 because it counts 4+1 as a separate solution to 1+4.
    An iterative solution that handles this would be for example:
    from collections import defaultdict

    def count_ways(amount, coins):
        memo = defaultdict(int)
        memo[0] = 1
        for coin in coins:                   # iterating coins in the outer loop ignores order
            for i in range(coin, amount + 1):
                memo[i] += memo[i - coin]
        return memo[amount]

    • @TechWithNikola
      @TechWithNikola  Рік тому

      Hello, thanks for the comment.
      > The given solution does not account for duplicates such as 1+1+2 and 1+2+1 (which together should count as 1 possible solution instead of 2)
      Why should this count as one solution? Note that the problem definition explicitly says that these should count as 2 solutions, and 1+4 and 4+1 are explicitly called out as an example (see example at 14:04).

    • @sazh1
      @sazh1 Рік тому

      ​@@TechWithNikola oh ur right guess i just completely ignored that lol.
      If order matters then the given solution works.

    • @sazh1
      @sazh1 Рік тому +1

      The problem does become interesting if duplicates were to be excluded though.

  • @flaresaccount1754
    @flaresaccount1754 Місяць тому

    This video is fantastic and definitely helping my Intro to Algos grade, thanks!!!

  • @Janice-h1u
    @Janice-h1u 11 місяців тому

    Be thankful when you don't know something for it gives you the opportunity to learn.

  • @franepoljak9605
    @franepoljak9605 5 місяців тому +1

    This is by far the best explanation I've ever seen for dynamic prorgamming!

  • @TanigaDanae
    @TanigaDanae Рік тому +1

    Another solution for fibbonachi was the "sliding window" where you work your way up similar to the bottom up.
    Example in C:
    #include <stddef.h>

    size_t fib(size_t n)
    {
        size_t high = 1, low = 1, tmp;
        for (size_t i = 2; i < n; ++i)
        {
            // add
            low = high + low;
            // swap
            tmp = high; high = low; low = tmp;
        }
        return high;
    }
    Edit: I am not sure this is called dynamic programming.

    • @TechWithNikola
      @TechWithNikola  Рік тому +2

      Yup, this would work and use O(1) memory. This is the main advantage of the bottom-up approach because it makes it easy to reuse memory of memoized solutions that we don't need anymore.

  • @rodneymkay
    @rodneymkay Рік тому +18

    Really cool. Found this channel through the git video and started watching all the other videos. Really enjoying them so far, so thanks for making them

    • @TechWithNikola
      @TechWithNikola  Рік тому +2

      I'm so glad that you're enjoying my videos. Thanks for taking the time to leave a comment.

  • @justinv3512
    @justinv3512 9 місяців тому +5

    Whenever a technical concept is explained in an Eastern European accent, you know you just struck gold.

  • @temurson
    @temurson Рік тому +15

    Appreciate your video! I'm decent at algorithmic problems, and knew all of the basic dynamic problems you mentioned in this video, but I liked your emphasis on first writing a brute force solution and then optimizing it.
    I was a bit confused by your choice to always use bottom-up approach, because in my opinion bottom-up can sometimes be difficult to write as it can be hard to determine which states to update first. I've always found the recursive approach more intuitive, but maybe it's just me.
    Looking forward to part 2! Hope you can help me find some intuition to solve harder DP problems as those are still pretty difficult for me.

    • @TechWithNikola
      @TechWithNikola  Рік тому +2

      Thank you for the comment. Always using the bottom-up approach is a personal preference. The main advantage is that it's easier to reuse memory for memoized solutions when they are not required anymore. For example, to solve Fibonacci with DP, we don't have to use O(N) memory. Instead, we can only store the last two values. This is very easy to do when using the bottom-up approach, but it's not obvious to me how to do it with the recursive one. Hopefully that makes sense.
      You are right that the recursive approach is more intuitive and I think it's a perfectly fine approach in most cases.
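      A sketch of that "store only the last two values" idea (assuming the indexing convention fib(1) = 1):
      def fib(n):
          prev, curr = 0, 1                    # fib(0), fib(1)
          for _ in range(n - 1):
              prev, curr = curr, prev + curr   # slide the window forward
          return curr if n >= 1 else 0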

    • @nikhilsinha2191
      @nikhilsinha2191 11 місяців тому

      Same, buddy. I too found the recursive approach to be more intuitive, but looping solutions generally perform better than recursion, since recursion uses a stack behind the scenes.

  • @danielcristianvasilescu6025
    @danielcristianvasilescu6025 2 місяці тому

    You are something special when it comes to explaining complex stuff, with good examples and with good animations and presentation in general. I really hope to see more videos from you. Already subscribed.

  • @KablammoManYT
    @KablammoManYT 11 місяців тому +2

    I love how you could just solve the maze one using combinations but you did it recursively anyway:
    import math

    def grid_paths(m, n):
        return math.factorial(m + n - 2) / (math.factorial(m - 1) * math.factorial(n - 1))

    • @TechWithNikola
      @TechWithNikola  11 місяців тому +2

      Correct. The intent was to show ideas behind dynamic programming though. Other problems can also be solved differently, and in better time complexity - for example, fibonacci can be solved in O(lg N).
      One thing to keep in mind is that the dp approach can easily be extended to solve the maze problem with obstacles or other constraints.

    • @rotemlv
      @rotemlv 29 днів тому +1

      Just noting that if, for some reason, you would want to use this implementation with a larger grid (500,500 or so), you'll need to floordiv (edited since I confused this name with idiv again):
      return math.factorial(m+n-2)//(math.factorial(m-1)*math.factorial(n-1))
      Otherwise it floats away.

    • @KablammoManYT
      @KablammoManYT 29 днів тому

      @@rotemlv oh yeah, I completely forgot about the fact that it would have errors like this as the numbers get larger.

  • @pubringjuandelacruz
    @pubringjuandelacruz Рік тому +9

    thank you for creating videos like this. I've been in the industry for a couple of decades now and still learning new things and approaches. more power to you sir. and looking forward to more videos.

    • @CapeSkill
      @CapeSkill Рік тому

      @@aqfj5zy what are you waffling about, blud.

  • @JVLawnDarts
    @JVLawnDarts 6 місяців тому

    Watching your videos truly helped me better understand this for my final. Thanks brother, here’s hoping I can graduate

    • @TechWithNikola
      @TechWithNikola  6 місяців тому

      Good luck! I’m glad my videos were helpful

  • @JustSomeYTChannel557
    @JustSomeYTChannel557 Рік тому +1

    A little late to the party, but I've got a question:
    In the bottom-up solution of the minimum coins problem (13:00) you iterate over every number in range(1, m + 1).
    I just wanted to confirm that the bottom-up solution can actually be less efficient in cases where a lot of the intermediate sums are unreachable?
    For example, if m=25 and the coins = { 5 }.
    The top-down solution will simply go 25 > 20 > 15 > 10 > 5 > 0 > finish.
    The bottom-up solution will iterate from 1 to 25 (25 times), and for anything that's not a multiple of 5 it will simply `continue`.
    I think it's an important distinction, as the complexity can matter here a lot.
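    For comparison, a sketch of the top-down version being described; with coins = [5] and m = 25 it only ever touches the states 25, 20, 15, 10, 5 and 0:
    import math

    def min_coins_td(m, coins, memo=None):
        if memo is None:
            memo = {}
        if m == 0:
            return 0
        if m < 0:
            return math.inf                  # unreachable branch
        if m not in memo:
            memo[m] = 1 + min(min_coins_td(m - c, coins, memo) for c in coins)
        return memo[m]

    print(min_coins_td(25, [5]))  # 5, after visiting only six states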

    • @TechWithNikola
      @TechWithNikola  Рік тому +2

      Yes, that's technically true. If you have one coin you could also just divide N by the coin value, if possible, to get the solution even quicker. However, most sums are possible with very few coins. While you can optimize for the corner case, other (possibly more common) cases would run slower. At the end of the day, the important question is what's the common scenario that you should be optimizing for, and that's often answered by the business needs.

  • @maheshchoudary8763
    @maheshchoudary8763 Рік тому

    Without this video I wouldn't have gotten the power of both recursion and memorization. Thanks

    • @TechWithNikola
      @TechWithNikola  Рік тому

      That’s great to hear. You’re welcome!

    • @gJonii
      @gJonii Рік тому +2

      Memoization you mean?

  • @priyankthakkar6597
    @priyankthakkar6597 5 місяців тому

    Wow.... A great video to start with dynamic programming. It is like the brain is forced to think after a long time.

  • @madhavanand756
    @madhavanand756 8 місяців тому +1

    This is a masterpiece; we want more content like this.

  • @littledragonwolf
    @littledragonwolf 10 місяців тому

    Very good informative video. Thank you.
    The third problem can also be solved analytically: (x+y-2) choose (x-1).

    • @TechWithNikola
      @TechWithNikola  9 місяців тому

      You're welcome, and thank you for the comment. Yup, the third problem can be solved with simple combinatorics. It's still useful to learn DP in that way, because the problem can easily change to e.g. have obstacles in the board, and combinatorics don't work all of a sudden. :)

  • @narasimhasudhakar2470
    @narasimhasudhakar2470 9 місяців тому

    You hit this one out of the park... please, please keep teaching...

    • @TechWithNikola
      @TechWithNikola  8 місяців тому +1

      Glad you’ve liked it. I will. Comments like yours always remind me that it’s worth making more videos

  • @logicmatthewlearning
    @logicmatthewlearning 3 місяці тому +1

    Fun fact: the maze problem is just choosing, out of (m-1) + (n-1) moves, when to move down and when to move right, so the answer is the binomial coefficient m+n-2 choose n-1, or m+n-2 choose m-1, which is the same ;)

  • @SRI-PRIYAN
    @SRI-PRIYAN 11 місяців тому +3

    15:22 If you think for some time, you can arrive at the formula C(N + M - 2, N - 1) where C is the combinatorics function.
    Explanation:
    1. The Rabbit has to take N - 1 steps down and M - 1 steps right to reach the bottom right cell.
    2. So a total of (N - 1) + (M - 1) = N + M - 2 steps
    3. From these N + M - 2 steps, we can "Choose" N - 1 steps that will be the downward steps. The remaining steps will be towards the right. Hence the formula C(N + M - 2, N - 1)
    4. By symmetry, C(N + M - 2, M - 1) is also correct and will give the same answer.
    Note: C(n, r) = C(n, n - r)

    • @TechWithNikola
      @TechWithNikola  11 місяців тому

      Thank you for taking the time to write this. Yes, that’s correct. There have been a few mentions of this and I wrote the comment explaining how to do it. Maybe I should pin that given that almost all problems in this video have an alternative solution.

  • @AbdelhakRazi
    @AbdelhakRazi 4 місяці тому

    this was quite a unique way to explain dynamic programming !

  • @Divinxu
    @Divinxu 4 місяці тому

    Great explanation, and the youtube comments helped it sink in some more, so thanks everyone!

  • @________.pathfinder
    @________.pathfinder Рік тому

    Amazing. I think everyone should check this out first before starting to learn DSA.

  • @ride_like_bat
    @ride_like_bat Рік тому

    Thanks, I have recently started learning DP. Your illustrations helped me.

  • @pet.me102
    @pet.me102 6 місяців тому

    Not gonna lie, I have watched a few videos about dynamic programming, but your video is the one I love the most and actually understand, because it has examples lol. Keep up the good work bro, I already subscribed.

  • @Gemma-e5b
    @Gemma-e5b 11 місяців тому

    The smallest act of kindness is worth more than the grandest intention.

  • @Clark-u7m
    @Clark-u7m 11 місяців тому

    Dan took the deep dive down the rabbit hole.

  • @entity5678
    @entity5678 4 місяці тому

    best video on DP so far

  • @Jamie-q4p
    @Jamie-q4p 11 місяців тому +1

    This world, after all our science and sciences, is still a miracle; wonderful, inscrutable, magical and more, to whosoever will think of it.

  • @davitjanashia9344
    @davitjanashia9344 2 місяці тому

    You have very good aesthetic taste. Your graphics are very cool 😎💪🏻

  • @ismailfateen
    @ismailfateen 8 місяців тому +1

    Some things I have the urge to say:
    Fibonacci can be solved in logarithmic time with binary matrix exponentiation; the minimum coins problem can be solved greedily in some cases; and the maze problem can actually be solved in linear time! It is just equal to (n + m - 2) choose (n - 1). One way to force the DP solution(s) [yes, there are 2!] is to add obstacles.
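    A sketch of the matrix-exponentiation trick mentioned here: [[1,1],[1,0]]^n contains F(n), and the power is computed by repeated squaring in O(log n) multiplications:
    def mat_mul(a, b):
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

    def fib_log(n):
        result = [[1, 0], [0, 1]]            # identity matrix
        base = [[1, 1], [1, 0]]
        while n > 0:
            if n & 1:
                result = mat_mul(result, base)
            base = mat_mul(base, base)       # repeated squaring
            n >>= 1
        return result[0][1]                  # F(n), with F(1) = F(2) = 1

    print(fib_log(10))  # 55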

    • @TechWithNikola
      @TechWithNikola  7 місяців тому +1

      Yeah, that’s correct. I already mentioned these solutions in other comments. However, the goal here was to understand ideas in DP on simple problems, so I didn’t want to add extra information that may confuse newcomers.

  • @UliTroyo
    @UliTroyo Рік тому +1

    This is SUCH a good video. Thanks so much! I did the minimum coins problem in Nushell to try to get the concept to stick; it was fun.

  • @AP_Aman33
    @AP_Aman33 Рік тому +3

    Thanks for the video that was very well explained especially coin change problem animation 👍👍

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thanks. I’m so glad you’ve liked it! 😀

  • @leeamraa
    @leeamraa 7 місяців тому

    You are a gifted teacher!!! Great video ... please keep doing what you are doing.👍👍👍

    • @TechWithNikola
      @TechWithNikola  7 місяців тому +1

      Thanks a lot for the kind words. I really hope I’ll get more free time to make videos soon.

  • @yonathanashebir6324
    @yonathanashebir6324 7 місяців тому

    Great video! But has anybody else noticed that the answer to the maze problem is just (rows+cols)!/(rows!*cols!)? (Note: I am not saying all the solutions here should be optimal; in fact there are others which are not. They just have to utilize DP somehow!)

    • @TechWithNikola
      @TechWithNikola  7 місяців тому

      Thanks. Yeah, fibonacci can be more optimal as well, but my goal was to apply DP and learn the ideas on these simple problems, since starting with more complicated ones would make it harder for newcomers.

  • @commarchinin
    @commarchinin Рік тому

    Edit: wanted to be clear that I really enjoyed the video, which I did! Initially overlooked saying that while going down the nature-of-function-calls rabbit hole.
    Disclaimer: if the 'naïve' recursion approach is 'naïve', it's only so because the compiler you're using is trash. Good compilers do tail call elimination!
    After all, function calls having stack overhead is a high-level language implementation choice, not absolute truth. For example, Forth function calls (and recursion) have no stack effects.
    Fundamentally, a recursive call can always be transformed to a jump instruction rather than a call instruction. And if that doesn't happen in your toolchain, it is your compiler's fault and you should desire better.
    And I know a common response to this is, 'but... then my stack trace won't show recursive calls (looking at you, Cadence SKILL).' And no, it won't. But like, the stack trace doesn't show loops either, and we're just fine with it. It's really a non-issue.
    P.S.:
    Extract out memoization from the functions and make it a higher-order primitive! Kinda hurts that you're doing it by hand.
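    A sketch of that higher-order memoization idea; Python's standard library already ships one as functools.lru_cache, and a hand-rolled version is only a few lines:
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    # hand-rolled equivalent, to show there is no magic involved
    def memoize(f):
        cache = {}
        def wrapped(*args):
            if args not in cache:
                cache[args] = f(*args)
            return cache[args]
        return wrapped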

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thanks, I'm glad you've enjoyed the video.
      Regarding the tail recursion part, I don't think your statement is entirely accurate. I wouldn't say that function calls having stack overhead is a language implementation choice. You can only apply tail-call optimization if the recursive call is the last instruction in the function.
      For example, what would the assembly instructions look like for the following code (some arbitrary function)?
      int f(int n) {
          if (n <= 0) {
              return 0;
          }
          int k = 3 + f(n - 1) * 2;
          return k + f(n - 2);
      }
      My claim is that the f(n-1) call cannot be optimized as a jump instruction. It would have to be a call instruction because the program would have to remember the next instruction to process, and the state of local variables of f.
      Am I missing something?

  • @mikmad97
    @mikmad97 Рік тому +30

    Hey, Nikola!
    This was a really useful video, as we're currently going through DP on my Master's. It's such a difficult paradigm to get used to, but your video really helped understanding the paradigm, and hopefully it'll help my future self learning the DP paradigm :)
    Also, any idea when the next part is coming out? :D

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Hello Mikkel, I'm glad you've enjoyed it. Indeed, it takes time to get used to the paradigm, but it will happen - just keep practicing. I don't have the exact date for part 2, but a possible timeline is in a month or two. Do you have any suggestions on what you'd like to see in part 2?

    • @DanielTysonM
      @DanielTysonM Рік тому

      I would like to know whether DP is restricted to counting and shortest path problems. I would also like to know how one can solve for an optimal order, if that makes sense? 😊

    • @mikmad97
      @mikmad97 Рік тому +1

      @@TechWithNikola I noticed you didn't do much recursion here, and I'd like to see recursion done using the DP paradigm (I noticed another comment saying something along the same lines). Other than that, no, I don't have any specific topics as of now that I'd like to see.
      I might post a new comment as my Master's course progresses, but for now recursion done via DP is the only thing :)

  • @williamnilsson4344
    @williamnilsson4344 5 місяців тому

    I understood the concept but I wish I could follow along with your coding of the solutions, it was hard coming from Java

  • @kfawell
    @kfawell 6 місяців тому

    That was a well produced video. Of course he pointed out the important concept of dependencies. I am curious, why did you choose to have arrows pointing the opposite way to convention? To be clear, the conventional way is to point towards the dependencies.

    • @TechWithNikola
      @TechWithNikola  6 місяців тому +1

      Thank you :-)
      If you are referring to the topological sort order part, my reasoning was that sorting the graph in the topological order means that node u comes before node v if there is an edge from u to v.
      I probably used this article at the time:
      en.m.wikipedia.org/wiki/Topological_sorting

  • @hyqhyp
    @hyqhyp Рік тому +3

    Part 2 please!

    • @TechWithNikola
      @TechWithNikola  9 місяців тому

      Part 2 is here: ua-cam.com/video/rE5h11FwiVw/v-deo.htmlsi=_dT9c6EOSpU_xa4A
      It's not as popular as part 1, but I do hope that it matches your expectations. Let me know! Any feedback is welcome.

  • @rorauser
    @rorauser Рік тому +1

    This was extremely useful and is one of the better dynamic programming explanations I've seen.

  • @diegokarabin2912
    @diegokarabin2912 11 місяців тому

    Please make part 2!! This video was amazing. It helped me understand dynamic programming.

    • @TechWithNikola
      @TechWithNikola  11 місяців тому

      Thank you taking the time to comment. I'm glad to hear that you've found it useful.
      I have just published the part 2: ua-cam.com/video/rE5h11FwiVw/v-deo.html
      Looking forward to hearing what you think about it. I've tried to keep the same style, but I was moving forward quicker this time, and I don't know if that's appropriate - any feedback is welcome.

  • @jonaskoelker
    @jonaskoelker Рік тому

    There's one insight about bottom-up vs. top-down dynamic programming which I've had recently.
    To be concrete, let's say the top-down solution builds a memoization table by having a recursive function first check whether the answer to the current call is memoized, then compute and store it if not. The computation is done by the function recursively calling itself with smaller arguments.
    On the other hand, a bottom-up solution builds the same table by calling the function with the smallest arguments first, then larger ones, in such a way that the function is never called twice with the same arguments, and when the function is called the table never has the answer already (for the arguments of that call).
    I would generally expect the bottom-up solution to be faster: the overhead of checking whether the table is full has been removed, as have the redundant memoized recursive calls; they are replaced by an unconditional table lookup. The major exception is if the top-down solution can fill out the table sparsely: then it can skip the work needed to fill out unnecessary table entries, and automatically does so. [But if that's possible, can't the bottom-up approach do the same, after a bit of thinking?]

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Great summary. I agree with all your points.
      > But if that's possible, can't the bottom-up approach do the same, after a bit of thinking?
      It's difficult to track which problem to solve next with the bottom-up approach. We need a topological sort of the problems, which may be simple for some problems, but hard to generalize. If the problem space is sparse, then there's a good chance that we can solve the problem without DP.

    • @jonaskoelker
      @jonaskoelker Рік тому

      ​@@TechWithNikola > Great summary
      Thanks.
      > It's difficult to track which problem to solve next with the bottom-up approach. We need a Topological sort [...].
      Yes, that's a good point! The bottom-up approach walks the argument/subproblem space in topological order, from sink-most to source-most.
      The top-down approach simply walks all the out-edges from the current grid cell. Checking for cached results enables us to ignore the topological ordering and discover enough information about it on the go.
      > If the problem space is sparse, then there's a good chance that we can solve the problem without DP.
      Hm. Interesting. Hm. This is not obvious to me. Can you illustrate this principle with a few examples?

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      ​@@jonaskoelker
      > Can you illustrate this principle with a few examples?
      I'm just saying this from intuition, so take it with a grain of salt. I can't provide an example to generalize my statement because every problem is unique. The main reason is that I find it hard to imagine an interesting DP-like problem with the sparse solution space. It would be easier to think about this if you can find a problem, then we can try to solve it and maybe draw some conclusions.
      I attempted to phrase some of these problems so that the solution space is sparse, but I failed to come up with anything interesting. E.g. the minimum coin problem with sparse solution space would be something like "coin values are very large". Then instead of iterating over each solution with a for loop, we would compute possible solutions and add them to the queue (like a BFS algorithm maybe?). But this is just a direct way of coming up with the topological ordering that is specific to the problem.
      Sorry but I don't have a better answer at this point. I'll comment if I can think of anything interesting.

    • @jonaskoelker
      @jonaskoelker Рік тому +1

      @@TechWithNikola I see.
      I figure the applicability of DP has to do with reusing solutions to subproblems. Of course, just because a DP solution exists doesn't mean a non-DP solution doesn't exist.
      I guess in some sense there is _always_ (or almost always) a non-DP solution: brute force.
      So maybe the thing to look for is the kind of problem where DP works but is not the simplest solution.
      A leetcode example comes to mind: sliding window maximum. Given an array A of length n and a window size k, return `[max(A[i:i+k]) for i in range(approximately n - k + 1)]`.
      Given the maximum across a window of length (k-1) we can easily compute the maximum across a window of size k if it contains the smaller window. So DP applies.
      But it takes quadratic space (and thus time), and there is a solution which only takes linear space and time. (My first such solution involves rot13(pnegrvna gerrf).)

    • @jonaskoelker
      @jonaskoelker Рік тому

      Oh, I should add: the sliding window maximum problem is relatively easy because we're computing the maximum for all windows simultaneously.
      There's a highly fancy data structure that uses linear space and lets us compute range maxima (or minima) in O(1) time per query, without the queries being batched.
      Erik DeMaine talks about it in one of his "Advanced Data Structures" lectures on OCW. It's on youtube. He talks in terms of RMQ: range _minimum_ queries. It turns out to relate to least common ancestor queries, so maybe looking for a lecture on trees will help you find it.
      [But no one will independently rediscover this in the context of a leet code problem. The batched window maximum problems seems of appropriate difficulty relative to the kind of leet code problems I'm familiar with.]

  • @Travelophile
    @Travelophile 7 місяців тому

    best dynamic programming video i have ever seen

  • @begula_chan
    @begula_chan 8 місяців тому

    Thank you very much! It helped me understand basic concepts of DP a lot! You're a very cool guy.

    • @TechWithNikola
      @TechWithNikola  7 місяців тому

      I’m glad to hear that. Thank you for the kind words

  • @dozie_001
    @dozie_001 Рік тому +1

    Thanks so much. Will appreciate a part 2. I just subscribed!

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Welcome aboard! Next video will be dynamic programming part 2. Apologies for the delay, I've been sick and didn't have enough time to make a new video.

  • @kevintonbong3838
    @kevintonbong3838 10 місяців тому

    Thank you for this useful and comprehensive way of explaining the concept of dynamic programming. That's a really good video; I watched it many times.

    • @TechWithNikola
      @TechWithNikola  9 місяців тому +1

      You're welcome Kevin. Thank you for taking the time to comment. I really appreciate it!

  • @Emily-f4h
    @Emily-f4h 11 місяців тому

    It's important to know that words don't move mountains. Work, exacting work moves mountains.

  • @uuuummm9
    @uuuummm9 Рік тому

    I am going to try to solve my first dynamic programming problem on leetcode after this video. Let's see how it goes! 😄

    • @TechWithNikola
      @TechWithNikola  Рік тому

      Nice :-) Good luck. Let me know how it goes!

    • @uuuummm9
      @uuuummm9 Рік тому

      @@TechWithNikola it was actually a two-pointers problem 😆

  • @ronaldmaldis4394
    @ronaldmaldis4394 10 місяців тому

    I'd been searching all over YouTube for a video like this until I found yours just now.
    Thank you buddy.
    Keep it up ♥

    • @TechWithNikola
      @TechWithNikola  9 місяців тому

      You're welcome :) Glad to hear you've enjoyed it.

  • @Tuaregoxffff
    @Tuaregoxffff 5 місяців тому +1

    Your memoization Fibonacci code is not optimal. The `memo` dict will be of size `n`, while a two-element array is all one needs.

  • @shloksuman8164
    @shloksuman8164 Рік тому

    really crystal clear explanation on dynamic programming! came here after your git video.
    Subscribed!!

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thanks a lot. I’m very happy to hear that you’ve liked both videos!

  • @harikirankante883
    @harikirankante883 Рік тому +3

    I would love to see more such quality content on YT ❤❤

  • @aayushtheapple
    @aayushtheapple 10 місяців тому

    Great video! starting my DP journey from this one.

    • @TechWithNikola
      @TechWithNikola  10 місяців тому

      Thank you! Good luck. Feel free to ask here if you need any help :)

  • @AlexanderBukh
    @AlexanderBukh 4 місяці тому

    The Beltmatic game made me look for this video. I'm familiar with the basic concepts and had already written an almost naive Python script, but it is slow: factoring an 8-digit prime took about 4 minutes on a single thread. This video gave me more ideas to improve it.

  • @legend644
    @legend644 Рік тому

    Cool thing about the maze problem I just figured out:
    You can imagine the path as a set of vectors, each either right or down.
    Because the sum of vectors is blind to order, just like normal addition, every solution is simply a specific order for the vectors to be in.
    Basically, we can start a loop at max(n, m), building a product up to (n-1)*(m-1) (as that would be the size of our set).
    Then simply divide by the min.
    This is the high-school formula n!/(a!b!) in action, because orderings of the rights/downs relative to each other still count as the same solution.

    • @TechWithNikola
      @TechWithNikola  Рік тому

      Hello. Yeah, that idea seems right (i haven't checked the formula though). It's definitely possible to solve this using simple combinatorics. You should be able to compute the number of ways to get to the bottom-right corner by rephrasing the problem as in how many ways can we choose [M-1] right (or [N-1] down) turns for the rabbit out of (N+M-2) turns, then use en.wikipedia.org/wiki/Binomial_coefficient formula to compute the solution: ([N+M-2] choose [M-1]). For example, the bigger grid in the video can be computed as (92 choose 18) (see www.wolframalpha.com/input?i=92+choose+18)

    • @legend644
      @legend644 Рік тому

      @@TechWithNikola thanks! I completely forgot the formula has a name

  • @MauroLima-b4n
    @MauroLima-b4n Рік тому

    Hope this channel grows a lot.
    I'm only taking my first steps in coding, but I know this is gold.

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thank you!
      Consider sharing this channel with your friends. It would help with the growth a lot 😊

  • @igorbeaver4692
    @igorbeaver4692 Рік тому

    Really looking forward to the 2nd part!

  • @brokecoder
    @brokecoder 6 місяців тому

    I think the only way to get good at DP is to start thinking about the problems you tend to solve without DP from the angle of DP.

  • @tarekjrd75
    @tarekjrd75 Рік тому +1

    Thank you for this awesome content. So in a nutshell, by saying "dynamic programming", do we mean benefiting from computer storage or memory capacity to solve hard problems (NP-hard) that are too complex to solve in terms of time?

    • @TechWithNikola
      @TechWithNikola  Рік тому

      You're welcome!
      You could maybe look at it like that, but I wouldn't use that exact phrasing to describe DP. I think the general idea when we say DP is "solve small independent problems and use them to solve bigger problems". You're right that we are using memory to store the solution for each sub-problem, and we have to (with the exception of solutions that are not required anymore).

  • @apolonn
    @apolonn 2 місяці тому

    If I'm not wrong, I can imagine that you can solve the maze problem with a binomial coefficient: using a×b as the grid and m×n as the goal, the solution should be the binomial coefficient of (a-m; b-n).

  • @yzr5089
    @yzr5089 Рік тому +1

    Hey, I almost never comment, but your video is really amazing. After watching it and refreshing my brain about DP, I managed to solve a new medium DP question on Leetcode without help!! (DP is one of my weaknesses lol)
    I’m looking forward to watching your future video on harder DP questions!

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Hey, thanks a lot for taking the time to comment. I’m glad you’ve found it useful!

    • @jst8922
      @jst8922 10 місяців тому

      Which Leetcode problem was it?

  • @garvgupta8030
    @garvgupta8030 Рік тому

    Absolutely brilliant. Can you share some resources regarding this video

    • @TechWithNikola
      @TechWithNikola  Рік тому

      Thank you. Sure, I’ve used some ideas from:
      - cses.fi/book/book.pdf
      - Also, dms.rs/wp-content/uploads/2016/12/Uvod-u-dinamicko-programiranje.pdf (unfortunately this is not in English)

    • @garvgupta8030
      @garvgupta8030 Рік тому

      @@TechWithNikola Thank you so much💯

  • @anikettiratkar
    @anikettiratkar Рік тому

    Great explanation and visuals! I got very high hopes for this channel.

  • @manibhushankumarsingh5196
    @manibhushankumarsingh5196 6 місяців тому

    00:04 Dynamic programming combines the correctness of brute force and the efficiency of greedy algorithms
    02:15 The Fibonacci function is slow with a time complexity of the golden ratio to the nth power.
    04:23 Dynamic programming is a technique to improve the efficiency of recursive algorithms by remembering previously computed values.
    06:30 Dynamic programming problems can be solved from the bottom up for efficiency
    08:45 Using dynamic programming to solve the minimum number of coins problem
    10:58 Dynamic programming can be used to find the minimum number of coins needed to reach a target sum.
    13:12 The bottom-up approach is preferred due to lower constant factor and shorter length.
    15:36 The rabbit can reach the bottom right corner of a grid by only moving down or right.
    17:37 Mastering Dynamic Programming

  • @robbybobbyhobbies
    @robbybobbyhobbies Рік тому

    Fascinating, looking forward to future dynamic programming videos.

  • @N9TheNoob
    @N9TheNoob Рік тому

    what a blessing of a channel, thanks!

  • @flippert0
    @flippert0 10 місяців тому

    Very clear presentation! One question: doesn't DP also involve backtracking? Will you tackle this in a later presentation?

    • @TechWithNikola
      @TechWithNikola  9 місяців тому

      Thank you. Do you have some examples of what you mean?
      DP is a kind of brute-force because it solves every subproblem, but the key principle is that the subproblems are overlapping. On the other hand, backtracking is also similar to brute-force, but the idea there is to try and prune a lot of potential states as early as possible. So, while they are similar, I wouldn't say DP involves backtracking if I'm trying to be precise.

  • @jilherme
    @jilherme 10 місяців тому

    very easy to follow and entertaining!!

  • @JavierFausLlopis
    @JavierFausLlopis 10 місяців тому

    Great explanation, thanks a lot for sharing. For us educators this is at least inspirational.

    • @TechWithNikola
      @TechWithNikola  10 місяців тому

      Thank you for the comment. I'm glad you like the content 😀

  • @Ajqualix
    @Ajqualix 4 місяці тому

    Although the Python interpreter does not implement tail call optimization (TCO), it is generally considered good practice in recursive implementations.

  • @tahawahiddaw6259
    @tahawahiddaw6259 5 місяців тому

    Every second of the video is worth watching. Please keep making such videos, and thank you so much for explaining DP in such a remarkable way.

  • @khaledhamoul1403
    @khaledhamoul1403 11 місяців тому

    Awesome explanation, thank you very much

    • @TechWithNikola
      @TechWithNikola  11 місяців тому +1

      Thank you. Glad you’ve enjoyed it!

  • @nicolaslupi3111
    @nicolaslupi3111 Рік тому

    Gold. Can't wait for the second part

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thanks. What would you like to see in part 2?
      Some options:
      - How to reconstruct the path that leads to the optimal solution (e.g. which coins to choose for the minimum number of coins)
      - more advanced dp problems
      - anything else
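      For the first option above, a sketch of how that reconstruction might look for the minimum-coins problem: store which coin produced each optimum, then walk backwards (a generic sketch, not the video's code):
      def min_coins_with_choice(m, coins):
          best = [0] + [None] * m              # best[i] = min coins to make sum i
          choice = [None] * (m + 1)            # choice[i] = coin used at the optimum for i
          for i in range(1, m + 1):
              for c in coins:
                  if c <= i and best[i - c] is not None:
                      if best[i] is None or best[i - c] + 1 < best[i]:
                          best[i] = best[i - c] + 1
                          choice[i] = c
          picked = []
          i = m
          while i > 0 and choice[i] is not None:   # walk the choices back to 0
              picked.append(choice[i])
              i -= choice[i]
          return best[m], picked

      print(min_coins_with_choice(13, [1, 4, 5]))  # (3, [4, 4, 5])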

    • @nicolaslupi3111
      @nicolaslupi3111 Рік тому

      More advanced problems would be awesome. For example I was trying to solve a problem where you need to find the shortest path from one corner of a grid to another. Some cells have walls and you can only cross one of them (or k in the general case). Also you can only move up, down, left or right. Your video helped me a lot trying to solve this using DP but I'm still having trouble, specially trying to think the bottom up approach.

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      @@nicolaslupi3111 thanks for the suggestion. I'm glad the video helped. I'll think of some advanced problems to cover in the next video. I may continue this series one problem per part from now on, to get videos out sooner.

  • @Website_TV_1
    @Website_TV_1 Місяць тому

    great, thanks for your lesson

  • @musicD.looped
    @musicD.looped Рік тому

    Great video, very precise, concise and explanatory.
    waiting for more, keep it up!!!

  • @Elaine-d4v
    @Elaine-d4v 11 місяців тому

    Small opportunities are often the beginning of great enterprises.

  • @tim3.1415
    @tim3.1415 Рік тому

    Unbelievably well done video👍👍 you deserve many more subscribers!!

  • @potodds_trading
    @potodds_trading Рік тому

    Excellent presentation. Title says Part 1. Will there be more content?

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thanks. Yes, there will be part 2, but it takes a lot of time to make these videos, so I don't know the exact date yet.

  • @naveennayak8365
    @naveennayak8365 6 місяців тому

    Those who can't remember the past are condemned to repeat it.
    - Dynamic Programming.

  • @zendr0
    @zendr0 Рік тому +1

    Great video Nikola. I would love to see more of these in future.
    Once again, Thank you for this 😊

    • @TechWithNikola
      @TechWithNikola  Рік тому +1

      Thank you 🙂 You're welcome. Looking forward to hearing your thoughts on my future videos.

  • @TylerPerlman
    @TylerPerlman Рік тому +1

    As other people have mentioned, for some of the questions, there are ways to do direct mathematical computations without the use of Dynamic Programming, and I thought I would share such a method for the "How Many Ways?" coins problem. Specifically, I will utilize combinatorics (generating functions).
    One caveat: This method does not distinguish between the order in which the coins are picked. That being said, there would also be ways (albeit slightly more complicated) to do that with combinatorics as well.
    This is the problem of counting the number of partitions of a number n into "parts" of only certain sizes. If we say that the only sizes we allow (the values of each coin) are the elements a in some set A = {a1, a2, ... }, then the solution to this problem will be given by the coefficient in front of the x^n term of the power series expansion of the function:
    f(x) = product over all a in A of 1/(1-x^a) . This is equivalent to (1 + x^a1 + x^(2*a1) + ....)*.....*(1 + x^ak + x^2*ak + .....) where we can drop any terms where the power of x exceeds n.
    So, for example, in the case of coins of sizes 1, 4, and 5:
    A = {1, 4, 5}; n = 20
    f(x) = 1/((1-x)(1-x^4)(1-x^5)) = (1+x+x^2+...+x^20)*(1+x^4+x^8+x^12+x^16+x^20)*(1+x^5+x^10+x^15+x^20) = 1 + x + x^2 + x^3 + 2*x^4 + ... + 16*x^20 + ....
    This tells us that there are 16 distinct ways of partitioning 20 cents into coins of sizes 1, 4, 5 as long as we don't care about order.
    In practice, naively using an FFT-based algorithm to actually expand the polynomials, the multiplication of K polynomials with total degree D is O(D*log(D)*log(K)). In this specific problem, K is the number of coins and D is given by n + floor(n/a1) + floor(n/a2) + ... + floor(n/ak). If for some reason you had coins of every size, an upper bound for this could be as bad as
    O(N^2*log(N)^2), but for small numbers of coins, this method could be beneficial.
    A better way in practice could potentially be to use an automatic differentiation algorithm to compute the n'th derivative of the above function and then simply evaluate this at zero and divide by n!, but I do not know enough about such algorithms to actually give the time complexity reliably.
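    A sketch of that generating-function computation done with plain truncated polynomial multiplication (no FFT), which already confirms the coefficient 16 at x^20:
    def partitions_with_parts(n, parts):
        poly = [1] + [0] * n                     # running product, coefficients up to x^n
        for a in parts:
            new = [0] * (n + 1)
            for i, ci in enumerate(poly):
                if ci:
                    for j in range(0, n + 1 - i, a):   # multiply by 1 + x^a + x^(2a) + ...
                        new[i + j] += ci
            poly = new
        return poly[n]

    print(partitions_with_parts(20, [1, 4, 5]))  # 16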

    • @TechWithNikola
      @TechWithNikola  Рік тому +2

      That’s very interesting. It’s always nice to learn about new ways to solve a problem, so thank you for taking the time to write this in a comment. It’s appreciated!

  • @ArchonLicht
    @ArchonLicht Рік тому

    Very nice and clear video. Thank you!