Sunday, May 04, 2008

More Coin Changers in Scala - Recursive, Memoized and Dynamic

In my last post I implemented the greedy algorithm for the coin changer in Scala. While the greedy algorithm yields optimal solutions for some sets of coin denominations, it does not give the optimal solution for all of them. In fact, the coin changing problem is a well discussed example of dynamic programming: it exhibits the optimal substructure property, where the optimal solution to the problem can be built from optimal solutions of smaller subproblems. As a refresher course in basic algorithms, and to keep up the spirit of my learning Scala, I thought it might be a good idea to implement this in Scala as well.
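To see where greedy falls short, take denominations 1, 3 and 4 and try to change 6. A largest-coin-first pass, like the throwaway sketch below (not the implementation from the last post), picks 4 + 1 + 1, three coins, while the optimal answer is 3 + 3, two coins.

def greedyChange(denominations: List[Int], amount: Int): List[Int] =
  denominations.sortWith(_ > _).foldLeft((amount, List[Int]())) { (acc, d) =>
    val (remaining, coins) = acc
    // take as many coins of the current (largest remaining) denomination as possible
    (remaining % d, coins ::: List.fill(remaining / d)(d))
  }._2

greedyChange(List(1, 3, 4), 6)   // List(4, 1, 1): three coins, though List(3, 3) would do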

One approach to solving the problem is to define the subproblems through a recurrence relation (the minimum number of coins for amount 0 is 0, and for any amount a it is 1 plus the minimum, over all denominations d not exceeding a, of the answer for a - d) and to implement a top down recursive algorithm like the following:


object CoinChanger {
  def changeRecursive(denominations: List[Int], amount: Int): (Int, List[Int]) = amount match {
    case 0 => (0, List())
    case a =>
      // fold over the denominations, carrying a triple of
      // (best coin count so far, change for the remaining amount, denomination picked at this level)
      val ch = ((java.lang.Integer.MAX_VALUE, List[Int](), 0) /: denominations) {
        (x, y) =>
          if (amount >= y) {
            lazy val ret = changeRecursive(denominations, amount - y)
            if (ret._1 + 1 < x._1) {
              (ret._1 + 1, ret._2, y)
            } else x
          } else x
      }
      // prepend the picked denomination to the change computed for the rest
      (ch._1, ch._3 :: ch._2)
  }
}
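As a quick sanity check, the denominations from the greedy counterexample above now come out right:

CoinChanger.changeRecursive(List(1, 3, 4), 6)   // (2, List(3, 3)): two coins, where greedy used three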



It is a top down approach to solving the problem, but it has the obvious drawback of naive recursive implementations: the same subproblems are solved over and over again, so the running time for an amount M with d denominations grows exponentially with M.
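One way to see the blow-up is to count how often each sub-amount is visited. The helper below is a throwaway sketch (not part of the changer itself) that just replays the recursion structure of changeRecursive and tallies the visits; for denominations 1, 3 and 4 and an amount of only 20, small sub-amounts like 1 and 2 are already visited a few thousand times each, even though each has a single optimal answer.

import scala.collection.mutable

def countVisits(denominations: List[Int], amount: Int): Map[Int, Int] = {
  val visits = mutable.Map[Int, Int]().withDefaultValue(0)
  def go(a: Int): Unit = {
    visits(a) += 1
    // mirror changeRecursive: recurse once for every denomination that fits
    denominations.filter(_ <= a).foreach(d => go(a - d))
  }
  go(amount)
  visits.toMap
}

countVisits(List(1, 3, 4), 20)   // sub-amount 1 alone is visited a few thousand times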

Using memoization, we can speed up the implementation significantly while keeping the same top down approach. The following implementation uses the memo functions from Scalaz:


object CoinChanger {
  def changeRecursiveMemoized(denominations: List[Int], amount: Int): (Int, List[Int]) = {
    // arrayMemo comes from Scalaz's memo support; it memoizes an Int => A
    // function over an array of the given size
    val m = arrayMemo[(Int, List[Int])](amount)
    def change(a: Int): (Int, List[Int]) = a match {
      case 0 => (0, List())
      case x =>
        val ch = ((java.lang.Integer.MAX_VALUE, List[Int](), 0) /: denominations) {
          (x, y) =>
            if (a >= y) {
              lazy val ret = m(change)(a - y)
              if (ret._1 + 1 < x._1) {
                (ret._1 + 1, ret._2, y)
              } else x
            } else x
        }
        (ch._1, ch._3 :: ch._2)
    }
    change(amount)
  }
}
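Assuming arrayMemo is brought into scope from Scalaz (with current Scalaz that is something like import scalaz.Memo.arrayMemo, though the package layout has moved around between versions), the memoized version behaves exactly like the naive one, just without recomputing subproblems:

CoinChanger.changeRecursiveMemoized(List(1, 3, 4), 6)     // (2, List(3, 3))
CoinChanger.changeRecursiveMemoized(List(25, 10, 1), 63)  // (6, List(25, 25, 10, 1, 1, 1))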



Problems that exhibit the optimal substructure property have efficient dynamic programming solutions. However, the subproblems have to be solved first, so that their precomputed results can be used in solving the bigger problem. This implies a bottom up approach instead of the top down approach that the above recursive algorithm implements. The following algorithm solves an apparently harder problem: finding the optimal combination of change for every amount from 1 up to the specified input amount. With this dynamic programming formulation we get the same result, but the reshaped problem uses the precomputed values of the smaller subproblems directly and deterministically, and the algorithm has a complexity of O(Md) for computing the optimal combination for amount M with d denominations:


object DPCoinChanger {
  def dpChange(denominations: List[Int], amount: Int) = {
    // bestNumCoins(a) holds (number of coins, the coins themselves) for amount a
    val bestNumCoins = new Array[(Int, List[Int])](amount + 1)
    bestNumCoins(0) = (0, List[Int]())
    List.range(1, amount + 1).foreach{ a =>
      bestNumCoins(a) = (java.lang.Integer.MAX_VALUE, List())
      denominations.foreach{ d =>
        if (a >= d) {
          if (bestNumCoins(a - d)._1 + 1 < bestNumCoins(a)._1) {
            bestNumCoins(a) = (bestNumCoins(a - d)._1, d :: bestNumCoins(a - d)._2)
          }
        }
      }
    }
    bestNumCoins(amount)
  }
}



Implementing well known algorithms is also a nice way to learn a programming language. And in the course of this learning process, be sure to stick to the idioms that the language shines in. My initial implementations of the recursive algorithms had some mutable variables, which is kind of expected coming from someone thriving on a diet of Java (not that there is anything wrong with it). Thinking functionally is one of the newer paradigms that I am learning these days.

4 comments:

Anonymous said...

One of the weird/cool things about Scala is that even when you're writing functional code, you don't really write it with all of the pure-functional idioms. I mean, you still use cons, pattern matching and friends, but you're not constrained to value-only expressions and similar. This flexibility tends to yield a style which is neither functional nor strictly imperative, but some weird hybrid of the two.

Unknown said...

Yes, the convenience factor that multi-paradigm languages offer. In Scala, sometimes the syntax gets in the way. But overall I find it a cool language.

Anonymous said...

I know this is a very old post, but I believe the dynamic example has a bug. It should read:


bestNumCoins(a) = (bestNumCoins(a - d)._1 + 1, d :: bestNumCoins(a - d)._2)

Unknown said...

This post is even older now, but I think there is another bug here

if (bestNumCoins(a - d)._1 + 1 < bestNumCoins(a)._1)

it should be if (bestNumCoins(a - d)._1 < bestNumCoins(a)._1)
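Folding both of those fixes into the bottom-up version gives a sketch like the following; the +1 moves into the stored count, and dropping the +1 from the comparison also keeps the Integer.MAX_VALUE marker from overflowing when a sub-amount cannot be changed at all:

def dpChange(denominations: List[Int], amount: Int): (Int, List[Int]) = {
  val bestNumCoins = new Array[(Int, List[Int])](amount + 1)
  bestNumCoins(0) = (0, List[Int]())
  List.range(1, amount + 1).foreach{ a =>
    bestNumCoins(a) = (java.lang.Integer.MAX_VALUE, List())
    denominations.foreach{ d =>
      // compare counts without the +1, store the count with the +1
      if (a >= d && bestNumCoins(a - d)._1 < bestNumCoins(a)._1) {
        bestNumCoins(a) = (bestNumCoins(a - d)._1 + 1, d :: bestNumCoins(a - d)._2)
      }
    }
  }
  bestNumCoins(amount)
}

dpChange(List(1, 3, 4), 6)   // (2, List(3, 3))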