This post explores basic applications of greedy and dynamic algorithms. Greedy algorithms make locally optimal choices at each step to find a solution, hoping these choices will lead to a globally optimal solution to a problem. On the other hand, dynamic programming algorithms break a problem into smaller overlapping subproblems and solve each subproblem only once, storing the solutions to avoid redundant computations. The difference lies in their approach to subproblems; greedy algorithms make choices based on the current best option, while dynamic programming algorithms systematically solve and store solutions to subproblems for efficient overall problem-solving.
This blog post covers:
- an in-depth breakdown of greedy algorithms + corresponding examples,
- a thorough analysis of dynamic programming algorithms + related examples, and
- the trade-offs between the two approaches and their basic usage in computational finance.
Greedy algorithms
Greedy algorithms make locally optimal choices at each step to (hopefully) achieve a global optimum.
Note that a greedy strategy does not revisit or combine subproblem solutions: the solution is built step by step by making a series of locally optimal choices. For many problems, these local choices do not guarantee a globally optimal solution, so greedy algorithms only work when the problem has the right structure. Two classic examples of problems where a greedy algorithm is optimal are interval scheduling and Huffman coding.
Interval scheduling
Selecting non-overlapping intervals to maximize the number of scheduled activities.
In interval scheduling, the goal is to select the maximum number of non-overlapping intervals from a given set. Here is a simple example.
Example
Suppose you have a set of activities, each represented by a pair of start and end times. The goal is to select the maximum number of non-overlapping activities.
In this example, the intervals are:
- (1, 3)
- (2, 4)
- (3, 6)
- (5, 7)
- (8, 9)
The algorithm sorts the intervals based on their end times and then iterates through them, selecting non-overlapping intervals. The selected intervals in this case would be:
- (1, 3)
- (5, 7)
- (8, 9)
These intervals do not overlap, and it is impossible to select more non-overlapping intervals. The output of the program would be:
Selected Intervals: [(1, 3), (5, 7), (8, 9)]
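One minimal way to implement this greedy selection in Python is sketched below; it sorts by end time and treats intervals that merely touch at an endpoint as overlapping, which reproduces the selection above.

```python
def select_intervals(intervals):
    """Greedy interval scheduling: sort by end time, keep each interval
    that starts after the last selected one ends."""
    selected = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start > last_end:  # strict '>' treats touching intervals as overlapping
            selected.append((start, end))
            last_end = end
    return selected

intervals = [(1, 3), (2, 4), (3, 6), (5, 7), (8, 9)]
print("Selected Intervals:", select_intervals(intervals))
# Selected Intervals: [(1, 3), (5, 7), (8, 9)]
```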
Interval scheduling helps schedule tasks, where each interval represents a task’s start and end time, and the goal is to maximize the number of tasks that can be completed without overlapping.
Huffman coding
Building an optimal prefix-free binary tree for data compression.
Huffman coding is a variable-length encoding algorithm that efficiently represents data by assigning shorter codes to more frequently occurring symbols, reducing the overall amount of information needed for storage or transmission.
Huffman coding is like making a secret code for words, where we use shorter secret messages for words we say a lot, and longer secret messages for words we say less often, so we can save space and talk faster.
Example
Suppose we want to encode the message “ABRACADABRA.”
Step 1
Count the frequency of each character in the message.
- A: 5
- B: 2
- R: 2
- C: 1
- D: 1
Step 2
Create a priority queue (or a min-heap) based on the frequencies.
Priority Queue:
(1) C: 1
(2) D: 1
(3) R: 2
(4) B: 2
(5) A: 5
Step 3
Next, we build the Huffman tree by repeatedly combining the two least frequent nodes until only one node remains.
CD
Combine (1) and (2) to create a new node with frequency 2: CD.
Priority Queue:
(3) R: 2
(4) B: 2
(5) A: 5
(6) CD: 2
RB
Combine (3) and (4) to create a new node with frequency 4: RB.
Priority Queue:
(5) A: 5
(6) CD: 2
(7) RB: 4
CDRB
Combine (6) and (7), the two least frequent remaining nodes, to create a new node with frequency 6: CDRB.
Priority Queue:
(5) A: 5
(8) CDRB: 6
Huffman tree
Combine (5) and (8) to create the final Huffman tree, whose root has frequency 11:
```
       ACDRB
      /     \
     A      CDRB
            /   \
          CD     RB
         /  \   /  \
        C    D R    B
```
Step 4
Lastly, we assign binary codes to each character based on the tree traversal. We assign 0 for left edges and 1 for right edges.
A: 0
B: 111
R: 110
C: 100
D: 101
The encoded message “ABRACADABRA” would be represented as the 23-bit binary string:
01111100100010101111100
This example represents the basic idea behind Huffman coding, where more frequent characters are represented with shorter codes, leading to a more efficient encoding of the original message.
Code
Here is a technical implementation example of Huffman coding.
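A minimal Python sketch using heapq is shown below. Depending on how ties between equal frequencies are broken, the exact codes may differ from the worked example above, but the total encoded length (23 bits for “ABRACADABRA”) stays the same.

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Build a Huffman code table for the characters in `message`."""
    freq = Counter(message)
    # Heap entries are (frequency, tie_breaker, node); a node is either a
    # single character (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # least frequent node
        f2, _, right = heapq.heappop(heap)   # second least frequent node
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1

    codes = {}
    def assign(node, prefix):
        if isinstance(node, str):            # leaf: record its code
            codes[node] = prefix or "0"
        else:                                # internal node: 0 = left, 1 = right
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
    assign(heap[0][2], "")
    return codes

message = "ABRACADABRA"
codes = huffman_codes(message)
encoded = "".join(codes[ch] for ch in message)
print(codes)
print(encoded, f"({len(encoded)} bits)")
```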
Dynamic programming algorithms
Now that we have covered the basics of greedy algorithms, let’s explore dynamic programming. Dynamic programming algorithms break a problem into overlapping subproblems and solve each subproblem only once, storing the solutions for future reference to optimize overall efficiency.
Coin change
The coin change problem is a classic dynamic programming problem that involves finding the number of ways to make change for a given amount using a given set of coin denominations.
Here is an example in Python:
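A minimal sketch of coin_change_ways could look like this, where dp[i] counts the coin combinations that sum to i.

```python
def coin_change_ways(coins, amount):
    """Count the combinations of coins that sum to `amount`."""
    dp = [0] * (amount + 1)
    dp[0] = 1  # one way to make 0: use no coins
    for coin in coins:                     # process one denomination at a time
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]          # ways that use at least one of this coin
    return dp[amount]

coins, amount = [1, 2, 5], 6
print(f"There are {coin_change_ways(coins, amount)} ways to make change "
      f"for {amount} using the coins {coins}.")
```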
In this example, the function coin_change_ways takes a list of coin denominations (coins) and a target amount (amount). It uses dynamic programming to build a table (dp) where dp[i] represents the number of ways to make change for amount i. The function iterates through each coin denomination, updating the table based on the current coin.
For the provided example, it will output:
There are 5 ways to make change for 6 using the coins [1, 2, 5].
This means five different combinations of coins can create the amount 6 using the given denominations [1, 2, 5].
Minimum change variation
The coin change problem can also refer to the minimum number of coins needed to make change for a given amount (video).
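A minimal dynamic programming sketch for this variant could look like the following, where dp[i] holds the fewest coins needed for amount i.

```python
def min_coins(coins, amount):
    """Minimum number of coins summing to `amount`, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount            # dp[i] = fewest coins summing to i
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i and dp[i - coin] + 1 < dp[i]:
                dp[i] = dp[i - coin] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 2, 5], 11))  # 3  (5 + 5 + 1)
```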
In this example, the function min_coins calculates the minimum number of coins needed to create 11 using coins with denominations [1, 2, 5]. The program would print the result as the output. If it is impossible to change the given amount, the function returns -1.
Longest increasing subsequence
Longest increasing subsequence (LIS) problems involve finding the length of the longest subsequence where elements are in increasing order. This dynamic programming approach ensures that each subproblem is solved only once, leading to a more efficient solution than a naive recursive approach. Provided below is a pseudocode implementation for an LIS problem.
Pseudocode
Code
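A minimal Python sketch of the O(n²) dynamic programming approach, where dp[i] is the length of the longest increasing subsequence ending at index i, could look like this.

```python
def longest_increasing_subsequence(nums):
    """Length of the longest strictly increasing subsequence of `nums`."""
    if not nums:
        return 0
    dp = [1] * len(nums)                 # every element is an LIS of length 1 by itself
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:        # nums[i] can extend the subsequence ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]))  # 4 (e.g., 2, 3, 7, 18)
```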
Longest common subsequence
Longest common subsequence (LCS) problems involve finding the length of the longest subsequence common to two sequences. Provided below is a pseudocode implementation for an LCS problem.
Pseudocode
Code
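A minimal Python sketch of the classic LCS table, where dp[i][j] is the LCS length of the first i characters of one string and the first j characters of the other, could look like this.

```python
def longest_common_subsequence(a, b):
    """Length of the longest subsequence common to strings `a` and `b`."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:               # matching characters extend the LCS
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:                                   # otherwise drop one character
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(longest_common_subsequence("ABCBDAB", "BDCABA"))  # 4 (e.g., "BCBA")
```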
Knapsack and its variants
Selecting items with given weights and values to maximize the total value without exceeding a weight limit.
The knapsack problem is a classic optimization problem with the goal of selecting a subset of items with maximum total value, subject to a constraint on the total weight. Here is an example:
Suppose you have a knapsack with a maximum weight capacity of 10 units, and you have the following items with their respective values and weights:
- Item 1: Value = $60, Weight = 5
- Item 2: Value = $50, Weight = 3
- Item 3: Value = $70, Weight = 4
- Item 4: Value = $30, Weight = 2
The goal is to determine the combination of items that maximizes the total value while keeping the total weight within the knapsack capacity.
One possible solution is to select Item 2, Item 3, and Item 4. The total value in this case would be $50 + $70 + $30 = $150, and the total weight would be 3 + 4 + 2 = 9, which is within the knapsack capacity of 10. For this instance, that selection is also the optimal one.
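As a quick sanity check, here is a minimal 0/1 knapsack sketch in Python; it confirms that $150 is the best achievable value for these items with a capacity of 10.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack: dp[c] = best value achievable with total weight <= c."""
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

values = [60, 50, 70, 30]
weights = [5, 3, 4, 2]
print(knapsack(values, weights, 10))  # 150 (Items 2, 3, and 4)
```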
Multiple knapsack problem
One variant of the classic knapsack problem is the multiple knapsack problem. Instead of having only one knapsack to fill, this variant has multiple knapsacks, each with its own capacity constraint. The goal is to maximize the total value of items placed into the knapsacks without exceeding their individual capacities.
For example, suppose you have three knapsacks with capacities $c_1$, $c_2$, and $c_3$, and a set of items with their respective values and weights:
| Item | Value | Weight |
|---|---|---|
| 1 | 8 | 4 |
| 2 | 10 | 6 |
| 3 | 15 | 8 |
| 4 | 4 | 3 |
The goal is to find the combination of items to include in each knapsack to maximize the total value while respecting the capacity constraints.
Formulation
Decision variables
Let $x_{ij}$ be a binary variable indicating whether item $i$ is placed in knapsack $j$, where item $i$ has value $v_i$ and weight $w_i$, and knapsack $j$ has capacity $c_j$.
Objective function
Maximize the total value:
$$\max \sum_{i} \sum_{j} v_i \, x_{ij}$$
Constraints
Ensure that the total weight in each knapsack does not exceed its capacity:
$$\sum_{i} w_i \, x_{ij} \le c_j \quad \text{for each knapsack } j,$$
and that each item is placed in at most one knapsack:
$$\sum_{j} x_{ij} \le 1 \quad \text{for each item } i.$$
Binary constraints
$$x_{ij} \in \{0, 1\} \quad \text{for all items } i \text{ and knapsacks } j.$$
Solving this variant involves finding the optimal combination of items to include in each knapsack while respecting the capacity constraints of each knapsack.
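As a small illustration, here is a brute-force Python sketch for the four items above; the capacities 10, 7, and 5 are hypothetical placeholders, and realistic instances would be handed to an integer programming solver rather than enumerated.

```python
from itertools import product

values = [8, 10, 15, 4]
weights = [4, 6, 8, 3]
capacities = [10, 7, 5]  # hypothetical knapsack capacities for illustration only

best_value, best_assignment = 0, None
# assignment[i] is the knapsack index chosen for item i, or None to leave it out
for assignment in product([None, *range(len(capacities))], repeat=len(values)):
    loads = [0] * len(capacities)
    total_value = 0
    for item, sack in enumerate(assignment):
        if sack is not None:
            loads[sack] += weights[item]
            total_value += values[item]
    if all(load <= cap for load, cap in zip(loads, capacities)) and total_value > best_value:
        best_value, best_assignment = total_value, assignment

print(best_value, best_assignment)
```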
Matrix chain multiplication
Optimally parenthesizing the multiplication of matrices to minimize the number of scalar multiplications.
The goal of matrix chain multiplication is to find the most efficient way to multiply a sequence of matrices. The efficiency is measured by the total number of scalar multiplications needed to compute the product.
In this context, a scalar multiplication is a single multiplication of two numbers: multiplying a p×q matrix by a q×r matrix requires p · q · r scalar multiplications.
Example
Let’s consider an example with three matrices: A, B, and C. The dimensions of these matrices are as follows:
- Matrix A has dimensions 10×20.
- Matrix B has dimensions 20×30.
- Matrix C has dimensions 30×40.
Here, we want to find the most efficient way to parenthesize these matrices for multiplication, minimizing the total number of scalar multiplications.
One possible parenthesization is as follows:
- Multiply A and B – this takes 10 × 20 × 30 = 6,000 scalar multiplications, and the resultant matrix AB has dimensions 10×30.
- Multiply the result (AB) with C – this takes 10 × 30 × 40 = 12,000 scalar multiplications, and the final result ABC has dimensions 10×40.
So, the sequence of multiplications (A * B) * C costs 18,000 scalar multiplications in total.
If we parenthesize differently, as A * (B * C), the intermediate dimensions differ: computing B * C takes 20 × 30 × 40 = 24,000 scalar multiplications, and multiplying A by that result takes 10 × 20 × 40 = 8,000 more, for a total of 32,000 – considerably more expensive.
The key idea in matrix chain multiplication is to find the optimal parenthesization to minimize the total cost of multiplication.
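A minimal dynamic programming sketch for computing the optimal cost could look like the following; for the A, B, C example above it returns 18,000, corresponding to (A * B) * C.

```python
def matrix_chain_cost(dims):
    """dims[i] and dims[i + 1] are the dimensions of matrix i.
    Returns the minimum number of scalar multiplications for the full product."""
    n = len(dims) - 1                       # number of matrices
    # cost[i][j] = cheapest way to multiply matrices i..j (inclusive)
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # k = split point between the two halves
            )
    return cost[0][n - 1]

# A: 10x20, B: 20x30, C: 30x40
print(matrix_chain_cost([10, 20, 30, 40]))  # 18000
```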
Interval scheduling variants
Adapting interval scheduling to different constraints.
One example of an interval scheduling variant that can be solved using dynamic programming is the weighted interval scheduling problem. Each interval has an associated weight in this variant, and the goal is to maximize the total weight of selected non-overlapping intervals.
Here is a brief description of the problem:
Given a set of intervals, where each interval [start_time, end_time] has an associated weight, find a subset of non-overlapping intervals that maximizes the total weight.
The dynamic programming approach involves sorting the intervals by their finish times and then defining a table to store the maximum weight achievable up to each interval. With the intervals sorted by finish time, the recurrence for the maximum weight $M[i]$ achievable using the first $i$ intervals is $M[i] = \max(M[i-1],\ w_i + M[p(i)])$, where $w_i$ is the weight of interval $i$, $p(i)$ is the last interval that finishes before interval $i$ starts, and $M[0] = 0$.
Here is a Python implementation of the dynamic programming solution:
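A minimal sketch consistent with the recurrence above could look like this (the sample intervals are illustrative).

```python
from bisect import bisect_right

def weighted_interval_scheduling(intervals):
    """Maximum total weight of non-overlapping intervals.
    Each interval is a (start_time, end_time, weight) tuple."""
    intervals = sorted(intervals, key=lambda iv: iv[1])   # sort by finish time
    finish_times = [iv[1] for iv in intervals]
    n = len(intervals)
    dp = [0] * (n + 1)            # dp[i] = best total weight using the first i intervals
    for i in range(1, n + 1):
        start, _, weight = intervals[i - 1]
        # p = number of earlier intervals that finish by the time this one starts
        p = bisect_right(finish_times, start, 0, i - 1)
        dp[i] = max(dp[i - 1], weight + dp[p])
    return dp[n]

intervals = [(1, 3, 5), (2, 5, 6), (4, 7, 8), (6, 9, 4), (8, 10, 3)]
print(weighted_interval_scheduling(intervals))  # 16, from (1, 3, 5), (4, 7, 8), (8, 10, 3)
```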
In this example, a tuple represents each interval (start_time, end_time, weight). The function weighted_interval_scheduling calculates the maximum total weight using dynamic programming.
Now that we have discussed greedy and dynamic programming approaches, let’s explore the trade-offs between the two methods.
Trade-offs
What are the trade-offs involved in choosing between greedy and dynamic programming approaches for different problems?
Choosing between greedy and dynamic programming approaches involves considering the trade-offs inherent in each method. Both techniques help solve optimization problems, but they have distinct characteristics that make them suitable for different scenarios. Below, I outline the advantages, disadvantages, and applicability of each approach, followed by guidance on choosing between them.
Greedy approach
- Advantages:
- Efficiency – Greedy algorithms are often more efficient than dynamic programming because they make locally optimal choices at each step without considering the global picture.
- Simplicity – Greedy algorithms are generally easier to conceptualize and implement. They follow a straightforward strategy of making the best choice at each step.
- Disadvantages
- Optimality – Greedy algorithms may not always yield globally optimal solutions. Making the locally optimal choice at each step might lead to a suboptimal overall solution.
- No backtracking – Greedy algorithms make decisions based on current information without revisiting them. Once a decision is made, it cannot be reconsidered, leading to suboptimal results.
- Applicability
- Greedy algorithms are suitable for problems where a series of locally optimal choices leads to a globally optimal solution.
- Examples include the activity selection problem, Huffman coding, and the coin change problem for canonical coin systems (where the greedy choice happens to be optimal).
Dynamic programming approach
- Advantages
- Optimality – Dynamic programming ensures the solution’s optimality by considering all possible subproblems and avoiding redundant computations.
- Versatility – Dynamic programming applies to a wide range of problems that exhibit overlapping subproblems and optimal substructure.
- Disadvantages
- Computational complexity – Dynamic programming can be computationally expensive in both time and memory when the number of distinct subproblems is large; memoization or tabulation avoids recomputation but requires storing intermediate results.
- Complex implementation – Dynamic programming solutions can be more complex to design and implement due to the management and storage of intermediate solutions.
- Applicability
- Dynamic programming is well-suited for problems with optimal substructure, where the optimal solution to the problem can be constructed from optimal solutions to its subproblems.
- Examples include the knapsack problem, shortest path problems, and sequence alignment.
Choosing between greedy and dynamic programming
- Nature of the problem
- Dynamic programming is a more suitable choice if the problem exhibits optimal substructure and overlapping subproblems.
- A greedy approach may be more appropriate if the problem can be solved by making a series of locally optimal choices.
- Time and space complexity
- Consider the computational resources available. If efficiency is critical and the problem can be solved greedily, a greedy approach might be preferred.
- Dynamic programming may be better if computational resources allow it and the problem demands an optimal solution.
- Trade-off between optimality and efficiency
- Assess the importance of obtaining the globally optimal solution versus quickly achieving a reasonably good solution. The trade-off often lies between optimality and efficiency.
The choice between greedy and dynamic programming approaches depends on the specific characteristics of the problem at hand and the acceptable trade-offs between optimality and efficiency in a given context. Each approach has its strengths and weaknesses, and understanding these trade-offs is crucial for selecting the most appropriate method for a particular optimization problem.
Usage in computational finance
Almost every field and industry uses greedy and dynamic algorithms. For this blog post, however, we will examine how hedge funds use these algorithms.
Hedge funds use greedy and dynamic programming algorithms to optimize their investment strategies and improve overall performance.
- Portfolio optimization
- Greedy algorithms help iteratively select assets for a portfolio based on specific criteria, such as maximizing returns or minimizing risk. These algorithms make locally optimal choices at each step, leading to a solution that may not necessarily be globally optimal but is often close enough for practical purposes.
- Dynamic programming can optimize portfolio rebalancing over time. By breaking the problem into subproblems and solving them recursively, dynamic programming ensures an optimal solution by considering the overall impact of decisions made at each step.
- Risk management
- Greedy algorithms can assist with risk management by iteratively adjusting positions or portfolio weights to minimize specific risk factors. For example, these algorithms can help manage exposure to market volatility or other risk metrics.
- Dynamic programming can assist in optimizing risk management strategies over time, considering changing market conditions and adjusting risk parameters.
- Algorithmic trading
- Greedy algorithms can aid in algorithmic trading to make quick and locally optimal decisions at each step. For example, in high-frequency trading, these algorithms can optimize trade execution by minimizing transaction costs or maximizing order fulfillment.
- Dynamic programming is valuable in algorithmic trading strategies involving sequential decisions over time. By considering the impact of each decision on future actions, dynamic programming can lead to more informed and optimized trading strategies.
- Option pricing and hedging
- Dynamic programming is commonly used in derivatives pricing, especially for American-style options, where the optimal exercise strategy may depend on the evolving market conditions.
Note
It is important to note that applying these algorithms at hedge funds often involves sophisticated mathematical models, large datasets, and significant computational resources. Hedge funds employ quantitative analysts (quants) and data scientists who specialize in developing and implementing these algorithms to gain a competitive edge in the financial markets.
Conclusion
In this post, we learned that greedy algorithms make locally optimal choices at each stage, while dynamic programming algorithms break problems into overlapping subproblems and optimize by storing solutions to avoid redundant computations.
We also learned that while greedy algorithms can be applied for real-time decision-making, dynamic programming algorithms play a role in optimizing investment portfolios and risk management strategies within hedge funds.
If you enjoyed this post on greedy and dynamic algorithms, consider reading Using Master Theorem Components and Formulations.

