The White Rabbit

If you don't get the requirements right, it doesn't matter how well you do anything else. (Karl Wiegers)

Published on Tuesday, 4 April 2023

Tags: problem-solving, notes

Problem Solving: A Checklist For Programmers

Problem solving is a vital skill for programmers, as they often have to create solutions for complex and novel challenges. It involves breaking a problem down into smaller, manageable parts, applying logic and reasoning, testing and debugging, and evaluating the outcomes. The following checklist is based on my personal experience and collects useful tips for solving competitive programming challenges.


Table of Contents

  • Understand the problem and its conditions
  • Brute-force it
  • Search for well-known algorithms, first
  • Optimize (but don't pre-optimize!)
  • Completeness vs. Speed
  • Consider isomorphisms, symmetries or simple transformations
  • Don't repeat yourself
  • Change perspective: think simpler
  • Check your environment

    Are you a programmer who is stuck on a bug or a feature? Do you feel like you have no idea how to solve your problem? Don't worry, you are not alone. Many programmers face challenges and frustrations every day. That's why I have created this handy problem-solving checklist for programmers, hopefully useful in competitive programming contests.

    Competitive programming challenges usually consist of 3 parts: the problem statement, sample inputs and outputs (visible test cases) and hidden test cases. A problem of this kind is considered solved once the provided code passes all test cases. A single test case is passed if your function's output exactly matches the expected output for the given input parameter(s).
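
    As a minimal sketch of this pass criterion, the snippet below uses a made-up solve() function and hand-written (input, expected output) pairs; real contests feed the hidden inputs to your code automatically.

        # Hypothetical solution and visible test cases, just to illustrate
        # the "output must match exactly" rule described above.
        def solve(x):
            return x * 2  # placeholder for the contestant's actual logic

        test_cases = [(1, 2), (5, 10), (7, 14)]

        for given_input, expected in test_cases:
            assert solve(given_input) == expected, f"failed on input {given_input!r}"
        print("all visible test cases passed")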

    Understand the problem and its conditions

    It's always a good idea to take your time to fully understand the problem statement before touching any IDE. I know it may sound trivial, but trust me: it's not a good feeling to jump straight to your keyboard and, after two hours of coding, realize that you overlooked important details and that 80% of the code you wrote wasn't needed...

    Afterwards, it's important to figure out whether the stated assumptions (e.g. a reduced input space, upper and lower bounds, special conditions...) can simplify the solution. Even when the most general version of the problem is too hard to solve, the mentioned limitations can make it smaller and tractable.

    Brute-force it

    Usually, the brute-force approach is not a valid solution for competitive programming challenges, but it can still be implemented for two main purposes:

    • First, it can give you a starting point for refactoring your code.
    • Second, it can help you build your own test cases. Even if this algorithm is slow, it's a reliable way to obtain the correct output for many user-defined inputs (see the sketch below).
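
    As a minimal sketch of the second point, the snippet below uses a slow but obviously-correct brute force as a reference oracle to check a faster candidate on random inputs. Both functions are hypothetical stand-ins, using the maximum-subarray problem as an example.

        import random

        def brute_force(xs):
            # Obviously correct but O(n^2): try every non-empty subarray.
            return max(sum(xs[i:j]) for i in range(len(xs))
                       for j in range(i + 1, len(xs) + 1))

        def optimized(xs):
            # Faster candidate (Kadane's algorithm) that we want to validate.
            best = cur = xs[0]
            for x in xs[1:]:
                cur = max(x, cur + x)
                best = max(best, cur)
            return best

        for _ in range(1000):
            xs = [random.randint(-10, 10) for _ in range(random.randint(1, 20))]
            assert brute_force(xs) == optimized(xs), xs
        print("optimized solution agrees with the brute force on all random inputs")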

    Search for well-known algorithms, first

    If part of the problem can be solved using a well-known algorithm, use (or consider using) it instead of re-inventing the wheel.

    Examples:

    • for maximum flow problems, you can use the push-relabel algorithm, or any other whose time complexity is convenient for your case.
    • for shortest path problems with negative edge weights, use the Bellman-Ford algorithm. If all edge weights are non-negative, you may prefer Dijkstra's algorithm or another that fits better.
    • for probability computation from state transition observations, use Absorbing Markov Chains.
    • to count all possible walks from node 1 (of graph G1) to node 2 (of graph G2), multiply the adjacency matrix of G1 with the adjacency matrix of G2 and read the entry (1,2), i.e. the element in row 1 and column 2. Note: the two graphs can have different edges, but must share the same vertex set! (See the sketch below.)
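
    A minimal sketch of the last point, using numpy and two small made-up graphs over the same vertex set (0-indexed here, while the text uses 1-based labels):

        import numpy as np

        # Adjacency matrices of G1 and G2 over the same three vertices.
        A1 = np.array([[0, 1, 1],
                       [0, 0, 1],
                       [1, 0, 0]])
        A2 = np.array([[0, 0, 1],
                       [1, 0, 0],
                       [0, 1, 0]])

        # (A1 @ A2)[i, j] counts two-step walks i -> k -> j whose first
        # edge comes from G1 and whose second edge comes from G2.
        walks = A1 @ A2
        print(walks[0, 1])  # walks from node 0 (via G1) to node 1 (via G2)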

    Optimize (but don't pre-optimize!)

    Don't focus on optimization right away if you haven't yet figured out how to solve the problem. Focus on the big picture first.

    But once you've got some almost-working code, maybe it's just a matter of some optimizations here and there to pass all test cases.

    Some common methods are:

    • Early stopping conditions (branch-and-bound techniques). Try to reduce the algorithm's search space by detecting conditions that let you exclude whole regions. For example, if we are interested in finding all triples (A,B,C) where B%A=0 and C%B=0 and we loop over all combinations with nested for-loops, we can jump to the next iteration as soon as B%A!=0, with no need to test any pair (B,C) for that same B (see the sketch after this list).
    • Think about special cases and handle them differently. Design different algorithms for different scenarios or particular input configurations.
    • Use heuristic techniques to find better initial values to start from.
    • Use more suitable data structures.
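
    A minimal sketch of the early-stopping idea, enumerating the triples described above over a made-up range 1..N:

        def count_divisible_triples(N):
            count = 0
            for A in range(1, N + 1):
                for B in range(A + 1, N + 1):
                    if B % A != 0:
                        continue  # prune: no C needs to be tested for this B
                    for C in range(B + 1, N + 1):
                        if C % B == 0:
                            count += 1
            return count

        print(count_divisible_triples(100))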

    Example

    In the field of machine learning, heuristic techniques such as k-means clustering are often used to find better initial values for optimization problems such as training neural networks. K-means clustering is an unsupervised learning algorithm that partitions a set of data points into k clusters based on their similarity. The centroids of these clusters can then be used as initial values for optimization algorithms such as gradient descent.
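
    A minimal sketch of this idea using scikit-learn (assumed to be available); the resulting centroids could then be used as starting points for a downstream optimization routine:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))  # some made-up 2-D data points

        kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        initial_values = kmeans.cluster_centers_  # candidate initial values
        print(initial_values)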

    Completeness vs. Speed

    Instead of using complete algorithms, it can be better to use faster (but incomplete) ones with different initial values.

    For example, Breadth-First Search (BFS) - a graph traversal algorithm used to search a tree or graph data structure for a node that meets a set of criteria - is complete, meaning it can be used to find all solutions satisfying our requirements. In contrast, Depth-First Search (DFS) can be much faster at finding some compatible solution, but it's not guaranteed to find all possible solutions: the outcome depends on the initial setting (i.e. the order in which traversal directions are tried). If BFS is too slow, simply running DFS multiple times with different starting configurations could be enough to cover all test cases (see the sketch below)!
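
    A minimal sketch of that idea: an iterative DFS whose result depends on the (configurable) order in which neighbours are tried. The graph representation and node names are made up for illustration.

        def dfs(graph, start, goal, neighbour_order=None):
            # graph: dict mapping node -> list of neighbours.
            # Returns one path from start to goal, or None.
            order = neighbour_order or (lambda nodes: nodes)
            stack = [(start, [start])]
            visited = set()
            while stack:
                node, path = stack.pop()
                if node == goal:
                    return path
                if node in visited:
                    continue
                visited.add(node)
                for nxt in order(graph.get(node, [])):
                    if nxt not in visited:
                        stack.append((nxt, path + [nxt]))
            return None

        graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
        print(dfs(graph, 1, 4))                                       # one traversal order
        print(dfs(graph, 1, 4, neighbour_order=lambda ns: ns[::-1]))  # reversed order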

    Consider isomorphisms, symmetries or simple transformations

    If there are n equivalent objects that can be easily obtained by applying transformations to any one of them, you could just search for 1 of those and apply simple transformations instead of searching for all equivalent ones.

    For example, if you are filling an adjacency matrix which is symmetric, instead of looping through all possible node pairs, it can be worthwhile to fill only the element (i,j) from the upper triangular half and assign its value to the symmetric element (j,i). This mirrors the matrix along its main diagonal and roughly halves the number of iterations (see the sketch below).
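
    A minimal sketch of this idea; the connected(i, j) predicate is a hypothetical stand-in for whatever determines whether an edge exists.

        import numpy as np

        def build_symmetric_adjacency(n, connected):
            adj = np.zeros((n, n), dtype=int)
            for i in range(n):
                for j in range(i + 1, n):   # upper triangle only
                    if connected(i, j):
                        adj[i, j] = 1
                        adj[j, i] = 1       # mirror across the main diagonal
            return adj

        print(build_symmetric_adjacency(4, lambda i, j: (i + j) % 2 == 1))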

    Equivalent statements. Thinking about different formulations of the same problem can help you handle it with known procedures. For example, in a counting context, putting 3 objects in 10 positions is equivalent to putting 10 objects in 3 positions.

    Don't repeat yourself

    Dynamic Programming. The main idea behind Dynamic Programming is to break the bigger problem into a sequence of smaller problems that depend on each other. This process involves thinking in terms of "inductive reasoning" and often leads to a recursive-looking solution. Still, recursive functions should be avoided in many cases (because of recursion limits, call overhead or limited stack size). Therefore, you can turn the recursive algorithm into an iterative one as a separate step, later (see the sketch below).
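
    A minimal sketch of that conversion, using the Fibonacci recurrence as a stand-in for a DP recurrence: the same values, written first recursively and then bottom-up (iteratively).

        def fib_recursive(n):
            return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

        def fib_iterative(n):
            a, b = 0, 1
            for _ in range(n):
                a, b = b, a + b
            return a  # same result, no recursion limit and no call overhead

        print(fib_recursive(10), fib_iterative(10))  # 55 55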

    Divide and Conquer. A similar concept is Divide-and-Conquer, where the main problem is also divided into smaller sub-problems, but these are independent of each other and can be solved in parallel.

    Memoization (caching). Simply put, build and use look-up tables to get results faster instead of re-computing them every time. You can store a function's result in such a "dictionary" and look it up the next time instead of executing the same computations again. For example, if the same kind of relationship is used in a deeper nested loop, it's better to build the basic information in parent loops, so that it can be used at deeper levels without re-computation. Also remember that inner loops add time complexity to the algorithm, so it's a good idea to skip them or delegate parts of the computation to outer levels whenever possible (see the sketch below).
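
    A minimal sketch of memoization with functools.lru_cache; the expensive_relation() function and its arguments are made up purely to show that repeated calls are answered from the cache.

        from functools import lru_cache

        calls = 0

        @lru_cache(maxsize=None)
        def expensive_relation(a, b):
            global calls
            calls += 1
            return sum(i * i for i in range((a * b) % 1000))  # stand-in for real work

        for i in range(1, 50):
            for j in range(1, 50):
                expensive_relation(i % 7, j % 7)  # only 49 distinct argument pairs

        print(calls)  # 49, not 49 * 49: every other call hit the cache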

    Visited nodes. If it can help, skip visited states (or nodes) or remove them from future iterations at earlier stages.

    Change perspective: think simpler

    Fit the data to the algorithm or the algorithm to the data? Change the rule to make it work for all data, or change your data to use a simpler rule?

    Change input. Would it help to directly map the user input to some other (normalized, standardized) input yielding the same solution but much faster?

    Is there a way to transform the current problem into an equivalent problem which makes use of known procedures?

    Example #1

    In how many ways can the integer A be represented as a sum of B positive integers? To understand the mechanism, we can start with some specific values, e.g. B=5, A=10.

    A possible solution is {1, 1, 1, 1, 6}. Let's now standardize this representation by expanding every number into a sum of 1s, so it always has a fixed length, i.e.

    {1, 1, 1, 1, 1+1+1+1+1+1}

    Instead of looking at the numbers (which are always 1), we can focus on what kind of sign is placed between them. In this way, we obtain 9 "dashes" which can be filled by a plus (+) or comma (,):

    {1_1_1_1_1_1_1_1_1_1}

    Out of these, 4 must be commas (since we are required to use 5 numbers). The problem is much simpler in this form: in how many ways can we place 4 commas? So we simply count the number of ways 4 (=B-1) objects of the same type fit into 9 (=A-1) places, which corresponds to 9!/(4!5!) = 126.

    We can now generalize with A-1 places and B-1 objects, so that the answer is: combinations of A-1 taken B-1 at a time or, equivalently, permutations of two kinds of objects, one appearing B-1 times and the other (A-1)-(B-1) = A-B times.
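
    A quick sanity check of this formula, comparing C(A-1, B-1) against a brute-force enumeration of all ordered ways to write A as a sum of B positive integers:

        from itertools import product
        from math import comb

        def count_by_bruteforce(A, B):
            return sum(1 for parts in product(range(1, A + 1), repeat=B)
                       if sum(parts) == A)

        A, B = 10, 5
        print(comb(A - 1, B - 1))         # 126
        print(count_by_bruteforce(A, B))  # 126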

    Example #2

    Let's say we already have an algorithm that efficiently solves a maximum flow problem on a network with only 1 source and 1 sink, and we're asked to solve a more general scenario with multiple sources and multiple sinks.

    Instead of developing ad-hoc algorithms from scratch, we could simply transform the multi-source, multi-sink problem into a single-source, single-sink problem by adding a so-called consolidated source (or supersource), i.e. an additional virtual source connecting to each source, and a consolidated sink (or supersink) collecting the flow from each sink, with every virtual edge having infinite capacity (see the sketch below).
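
    A minimal sketch of this transformation using networkx (assumed to be available); the node names and capacities are made up. In networkx, an edge without a capacity attribute is treated as having infinite capacity, which is exactly what the virtual edges need.

        import networkx as nx

        G = nx.DiGraph()
        G.add_edge("s1", "a", capacity=4)
        G.add_edge("s2", "a", capacity=3)
        G.add_edge("a", "t1", capacity=5)
        G.add_edge("a", "t2", capacity=6)

        # Virtual edges (no capacity attribute = infinite capacity).
        for s in ("s1", "s2"):
            G.add_edge("super_source", s)
        for t in ("t1", "t2"):
            G.add_edge(t, "super_sink")

        flow_value, _ = nx.maximum_flow(G, "super_source", "super_sink")
        print(flow_value)  # 7: limited by the 4 + 3 units that can enter node "a"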

    Check your environment

    Make sure that your test cases run in the same environment as the online judge's test cases.

    For example, if you're using Python, run your tests with the same Python version. Why? There can be small but very annoying differences that make your code wrong even without any logical mistakes. E.g. if your operations involve divisions, Python 2.7 handles them differently than Python 3: 3 divided by 2 equals 1 in Python 2.7 - yes! dividing two integers yields another integer - but 1.5 in Python >= 3 (see the snippet below).
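
    The division pitfall, made explicit (run under Python 3):

        print(3 / 2)   # 1.5 -> true division in Python 3 (this printed 1 in Python 2.7)
        print(3 // 2)  # 1   -> floor division, the Python 2-style integer result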