Adam I. Gerard
ISU
NIU
CS MS TBD

A General Method to Reduce Quadratic Time Complexity to Constant Time (Take Two)

A brief blog post summarizing some previous thoughts about polynomial time complexity (and reduction scenarios) from back in 2018 and 2021.

Time Complexity as a First-Order Derivative

I think as engineers/computer scientists we tend to focus on the following type of reasoning:

  1. Supply a function f with an input n, run that once.
  2. Then, increase or decrease the input size of n.
  3. Compute the resultant Big O Notation time complexity.

What if we also considered optimization scenarios w.r.t. subsequent runs of the same input size?

Isn't that just analogous to velocity and acceleration?

Isn't that mathematical property of great value in and of itself? Consider caching (ahead-of-time computation is a primary feature of what caching is and what makes it useful), premade machine learning models, transpilation, and write-time vs. run-time costs (it takes far longer to write code the first time; thereafter, it runs in milliseconds).

I wonder whether this is not just another way to optimize (another dimension engineers might consider in their optimization efforts) but also a fairly ubiquitous and valuable phenomenon found throughout computer science.

In other words, are we sometimes ignoring a potentially valuable technique and mathematical property in favor of convenience? Many sources of value in the software industry appear to exhibit this kind of polynomial time complexity reduction. (Most interviews, for instance, fixate on the narrower conception of time complexity described above. That's not necessarily a bad thing given the goals of hiring, interviews, and optimization testing, but it might occasionally incline us, as an industry, to forget other approaches amid the incessant and helpful background drone of test-preparation marketing.)

Previous Example

Here we pre-generate (AOT) the necessary data into static arrays, so each subsequent run-through of the algorithm is constant time.

(Example simplified/truncated to length 4):

// Precompute, for each index i, the list of all other indices 0..len-1.
function generate(len) {
  var result = [];
  for (var i = 0; i < len; i++) {
    var temp = [];
    for (var j = 0; j < len; j++) {
      if (i != j) temp.push(j);
    }
    result.push(temp);
  }
  return result;
}

console.log(generate(4));

Then we can trivially use the above output as answerArray to calculate, for each position, the product of all the other elements in O(1) per entry:

var answerArray = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]];

function one(inputArr) {
    var a =
      inputArr[answerArray[0][0]] *
      inputArr[answerArray[0][1]] *
      inputArr[answerArray[0][2]];
    var b =
      inputArr[answerArray[1][0]] *
      inputArr[answerArray[1][1]] *
      inputArr[answerArray[1][2]];
    var c =
      inputArr[answerArray[2][0]] *
      inputArr[answerArray[2][1]] *
      inputArr[answerArray[2][2]];
    var d =
      inputArr[answerArray[3][0]] *
      inputArr[answerArray[3][1]] *
      inputArr[answerArray[3][2]];

    return [a, b, c, d];
}

console.log(one([0, 1, 2, 3]));
console.log(one([1, 1, 2, 5]));
// Output
[6, 0, 0, 0]
[10, 10, 5, 2]
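The unrolled function above can be written as a loop over the cached index table, so it works for any precomputed answerArray of any length (a minimal sketch; productExceptSelf is my naming, not from the original post):

```javascript
// Precomputed index table from generate(4): for each position i,
// the indices of every other element.
var answerArray = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]];

// For each position, multiply the input values at the cached indices.
// With the table already built, each run is a fixed number of lookups.
function productExceptSelf(inputArr) {
  return answerArray.map(function (indices) {
    return indices.reduce(function (acc, idx) {
      return acc * inputArr[idx];
    }, 1);
  });
}

console.log(productExceptSelf([0, 1, 2, 3])); // [6, 0, 0, 0]
console.log(productExceptSelf([1, 1, 2, 5])); // [10, 10, 5, 2]
```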

It's a technique that can be used to streamline brute-force approaches and which might have some practical and applied import for intensive niches like high-performance computing.

I mentioned in my previous blog post that, on a cursory review, there doesn't appear to be a tremendous amount of research on this simple technique (although others have undertaken some similar approaches):

  1. https://www2.cs.duke.edu/courses/spring18/compsci330/Notes/CC.pdf - a good introductory description
  2. https://www.cs.princeton.edu/~wayne/cs423/lectures/reductions-poly-4up.pdf - another good introductory description
  3. https://arxiv.org/search/?query=polynomial+time+complexity+reduction&searchtype=all&abstracts=show&order=-announced_date_first&size=50 - All arxiv Computer Science papers on the subject

I think there are a couple of interesting parallels and some other notions that I hadn't quite grasped before that are worth describing in more detail here.

Memoization and Dynamic Programming

We will often store the state of an algorithm in some auxiliary buffer or data structure to improve performance and reduce recomputation or recursion.

This is already a widely accepted technique (often encouraged to achieve optimal algorithm solutions). Why stop at the call stack of a specific function, thread, closure, or scope? Why not apply such techniques across runs themselves?
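As a minimal sketch of pushing memoization past a single call stack: keep the cache at module scope so it persists across invocations, not just within one recursive descent (a textbook Fibonacci example, not code from the original post):

```javascript
// Module-level cache: survives across calls, not just within one recursion.
const fibCache = new Map();

function fib(n) {
  if (n <= 1) return n;
  if (fibCache.has(n)) return fibCache.get(n); // later runs: O(1) lookup
  const value = fib(n - 1) + fib(n - 2);       // first run: pays full cost
  fibCache.set(n, value);
  return value;
}

console.log(fib(40)); // 102334155: first call populates the cache
console.log(fib(40)); // 102334155: subsequent calls are single lookups
```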

Integral Calculus Parallels

It's easy enough to represent an algorithm as a function or as a set of coordinates. This is regularly done when reasoning about Big O notation (despite the fact, as many in philosophy and math have pointed out, that such use of diagrams appears to lack much justificatory value at present).

Consider:

  1. https://aaai.org/Papers/Symposia/Fall/1998/FS-98-04/FS98-04-015.pdf - breakthrough diagrammatic proving system able to prove 15 picture-based theorems
  2. https://academic.oup.com/philmat/article-abstract/19/3/281/1387231?redirectedFrom=fulltext
  3. https://www.ucl.ac.uk/philosophy/people/emeritus-professors/marcus-giaquinto
  4. https://plato.stanford.edu/entries/epistemology-visual-thinking/#VisThiPro
  5. http://web.mit.edu/16.070/www/lecture/big_o.pdf - formal definition without recourse to diagrams
  6. https://finematics.com/algorithmic-efficiency-and-big-o-notation/ - graphing Big O notation
  7. https://hott.github.io/book/nightly/hott-ebook-1287-g1ac9408.pdf - especially page 6 regarding “abuse of notation”

Despite the intersection of epistemic worries above and common heuristics (accurate though they might be) found throughout industry, Big O notation is less obvious as a potential offender in the epistemology of mathematics (since it can be straightforwardly represented as a standard Cartesian charting function).

So, I don't think it is too egregious to cross-apply some notions from areas of basic and fundamental mathematics to asymptotic analysis. Are there simple applications for the integral calculus that might apply here?

Perhaps so, in two senses:

  1. Across subroutines and stages/steps
  2. After the first run (caching)

So, time complexity goes to or approaches a limit after an initial measurement.
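As a toy illustration of that limit (the numbers here are illustrative, not measurements): if the first run costs n² steps and every later run costs 1 step, the average cost over k runs tends to 1 as k grows:

```javascript
// Average cost per run when run 1 costs n*n steps and runs 2..k cost 1 step.
function amortizedCost(n, k) {
  return (n * n + (k - 1)) / k;
}

console.log(amortizedCost(100, 1));     // 10000: the cold, quadratic run
console.log(amortizedCost(100, 10000)); // ~2: already near the O(1) floor
console.log(amortizedCost(100, 1e6));   // ~1.01: approaching the limit
```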

Very little of the literature in the industry (not academia) appears particularly interested in caching to supplement or augment algorithm design - it's almost solely referenced w.r.t. data persistence and retrieval.

Consider: http://andy.novocin.com/pro/L1-hal.pdf whereby truncation, subroutines, and multiple reduction functions are used to vastly reduce complex problems involving Euclidean lattices.

Power Set Example

A dynamic solution that uses arrays:

/**
 * See: https://stackoverflow.com/questions/24365954/how-to-generate-a-power-set-of-a-given-set
 *
 * The strategy is clean and elegant, so I made a few tweaks and implemented it in JS from the above.
 * 
 * Add elements one at a time to a result array, also iterate over the result array adding elements to the existing ones.
 *
 * This accomplishes what was done here: https://medium.com/leetcode-patterns/leetcode-pattern-3-backtracking-5d9e5a03dc26 
 * and here: https://jimmy-shen.medium.com/78-subsets-fa4f0b047664 without backtracking (those are excellent articles 
 * and implementations given the requirements!).
 */

const powerSet = arrSet => {
  let arrSets = []
  for (let i = 0; i < arrSet.length; i++) {
    let item = arrSet[i], origSets = [...arrSets]

    for (let j = 0; j < origSets.length; j++) {
      let nSet = origSets[j].concat(item)
      arrSets.push(nSet)
    }
    arrSets.push([item])
  }
  arrSets.push([])
  return arrSets.sort() // note: default sort compares stringified arrays
}

window.onload = () => {
  console.log(powerSet([4, 2, 3, 1]))
  console.log(powerSet([4, 2, 3]))
  console.log(powerSet([5, 1, 4, 2, 3]))
}
// Outputs
[
    [], 
    [1], 
    [2], 
    [2, 1], 
    [2, 3], 
    [2, 3, 1], 
    [3], 
    [3, 1], 
    [4], 
    [4, 1], 
    [4, 2],
    [4, 2, 1], 
    [4, 2, 3], 
    [4, 2, 3, 1], 
    [4, 3],
    [4, 3, 1]
]

[
    [], 
    [2], 
    [2, 3],
    [3],
    [4],
    [4, 2], 
    [4, 2, 3], 
    [4, 3]
]

[
    [], 
    [1], 
    [1, 2], 
    [1, 2, 3],
    [1, 3],
    [1, 4], 
    [1, 4, 2], 
    [1, 4, 2, 3],
    [1, 4, 3],
    [2],
    [2, 3],
    [3], 
    [4], 
    [4, 2], 
    [4, 2, 3], 
    [4, 3], 
    [5],
    [5, 1],
    [5, 1, 2], 
    [5, 1, 2, 3], 
    [5, 1, 3],
    [5, 1, 4], 
    [5, 1, 4, 2], 
    [5, 1, 4, 2, 3], 
    [5, 1, 4, 3], 
    [5, 2], 
    [5, 2, 3], 
    [5, 3], 
    [5, 4], 
    [5, 4, 2],
    [5, 4, 2, 3], 
    [5, 4, 3]
]

Here's a Greedy approach to Power Set:

/**
 * One can approach this from a few different angles:
 * 1. Brute force.
 * 2. Binary counter.
 * 3. Use native abstractions.
 * ---
 * I'd prefer to use what I understand to be a novel approach by generating 
 * constraints AOT (ahead-of-time) so that power-set 
 * generation is linear for every subsequent run.
 * ---
 * We can then use these values to populate others by index (e.g. - 1 means in the subset, 0 not).
 * The upside is that this is superfast (most powerset implementations 
 * involve deep recursion and 2-3 nested loops).
 * ----
 * The downside of this approach is that it only works if you know the desired 
 * set size beforehand. It's Greedy.
 */

const generateConstraints = () => {
  let result = [], current = []

  // Flattened nested loop...
  for (let i = 0, j = 0, k = 0, l = 0; i < 2 && j < 2 && k < 2 && l < 2; ) {
    current.push(i)
    current.push(j)
    current.push(k)
    current.push(l)
    result.push(current)
    current = []

    l++;
    if (l == 2) {
      k++
      l = 0
    }
    if (k == 2) {
      j++
      k = 0
    }
    if (j == 2) {
      i++
      j = 0
    }
  }

  return result
}

window.onload = () => {
  const aot = generateConstraints()
  console.log(aot)
}
// Output
[
    [0, 0, 0, 0], // [],
    [0, 0, 0, 1], // [4], 
    [0, 0, 1, 0], // [3], 
    [0, 0, 1, 1], // [4, 3],
    [0, 1, 0, 0], // [2], 
    [0, 1, 0, 1], // [4, 2],
    [0, 1, 1, 0], // [2, 3], 
    [0, 1, 1, 1], // [4, 2, 3], 
    [1, 0, 0, 0], // [1], 
    [1, 0, 0, 1], // [4, 1], 
    [1, 0, 1, 0], // [3, 1], 
    [1, 0, 1, 1], // [4, 3, 1]
    [1, 1, 0, 0], // [2, 1], 
    [1, 1, 0, 1], // [4, 2, 1], 
    [1, 1, 1, 0], // [2, 3, 1], 
    [1, 1, 1, 1] //  [4, 2, 3, 1], 
]

Generalizing to Arbitrary Input Lengths

I previously described a scenario where a problem can be reduced to constant time following an initial caching step that runs in quadratic time (e.g. a pre-computation or ahead-of-time method, if you prefer - a class of answer that's now an accepted standard even on LeetCode).

The initial approach is limited since it applies exclusively to a greedy scenario with a fixed output space and known array length.

Can this be generalized:

  1. To all array lengths?
  2. To varying output lengths?

The answer is yes to the first question. Here's how:

  1. Abstract variable declarations to an auxiliary array or buffer such that each index of the auxiliary array stands for a variable declaration.
  2. Incrementation is performed at each index and spillover occurs once the cap is reached (the current index is set to 0, the next is incremented instead, etc.).

So, in the generateConstraints snippet above, the following line can be replaced:

for (let i = 0, j = 0, k = 0, l = 0; i < 2 && j < 2 && k < 2 && l < 2; i) {

with a suitable array to capture these counts/counters:

var counters = [0,0,0,0]
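Putting those two steps together: a minimal sketch of a generalized generator, using a counters array that "spills over" like an odometer (the function signature and names are mine):

```javascript
// Generate all binary membership vectors of the given length by
// incrementing a counter array with carry ("spillover"), rather than
// hard-coding one loop variable per position.
function generateConstraints(len) {
  const counters = new Array(len).fill(0); // one "variable" per index
  const result = [];

  while (true) {
    result.push([...counters]);

    // Increment the rightmost counter; carry left when it hits the cap (2).
    let idx = len - 1;
    while (idx >= 0) {
      counters[idx]++;
      if (counters[idx] < 2) break;
      counters[idx] = 0;
      idx--;
    }
    if (idx < 0) break; // carried off the end: all 2^len vectors emitted
  }
  return result;
}

console.log(generateConstraints(4).length); // 16: same table as above
```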

Formal Properties of the Class

Conjecture: Any algorithm characterized by the following features will give rise to a time complexity reduction scenario like those outlined above:

  1. A finite set of input elements.
  2. The existence of some function f uniformly applied to those finite elements.
  3. The number of input elements is fixed in size or predictably constrained.
  4. Function f applies operations in formally defined loops.

This should come across as fairly obvious. (Scenarios of autoreducible reduction are strict subsets of this class.)

Auto Compilation

Consider a language like Java, which provides both JIT compilation (the HotSpot default) and AOT compilation (e.g. GraalVM native images), each with well-known trade-offs.

Could the above technique be used to:

  1. Enhance certain kinds of mundane operations in the ways tentatively sketched above?
  2. Be applied automatically during the normal compilation process? Perhaps the return value of something like generateConstraints() (above) could be saved or persisted into memory (stack, heap, or some supplementary store) to convert certain specific functions (say, via annotation, or just generally - preferably configurable) in the manner specified above.

Or, say, in a manner similar to Azure Functions or AWS Lambda hot contexts? (You spin up a cold context, and then subsequent executions of that function are faster.) Not only analogically (at the level of a metaphorical hot context) but also by optimizing looped operations within the function itself.
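One way to picture that at the library level: a higher-order function that builds an expensive table on the first ("cold") call and reuses it on every later ("warm") call. This is a sketch of the idea only; withWarmTable is a hypothetical name, not a real Azure/AWS or compiler API:

```javascript
// Wrap an expensive AOT step so it runs once per process ("cold start")
// and every later call reuses the cached result ("warm" invocations).
function withWarmTable(buildTable, run) {
  let table = null; // persists across invocations, like a hot context
  return function (...args) {
    if (table === null) table = buildTable(); // pay the cost once
    return run(table, ...args);
  };
}

// Example: cache the index table from earlier, then do cheap lookups.
const lookup = withWarmTable(
  () => [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]],
  (table, i) => table[i]
);

console.log(lookup(0)); // cold call builds the table: [1, 2, 3]
console.log(lookup(2)); // warm call, no rebuild: [0, 1, 3]
```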

Summary

The main insight here is that time complexity is often thought of strictly in terms of worst and best cases (or often just an average), with no consideration of change or stable (permanent) fluctuations - I'm thinking of velocity vs. acceleration in physics, the second-order derivative. (After a permanent change, it makes little sense to include all previous runs in the average!)

This strikes the philosopher in me as fallacious - the fallacy of false dichotomies, or binary (pun intended), black/white thinking.

Time complexity can vary quite a bit, especially if we intend on breaking that model by caching or precomputing aggressively. While not always beneficial (and sometimes cumbersome if poorly implemented), there are many simple scenarios where such techniques might be quite advantageous.

Additionally, it seems that there's widespread acceptance of the view that polynomial time complexity is an extremely variegated concept and somewhat imprecise.

For example, consider the Wikipedia article https://en.wikipedia.org/wiki/Time_complexity (just a few related keywords):

  1. Sub-quadratic time
  2. Strongly and weakly polynomial time
  3. Superpolynomial time
  4. Quasi-polynomial time
  5. Quasilinear time
  6. Polylogarithmic time

This should come as no surprise, since Big O notation itself has two fairly common interpretations:

  1. Worst-case runtime specifically
  2. An asymptotic upper bound that can describe best-, average-, or worst-case runtime

Worst case is the most widespread usage, but it does vary. (Enough that it has caused some confusion; the Wikipedia article mentions this without citation, and some of the MOOC content from Stanford notes this imprecision as well.)

I hope the technique seems interesting and that I've raised some legitimate concerns about the dogmatic use of a single, narrow definition of polynomial time complexity / Big O.
