
Fitting a low-degree polynomial to a function on a finite 1d grid - a combinatorial problem?


Problem

I need to fit a low-degree polynomial $p$ (with $\deg(p) \leq k$) to a function $f$ evaluated on the grid $\{0, 1, \dots, n-1\}$, so as to minimize the $L_\infty$ norm, i.e. minimize $\max_{0 \leq i < n} |f(i) - p(i)|$.

  • Can it be solved in PTIME (assuming operations on real values are O(1))?

  • If so, what are the numerical stability features of such a solution (since my n is large-ish)?

  • If not, does it admit an efficient (PTIME) approximation scheme?

  • If not, how do people deal with it in practice?

Solution

The problem can be solved in polynomial time using linear programming.

Write $p(x) = c_k x^k + \dots + c_1 x + c_0$, and think of $c_0,\dots,c_k$ as unknowns. Also introduce an unknown $d$, which will represent the $L_\infty$ norm. We will minimize $d$, subject to the following inequalities, one pair for each $i \in \{0, \dots, n-1\}$:

$$-d \le f(i) - (c_k i^k + \dots + c_0) \le d.$$

Notice that, since $i$ and $f(i)$ are constants, this is a pair of linear inequalities in the unknowns, and we obtain one such pair per value of $i$. The minimum achievable value of $d$ is attained by a polynomial $p$ that minimizes the $L_\infty$ norm, so you can solve this using any algorithm for linear programming. Since linear programming is in P, your problem is in P, too.

I expect that this algorithm will be efficient in practice, as linear programming over $k+2$ unknowns and $n$ inequalities should be pretty efficient for reasonable values of $k,n$.
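The LP formulation above can be sketched as follows. This is an illustrative implementation, not from the original answer; the helper name `minimax_polyfit` and the choice of SciPy's `linprog` are my assumptions. The unknowns are $(c_0, \dots, c_k, d)$, the objective is $d$, and each grid point contributes the pair of inequalities from the formula above:

```python
import numpy as np
from scipy.optimize import linprog

def minimax_polyfit(f_vals, k):
    """Fit a degree-<=k polynomial to samples f_vals on the grid {0,...,n-1},
    minimizing the L_infinity error, via linear programming."""
    n = len(f_vals)
    f = np.asarray(f_vals, dtype=float)
    # Vandermonde matrix: V[i, j] = i**j, so p(i) = V[i] @ c
    V = np.vander(np.arange(n), k + 1, increasing=True)
    # Unknowns x = (c_0, ..., c_k, d); objective: minimize d
    obj = np.zeros(k + 2)
    obj[-1] = 1.0
    # f(i) - p(i) <= d   rewritten as   -V[i] @ c - d <= -f(i)
    # p(i) - f(i) <= d   rewritten as    V[i] @ c - d <=  f(i)
    ones = np.ones((n, 1))
    A_ub = np.vstack([np.hstack([-V, -ones]), np.hstack([V, -ones])])
    b_ub = np.concatenate([-f, f])
    # Coefficients are free; d >= 0 (linprog's default bounds are nonnegative,
    # so the coefficient bounds must be relaxed explicitly)
    bounds = [(None, None)] * (k + 1) + [(0, None)]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:k + 1], res.x[-1]  # coefficients c_0..c_k, optimal L_inf error
```

For example, fitting a degree-0 polynomial to the samples $[0, 2]$ should return the constant $1$ with error $1$, and any exactly-linear data should be fit with error $0$ at $k=1$.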

I don't know what the numerical stability issues might be.

Another practical approach is to minimize an $L_p$ norm instead. You could start by minimizing the $L_2$ norm, using any standard method (e.g., least squares). Then, starting from that solution, minimize the $L_p$ norm for some larger value of $p$, say using gradient descent. After increasing $p$ a few times, the resulting solution will often be a reasonable approximation of the minimax fit, even if not optimal.
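A minimal sketch of that iteration, under assumed choices of my own: the schedule of exponents `ps`, step count, and learning rate are illustrative and untuned, and the grid is rescaled to $[0, 1]$ (a common conditioning trick), so the returned coefficients are in the rescaled variable $x = i/(n-1)$:

```python
import numpy as np

def lp_polyfit(f_vals, k, ps=(4, 8, 16), steps=500, lr=1e-3):
    """Approximate the minimax fit by warm-starting from least squares (L_2)
    and then running gradient descent on increasingly "peaked" L_p objectives."""
    n = len(f_vals)
    x = np.arange(n) / max(n - 1, 1)           # rescale grid to [0, 1]
    V = np.vander(x, k + 1, increasing=True)
    f = np.asarray(f_vals, dtype=float)
    c, *_ = np.linalg.lstsq(V, f, rcond=None)  # L_2 warm start
    for p in ps:
        for _ in range(steps):
            r = V @ c - f                      # residuals at the grid points
            # gradient of sum_i |r_i|^p with respect to the coefficients c
            g = V.T @ (p * np.sign(r) * np.abs(r) ** (p - 1))
            c -= lr * g
    return c  # coefficients in the rescaled variable x = i/(n-1)
```

The step size would need care in practice: for large $p$, residuals above $1$ make the gradient very steep, so a line search or per-$p$ learning-rate schedule would be more robust than the fixed `lr` used here.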

Context

StackExchange Computer Science Q#67590, answer score: 2
