Laurent Hoeltgen

The Abstract Interpolation Problem

2020-10-21, update: 2024-08-25 ·  4 min read  ·  Mathematics

We present the abstract interpolation problem and show how it is related to common interpolation and approximation tasks.

Let $\mathcal{F}$ be a vector space of functions from $\mathbb{R}$ to $\mathbb{R}$ (e.g. smooth functions, polynomials, ...) and let $\ell_0, \ell_1, \ldots, \ell_n$ be functionals defined on this space $\mathcal{F}$ that map to $\mathbb{R}$. We assume further that $\mathcal{F}_n \subseteq \mathcal{F}$ is an $(n+1)$-dimensional subspace of $\mathcal{F}$ with a basis $\{\varphi_i\}_{i=0}^n$. This implies that $\mathcal{F}_n$ can be written in the following way.

$$ \mathcal{F}_n := \left\{ \varphi \in \mathcal{F} \;\middle|\; \varphi = \sum_{j=0}^{n} \alpha_j \varphi_j,\quad \alpha_j \in \mathbb{R}\ \forall j \right\} $$

Nearly all interpolation and approximation problems can be reduced to the following setting: we are given the quantities $\ell_i\left(f\right)$ for an unknown function $f \in \mathcal{F}$ and seek a $\varphi \in \mathcal{F}_n$ such that $\ell_i\left(\varphi\right) = \ell_i\left(f\right)$ holds for every $i = 0, \ldots, n$.

You may think of $f$ as a physical process or a huge data set. The functionals $\ell_i$ can then be interpreted as measurements on your process (e.g. temperature, speed, ...) or sample extractions from your data set. In any case, $\ell_i\left(f\right)$ is data that we are given.

Since $\varphi = \sum_j \alpha_j \varphi_j$, the interpolation requirement can be rewritten as follows.

$$ \ell_i\left(\varphi\right) = \ell_i\left( \sum_j \alpha_j \varphi_j \right) \overset{!}{=} \ell_i\left(f\right) \quad \forall i $$

The previous problem is sometimes referred to as nonlinear interpolation, and without further assumptions on $\mathcal{F}$ or the $\ell_i$ it will be difficult to solve. A common restriction is to assume that all the $\ell_i$ are linear functionals. This leads us to the following formulation.

$$ \sum_{j=0}^{n} \alpha_j \ell_i\left(\varphi_j\right) = \ell_i\left(f\right) \quad \forall i = 0, \ldots, n $$

Let us remark that the $\ell_i\left(\varphi_j\right)$ are known for all $i$ and $j$, and that the $\ell_i\left(f\right)$ are given and known as well. In view of our examples above, the $\ell_i\left(\varphi_j\right)$ can be understood as reference measurements or reference samples.

The interpolation problem has now been reduced to a linear system of $n+1$ equations with $n+1$ unknowns $\alpha_j$. It can also be written in matrix-vector form as follows.

$$ \begin{pmatrix} \ell_0\left(\varphi_0\right) & \ell_0\left(\varphi_1\right) & \ldots & \ell_0\left(\varphi_n\right) \\ \ell_1\left(\varphi_0\right) & \ell_1\left(\varphi_1\right) & \ldots & \ell_1\left(\varphi_n\right) \\ \vdots & \vdots & \ddots & \vdots \\ \ell_n\left(\varphi_0\right) & \ell_n\left(\varphi_1\right) & \ldots & \ell_n\left(\varphi_n\right) \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix} = \begin{pmatrix} \ell_0\left(f\right) \\ \ell_1\left(f\right) \\ \vdots \\ \ell_n\left(f\right) \end{pmatrix} $$

Thus, the interpolation problem with linear functionals is solvable whenever the linear system above has a solution. If the system does not have a solution, we may still consider solving it in a least squares sense. In the latter case we have an approximation task.
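This reduction translates almost directly into code. The following is a minimal sketch (the function and variable names are my own, not from the article): it assembles the matrix with entries $\ell_i(\varphi_j)$ from functionals and basis functions given as Python callables, solves the square system, and falls back to least squares when no exact solution exists.

```python
import numpy as np

def solve_abstract_interpolation(functionals, basis, data):
    """Solve sum_j alpha_j * l_i(phi_j) = l_i(f) for the coefficients alpha.

    functionals: list of callables l_i, each acting on a function
    basis:       list of basis functions phi_j
    data:        the given values l_i(f)
    """
    # System matrix A[i, j] = l_i(phi_j).
    A = np.array([[l(phi) for phi in basis] for l in functionals])
    b = np.asarray(data, dtype=float)
    try:
        # Exact interpolation: solve the square linear system.
        return np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        # Singular system: fall back to least squares (approximation task).
        return np.linalg.lstsq(A, b, rcond=None)[0]
```

For instance, with point evaluation functionals at the nodes $0$ and $1$, the basis $\{1, x\}$, and data generated by $f(x) = 2x + 1$, the sketch recovers the coefficients $\alpha = (1, 2)$.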

Let us briefly recall the difference between an interpolation and an approximation problem. In an interpolation problem we can reach the given data, i.e. we have $\ell_i\left(\sum_j \alpha_j \varphi_j\right) = \ell_i\left(f\right)$ for all $i$. In an approximation task we will not necessarily have equality, but instead minimize the distance between the $\ell_i\left(\sum_j \alpha_j \varphi_j\right)$ and the $\ell_i\left(f\right)$.

It goes without saying that a good choice for the $\ell_i$ and $\varphi_j$ has a tremendous impact on the existence and properties of the solutions. Ideally, we want to choose the $\ell_i$ such that the system matrix is either diagonal or at least sparse and banded. The $\varphi_j$ should reflect the underlying process/data as well as possible. It does not make sense to use smooth functions if your underlying data has known discontinuities!

Example

We assume that $\ell_i(f) := f(x_i)$ for some known and given numbers $x_i$. This means that $\ell_i$ is the point evaluation functional at $x_i$. Note that evaluating functions at a fixed point is a linear operation. For well-posedness reasons we assume that all the $x_i$ are pairwise different. Also, for $\mathcal{F}$ we select the space of all polynomials, and $\mathcal{F}_n$ shall be the space of all polynomials up to degree $n$. In addition, we set $\varphi_j(x) := x^j$ (i.e. our basis functions are the monomials). The abstract interpolation problem can then be formulated as the following linear system.

$$ \begin{pmatrix} 1 & x_0 & \ldots & x_0^n \\ 1 & x_1 & \ldots & x_1^n \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & \ldots & x_n^n \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix} = \begin{pmatrix} f(x_0) \\ f(x_1) \\ \vdots \\ f(x_n) \end{pmatrix} $$

The linear system above performs polynomial interpolation, and the system matrix is the famous Vandermonde matrix. The Vandermonde matrix is invertible whenever all the $x_i$ are pairwise different (hence our assumption above!).
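As a small numerical illustration of this example (the nodes and data values below are made up), numpy's `np.vander` builds exactly this matrix when called with `increasing=True`.

```python
import numpy as np

# Hypothetical nodes and data; the x_i just need to be pairwise different.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 5.0])    # plays the role of the given values f(x_i)

# Vandermonde matrix with increasing powers: V[i, j] = x_i**j.
V = np.vander(x, increasing=True)
alpha = np.linalg.solve(V, y)    # monomial coefficients alpha_j
```

The interpolating polynomial is $p(x) = \alpha_0 + \alpha_1 x + \alpha_2 x^2$; since the data above was generated by $x^2 + 1$, the unique degree-2 interpolant gives $\alpha = (1, 0, 1)$.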

Small remark: if you use not only function evaluations but also evaluations of derivatives, you obtain Hermite interpolation.
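Sticking with the monomial basis, here is a tiny sketch of that idea (the node and data values are chosen arbitrarily): a row of the system matrix belonging to a derivative functional simply contains the derivatives of the basis functions.

```python
import numpy as np

# Hermite-type data at the single node x_0 = 1 (made-up values):
# l_0(f) = f(1) = 2 and l_1(f) = f'(1) = 3.
# With the basis functions 1 and x, the rows of the matrix are
# [phi_j(1)] and [phi_j'(1)].
A = np.array([[1.0, 1.0],    # [1, x] evaluated at x = 1
              [0.0, 1.0]])   # their derivatives [0, 1] at x = 1
b = np.array([2.0, 3.0])
alpha = np.linalg.solve(A, b)  # p(x) = alpha_0 + alpha_1 * x
```

The resulting line $p(x) = -1 + 3x$ matches both the prescribed value and the prescribed slope at the node.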

Final remarks

If the presentation in this article reminded you of neural networks and deep learning models, there is a reason for this. The underlying idea is the same. Neural networks are approximation problems with fancy functionals $\ell_i$ and spaces $\mathcal{F}$.


How to cite this page

Hoeltgen, Laurent: The Abstract Interpolation Problem, 2024-08-25.

BibLaTeX code:
@online{abstract-interpolation,
  author   = {Hoeltgen, Laurent},
  title    = {The Abstract Interpolation Problem},
  date     = {2024-08-25},
  language = {english},
  url      = {https://www.laurenthoeltgen.name/content/blog/
              abstract-interpolation}
}
