Exercise 3: Linear Tikhonov Inversion

Liz Maag-Capriotti

Linear Tikhonov Inversion

from geoscilabs.inversion.LinearInversionDirect import LinearInversionDirectApp
from ipywidgets import interact, FloatSlider, ToggleButtons, IntSlider, FloatText, IntText
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['font.size'] = 14
app = LinearInversionDirectApp()


This app is based upon the inversion tutorial: "INVERSION FOR APPLIED GEOPHYSICS" by Oldenburg and Li (2005).

Douglas W. Oldenburg and Yaoguo Li (2005) 5. Inversion for Applied Geophysics: A Tutorial. Near-Surface Geophysics: pp. 89-150. eISBN: 978-1-56080-171-9 print ISBN: 978-1-56080-130-6


We illustrate how a generic linear inverse problem can be solved using a Tikhonov approach. The default parameters provided for the Forward and Inverse problems below generate a reasonable example for illustrating the inversion, but the learning comes when these parameters are changed and the outcomes are observed.


The app is divided into two sections:

Forward Problem

  • Mathematical Background for the Forward Problem

  • Step 1: Create a model, $\mathbf{m}$.

  • Step 2: Generate a sensitivity matrix, $\mathbf{G}$.

  • Step 3: Simulate data ($\mathbf{d} = \mathbf{G}\mathbf{m}$) and add noise.

These steps are explored individually but additional text is given in 2 Linear Tikhonov Inversion. For convenience, the widgets used to carry out all three steps are consolidated at the end of the section. A brief mathematical description is also provided.

Inverse Problem

  • Mathematical Background for the Inverse Problem

  • Step 4: Invert the data, and explore the results

Here we provide widgets to adjust the parameters for the inverse problem. Some basic information is provided but details about the parameters are provided in the text 2 Linear Tikhonov Inversion.

Mathematical Background for the Forward Problem

Let $g_j(x)$ denote the kernel function for the $j$th datum. With a given model $m(x)$, the $j$th datum can be computed by solving the following integral equation:

Equation 1.7 Generic representation of a linear functional for forward mapping, where $d_j$ is the $j$th datum, $g_j$ the associated kernel function, and $m$ the model.

$$d_j = \int_0^1 g_j(x)\, m(x)\, dx$$

Equation 2.1 Oscillatory kernel functions $g_j(x)$, one for each datum $d_j$, that decay with depth. The rate of decay is controlled by $p_j$, and $q_j$ controls the frequency.

$$g_j(x) = e^{p_j x}\cos(2\pi q_j x)$$

is the $j$th kernel function. By integrating $g_j(x)$ over cells of width $\Delta x$ and using the midpoint rule we obtain the sensitivities

Equation 2.19 Oscillatory kernel functions, defined in Equation 2.1, after integrating over cells of width $\Delta x$ and using the midpoint rule for the discretized calculations in LinearTikhonovInversion_App.

$$\mathbf{g}_j = e^{p_j \mathbf{x}} \cos(2\pi q_j \mathbf{x})\, \Delta x$$


  • $\mathbf{g}_j$: $j$th row vector of the sensitivity matrix ($1 \times M$)
  • $\mathbf{x}$: model locations ($1 \times M$)
  • $p_j$: decay constant ($<0$)
  • $q_j$: oscillation constant ($>0$)

By stacking multiple rows of $\mathbf{g}_j$, we obtain the sensitivity matrix $\mathbf{G}$:

Equation 2.20 The sensitivity matrix $\mathbf{G}$ is created by stacking multiple rows of kernel functions $\mathbf{g}_j$ (LinearTikhonovInversion_App).

$$\mathbf{G} = \begin{bmatrix} \mathbf{g}_1\\ \vdots\\ \mathbf{g}_{N} \end{bmatrix}$$

Here, the size of the matrix $\mathbf{G}$ is $(N \times M)$. Finally, the data $\mathbf{d}$ can be written as a linear equation:

Equation 2.3 Expression for the linear forward problem in Equation 2.2, expanded for the $N$-length data vector $\mathbf{d}$.

$$\mathbf{d} = \mathbf{G}\mathbf{m} = \begin{bmatrix} d_1\\ \vdots\\ d_{N} \end{bmatrix}$$

where $\mathbf{m}$ is the inversion model, a column vector of size $M \times 1$.

In real measurements there are various noise sources, and hence the observations, $\mathbf{d}^{obs}$, can be written as

Equation 2.5 The observed data is a combination of the clean data $\mathbf{d}$ and the noise $\mathbf{n}$.

$$\mathbf{d}^{obs} = \mathbf{d} + \mathbf{n} = \mathbf{G}\mathbf{m} + \mathbf{n}$$

Step 1: Create a model, $\mathbf{m}$

The model $m$ is a function defined on the interval $[0, 1]$ and discretized into $M$ equal intervals. It is the sum of: (a) a background $m_{background}$, (b) a boxcar $m1$, and (c) a Gaussian $m2$.

  • m_background : background value

The box car is defined by

  • m1 : amplitude
  • m1_center : center
  • m1_width : width

The Gaussian is defined by

  • m2 : amplitude
  • m2_center : center
  • m2_sigma : width of the Gaussian (defined by a standard deviation $\epsilon$)
  • M : number of model parameters
Q_model = app.interact_plot_model()
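The model construction in Step 1 can be sketched with NumPy. All numerical values below (M, amplitudes, centers, widths) are illustrative assumptions, not the app's defaults:

```python
import numpy as np

M = 100  # number of model parameters (assumed value)
x = np.linspace(0, 1, M)  # model domain [0, 1] discretized into M cells

# background value
m_background = 0.0

# boxcar: amplitude m1, centered at m1_center, of width m1_width
m1, m1_center, m1_width = 1.0, 0.35, 0.25
boxcar = m1 * (np.abs(x - m1_center) < m1_width / 2.0)

# Gaussian: amplitude m2, centered at m2_center, standard deviation m2_sigma
m2, m2_center, m2_sigma = 2.0, 0.75, 0.07
gaussian = m2 * np.exp(-((x - m2_center) ** 2) / (2.0 * m2_sigma ** 2))

# the model vector m is the sum of the three pieces
m = m_background + boxcar + gaussian
```

Changing the widget sliders in the app corresponds to changing these amplitude, center, and width values.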

Step 2: Generate a sensitivity kernel (or matrix), $\mathbf{G}$

By using the following app, we explore each row vector of the sensitivity matrix, $\mathbf{g}_j$. Parameters of the app are:

  • M: number of model parameters
  • N: number of data
  • p: decay constant (<0)
  • q: oscillation constant (>0)
  • ymin: minimum limit for the y-axis
  • ymax: maximum limit for the y-axis
  • show_singular: show singular values
Q_kernel = app.interact_plot_G()
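A minimal sketch of building $\mathbf{G}$ row by row from Equation 2.19, assuming for illustration that each datum gets its own linearly spaced decay and oscillation constants (the app's actual parameterization of $p_j$ and $q_j$ may differ):

```python
import numpy as np

M, N = 100, 20  # model cells and number of data (assumed values)
x = np.linspace(0, 1, M)
dx = x[1] - x[0]

# assumed per-datum constants: p_j < 0 (decay), q_j > 0 (oscillation)
p = -0.25 * np.arange(1, N + 1)
q = 0.25 * np.arange(1, N + 1)

# Equation 2.19: row j is g_j = exp(p_j x) cos(2 pi q_j x) * dx (midpoint rule)
G = np.exp(np.outer(p, x)) * np.cos(2 * np.pi * np.outer(q, x)) * dx
```

Each row of `G` is one kernel $\mathbf{g}_j$; stacking the $N$ rows gives the $(N \times M)$ matrix of Equation 2.20.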

Step 3: Simulate data, $\mathbf{d} = \mathbf{G}\mathbf{m}$, and add noise

The $j$th datum is the inner product of the $j$th kernel $g_j(x)$ and the model $m(x)$. In discrete form it can be written as the dot product of the vector $\mathbf{g}_j$ and the model vector $\mathbf{m}$.

Equation 2.2 The linear forward problem in Equation 1.7 evaluated for a discretized model on a 1D mesh.

$$\begin{aligned} d_j &= \int_0^{x_1} g_j(x)\, m_1\, dx + \int_{x_1}^{x_2} g_j(x)\, m_2\, dx + \dots \\ &= \sum_{i=1}^{M} \left( \int_{x_{i-1}}^{x_i} g_j(x)\, dx \right) m_i \\ d_j &= \mathbf{g}_j \mathbf{m} \end{aligned}$$

If there are $N$ data, they can be written as a column vector, $\mathbf{d}$:

Equation 2.3 Expression for the linear forward problem in Equation 2.2, expanded for the $N$-length data vector $\mathbf{d}$.

$$\mathbf{d} = \mathbf{G}\mathbf{m} = \begin{bmatrix} d_1\\ \vdots\\ d_{N} \end{bmatrix}$$

Adding Noise

Observational data are always contaminated with noise. Here we add Gaussian noise $N(0, \epsilon_j)$, with zero mean and standard deviation $\epsilon_j$, where we choose

Equation 2.6 Definition of the standard deviation as a percentage of the datum and a floor value.

$$\epsilon_j = \%|d_j| + \nu_j$$
Q_data = app.interact_plot_data()
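Steps 1 through 3 combine into a short simulation sketch. The kernel constants, model values, and the 5% / 0.01 uncertainty choices below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# forward problem setup (assumed values)
M, N = 100, 20
x = np.linspace(0, 1, M)
dx = x[1] - x[0]
p = -0.25 * np.arange(1, N + 1)
q = 0.25 * np.arange(1, N + 1)
G = np.exp(np.outer(p, x)) * np.cos(2 * np.pi * np.outer(q, x)) * dx

# illustrative model: boxcar plus Gaussian on a zero background
m = 1.0 * (np.abs(x - 0.35) < 0.125) \
    + 2.0 * np.exp(-((x - 0.75) ** 2) / (2 * 0.07 ** 2))

d = G @ m  # clean data, d = G m

# Equation 2.6: standard deviation = percent of |d_j| plus a floor
percent, floor = 0.05, 0.01
eps = percent * np.abs(d) + floor

# observed data contaminated with Gaussian noise N(0, eps_j)
d_obs = d + rng.normal(0.0, eps)
```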

Composite Widget for Forward Modelling


Mathematical Background for the Inverse Problem

In the inverse problem we attempt to find the model $\mathbf{m}$ that gave rise to the observed data $\mathbf{d}^{obs}$. The inverse problem is formulated as an optimization problem to minimize:

Equation 2.17 Objective function for the inverse problem, which combines the data misfit (Equation 2.7) and a chosen model norm (e.g. Equation 2.12, Equation 2.13, Equation 2.14) with a trade-off parameter $\beta$ to balance the relative influence of these terms.

$$\phi(m) = \phi_d(m) + \beta \phi_m(m)$$


  • $\phi_d$: data misfit
  • $\phi_m$: model regularization
  • $\beta$: trade-off (Tikhonov) parameter, $0 < \beta < \infty$

Data misfit is defined as

Equation 2.7 The data misfit function measures the difference between each predicted datum $d_j$ and observation $d_j^{obs}$, normalized by the estimated standard deviation $\epsilon_j$.

$$\phi_d = \sum_{j=1}^{N} \left( \frac{d_j - d_j^{obs}}{\epsilon_j} \right)^2$$

where $\epsilon_j$ is an estimate of the standard deviation of the $j$th datum, and $d_j = \mathbf{g}_j \mathbf{m}$.
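For a concrete sense of the misfit, here is a minimal sketch with hypothetical data (three data, each with an uncertainty of 0.1):

```python
import numpy as np

# hypothetical predicted data, observations, and uncertainties
d_pred = np.array([1.0, 2.0, 3.0])
d_obs = np.array([1.1, 1.9, 3.2])
eps = np.array([0.1, 0.1, 0.1])

# Equation 2.7: sum of squared normalized residuals
phi_d = np.sum(((d_pred - d_obs) / eps) ** 2)
# residuals are -1, 1, -2 standard deviations, so phi_d = 1 + 1 + 4 = 6
```

When each residual is about one standard deviation, $\phi_d \approx N$, which motivates the stopping criterion used below.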

The model regularization term, $\phi_m$, can be written as

Equation 2.14 Combination of the smallest (Equation 2.12) and flattest (Equation 2.13) model norms, where $\alpha_s$ and $\alpha_x$ are nonnegative constants used to adjust the relative importance of each term.

$$\phi_m = \alpha_s \int (m - m^{ref})^2\, dx + \alpha_x \int \left( \frac{d(m - m^{ref})}{dx} \right)^2 dx$$

The first term is referred to as the "smallness" term. Minimizing it generates a model that is close to a reference model $m^{ref}$. The second term penalizes roughness of the model. It is generically referred to as a "flattest" or "smoothness" term.
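A discrete version of Equation 2.14 can be sketched with an identity-based smallness operator and a first-difference flatness operator. The test model and $\alpha$ values here are illustrative assumptions:

```python
import numpy as np

M = 100
x = np.linspace(0, 1, M)
dx = x[1] - x[0]

# smallness: identity scaled so that ||Ws m||^2 approximates the integral of m^2
Ws = np.sqrt(dx) * np.eye(M)

# flatness: first differences scaled so that ||Wx m||^2 approximates
# the integral of (dm/dx)^2; shape is (M-1, M)
Wx = np.diff(np.eye(M), axis=0) / np.sqrt(dx)

alpha_s, alpha_x = 1e-4, 1.0
mref = np.zeros(M)
m = np.sin(2 * np.pi * x)  # illustrative test model

# Equation 2.14 in discrete form
phi_m = (alpha_s * np.sum((Ws @ (m - mref)) ** 2)
         + alpha_x * np.sum((Wx @ (m - mref)) ** 2))
```

For $m = \sin(2\pi x)$ the flatness integral is $\int_0^1 (2\pi\cos 2\pi x)^2 dx = 2\pi^2 \approx 19.7$, and the discrete `phi_m` lands close to that value.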

Step 4: Invert the data, and explore the results

In the inverse problem we define parameters needed to evaluate the data misfit and the model regularization terms. We then deal with parameters associated with the inversion.


  • mode: Run or Explore
    • Run: each click of the app runs n_beta inversions
    • Explore: does not run inversions; explores the results of the previously run inversions


  • percent: estimate uncertainty as a percentage of the data (%)

  • floor: estimate uncertainty floor

  • chifact: chi factor for the stopping criterion (chifact = 1 when $\phi_d^{\ast} = N$)

Model norm

  • mref: reference model

  • alpha_s: $\alpha_s$, weight for the smallness term

  • alpha_x: $\alpha_x$, weight for the smoothness term


  • beta_min: minimum $\beta$

  • beta_max: maximum $\beta$

  • n_beta: the number of $\beta$ values
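The $\beta$ sweep and the chifact stopping rule can be sketched end to end. Everything numerical here (kernel constants, model, uncertainties, $\alpha$ values, $\beta$ range) is an illustrative assumption, not the app's defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

# forward problem (assumed values)
M, N = 100, 20
x = np.linspace(0, 1, M)
dx = x[1] - x[0]
p = -0.25 * np.arange(1, N + 1)
q = 0.25 * np.arange(1, N + 1)
G = np.exp(np.outer(p, x)) * np.cos(2 * np.pi * np.outer(q, x)) * dx
m_true = 1.0 * (np.abs(x - 0.35) < 0.125) \
       + 2.0 * np.exp(-((x - 0.75) ** 2) / (2 * 0.07 ** 2))
eps = 0.05 * np.abs(G @ m_true) + 0.01  # percent + floor uncertainties
d_obs = G @ m_true + rng.normal(0.0, eps)

# data weighting and model norm (mref = 0 here)
Wd = np.diag(1.0 / eps)
Ws = np.sqrt(dx) * np.eye(M)                    # smallness
Wx = np.diff(np.eye(M), axis=0) / np.sqrt(dx)   # flatness
alpha_s, alpha_x = 1e-4, 1.0
Wm = alpha_s * Ws.T @ Ws + alpha_x * Wx.T @ Wx

# normal equations pieces for minimizing phi_d + beta * phi_m
A = (Wd @ G).T @ (Wd @ G)
b = (Wd @ G).T @ (Wd @ d_obs)

# sweep n_beta values from beta_max down to beta_min
betas = np.logspace(2, -4, 25)
phi_ds = []
for beta in betas:
    m_rec = np.linalg.solve(A + beta * Wm, b)
    phi_ds.append(np.sum(((G @ m_rec - d_obs) / eps) ** 2))
phi_ds = np.array(phi_ds)

# discrepancy principle: pick beta where phi_d is closest to chifact * N
chifact = 1.0
beta_best = betas[np.argmin(np.abs(phi_ds - chifact * N))]
```

Decreasing $\beta$ trades model regularization for data fit, so `phi_ds` decreases along the sweep; the chosen `beta_best` is where the misfit is closest to the target $N$.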

Plotting options

  • data: obs & pred or normalized misfit

    • obs & pred: show observed and predicted data
    • normalized misfit: show normalized misfit
  • tikhonov: phi_d & phi_m or phi_d vs phi_m

    • phi_d & phi_m: show $\phi_d$ and $\phi_m$ as functions of $\beta$
    • phi_d vs phi_m: show the Tikhonov curve
  • i_beta: $i$th $\beta$ value

  • scale: linear or log

    • linear: linear scale for plotting the third panel
    • log: log scale for plotting the third panel
  1. Oldenburg, D. W., & Li, Y. (2005). 5. Inversion for Applied Geophysics: A Tutorial. In Near-Surface Geophysics (pp. 89–150). Society of Exploration Geophysicists. doi:10.1190/1.9781560801719.ch5