Inverse problem

  1. History

  2. Conceptual understanding

  3. General statement of the problem

  4. Linear inverse problems

     Examples
       Unisolvent functions
       Earth's gravitational field
       Fredholm integral
       Computed tomography
       Diffraction tomography
       Doppler tomography (astrophysics)
       Riemann hypothesis
       Permeability matching in shale-gas reservoirs
       Deconvolution
     Mathematical and computational aspects
       Numerical solution of the optimization problem
       Stability, regularization and model discretization in infinite dimension

  5. Non-linear inverse problems

  6. Applications

  7. See also

     Academic journals 

  8. References

  9. Further reading

  10. External links

{{For|the use in geodesy|Inverse geodetic problem}}

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field.

It is called an inverse problem because it starts with the results and then calculates the causes. This is the inverse of a forward problem, which starts with the causes and then calculates the results.

Inverse problems are some of the most important mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They have wide application in system identification, optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, and many other fields.

History

One of the earliest examples of a solution to an inverse problem was discovered by Hermann Weyl and published in 1911, describing the asymptotic behavior of eigenvalues of the Laplace–Beltrami operator.[1] Today known as Weyl's law, it is perhaps most easily understood as an answer to the question of whether it is possible to hear the shape of a drum. Weyl conjectured that the eigenfrequencies of a drum would be related to the area and perimeter of the drum by a particular equation, which has since been improved upon by many mathematicians.

The field of inverse problems was later touched on by the Soviet-Armenian physicist Viktor Ambartsumian.[2][3]

While still a student, Ambartsumian thoroughly studied the theory of atomic structure, the formation of energy levels, and the Schrödinger equation and its properties, and when he mastered the theory of eigenvalues of differential equations, he pointed out the apparent analogy between discrete energy levels and the eigenvalues of differential equations. He then asked: given a family of eigenvalues, is it possible to find the form of the equations whose eigenvalues they are? Essentially Ambartsumian was examining the inverse Sturm–Liouville problem, which dealt with determining the equations of a vibrating string. This paper was published in 1929 in the German physics journal Zeitschrift für Physik and remained in obscurity for a rather long time. Describing this situation after many decades, Ambartsumian said, "If an astronomer publishes an article with a mathematical content in a physics journal, then the most likely thing that will happen to it is oblivion."

Nonetheless, toward the end of the Second World War, this article, written by the 20-year-old Ambartsumian, was found by Swedish mathematicians and became the starting point for a whole area of research on inverse problems, the foundation of an entire discipline.

Important efforts were then devoted to a "direct solution" of the inverse scattering problem, especially by Gelfand and Levitan in the Soviet Union.[4] They proposed an analytic constructive method for determining the solution. When computers became available, some authors investigated the possibility of applying their approach to similar problems such as the inverse problem in the 1D wave equation. But it rapidly turned out that the inversion is an unstable process: noise and errors can be tremendously amplified, making a direct solution hardly practicable.

Then the least-squares and probabilistic approaches came in.

Conceptual understanding

The inverse problem can be conceptually formulated as follows:

Data → Model parameters

The inverse problem is considered the "inverse" to the forward problem which relates the model parameters to the data that we observe:

Model parameters → Data

The transformation from data to model parameters (or vice versa) is a result of the interaction of a physical system with the object that we wish to infer properties about. In other words, the transformation is the physics that relates the physical quantity (i.e., the model parameters) to the observed data.

The table below shows some examples of physical systems, the governing physics, the physical quantity that we are interested in, and what we actually observe.

| Physical system | Governing equations | Physical quantity | Observed data |
| --- | --- | --- | --- |
| Earth's gravitational field | Newton's law of gravity | Density | Gravitational field |
| Earth's magnetic field (at the surface) | Maxwell's equations | Magnetic susceptibility | Magnetic field |
| Seismic waves (from earthquakes) | Wave equation | Wave-speed (density) | Particle velocity |

Linear algebra is useful in understanding the physical and mathematical construction of inverse problems, because of the presence of the transformation or "mapping" of data to the model parameters.

General statement of the problem

The objective of an inverse problem is to find the best model parameters {{tmath|m}} such that (at least approximately)

    d = G(m)

where {{tmath|G}} is an operator describing the explicit relationship between the observed data, {{tmath|d}}, and the model parameters. In various contexts, the operator {{tmath|G}} is called the forward operator, observation operator, or observation function. In the most general context, {{tmath|G}} represents the governing equations that relate the model parameters to the observed data (i.e., the governing physics).

When the operator {{tmath|G}} is linear, the inverse problem is linear. Otherwise, as is most often the case, the inverse problem is nonlinear.

Also, models cannot always be described by a finite number of parameters, for example if we look for distributed parameters: in such cases the goal of the inverse problem is to retrieve one or several functions. Such inverse problems are inverse problems of infinite dimension.

Finally, the observed data, {{tmath|d}}, are most often corrupted by noise, so that the equation above may not make sense (it may, for instance, have no solution). In practice, we are therefore primarily interested in finding the model that "best matches the data": this is the basis of the least-squares approach. We come back to this point in the next section.

As mentioned in the historical introduction, instability, or ill-posedness, is one of the characteristics of inverse problems. Hence, each time we attempt to compute a solution, it is essential to address the question: to what extent can we trust the computed solution? We will come back to this crucial question, especially in the "mathematical aspects" sections.

Linear inverse problems

In the case of a discrete linear inverse problem describing a linear system, {{tmath|d}} (the data) and {{tmath|m}} (the best model) are vectors, and the problem can be written as

    d = G m

where {{tmath|G}} is a matrix (an operator), often called the observation matrix.

Examples

Unisolvent functions

{{details|Unisolvent functions}}

In mathematics, the simplest linear inverse problems are when one has a set of unisolvent functions, meaning a set of {{tmath|n}} functions such that evaluating them at {{tmath|n}} distinct points yields a set of linearly independent vectors. This means that given a linear combination of these functions, the coefficients can be computed by arranging the vectors as the columns of a matrix and then inverting this matrix. The simplest example of unisolvent functions are polynomials (of degree at most {{tmath|d}}), which are unisolvent by the unisolvence theorem. Concretely, this is done by inverting the Vandermonde matrix.

For example, given a polynomial of degree at most 1, {{tmath|p(x) = a_0 + a_1 x}}, evaluating it at 0 yields {{tmath|a_0}}, while evaluating it at 1 yields {{tmath|a_0 + a_1}}. Thus the coefficients can be computed from the values as {{tmath|a_0 = p(0)}} and {{tmath|a_1 = p(1) - p(0)}}. Expressed in matrix form, this is:

    \begin{pmatrix} p(0) \\ p(1) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}

The observation matrix {{tmath|G}} is the Vandermonde matrix for the values 0 and 1, and multiplying the matrices yields:

    \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} a_0 \\ a_0 + a_1 \end{pmatrix}

The observation matrix {{tmath|G}} is invertible, and the inverse is:

    G^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}

Thus:

    \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} p(0) \\ p(1) \end{pmatrix}
For larger systems of equations, the matrix form is easier to compute and analyze than the equivalent system of equations.
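This degree-1 example can be checked numerically. A minimal sketch (the test polynomial 3 + 2x is invented, not from the text):

```python
import numpy as np

# Vandermonde observation matrix for evaluating p(x) = a0 + a1*x
# at the points x = 0 and x = 1 (rows are [1, x]).
G = np.array([[1.0, 0.0],
              [1.0, 1.0]])

a_true = np.array([3.0, 2.0])   # coefficients of p(x) = 3 + 2x

d = G @ a_true                  # forward problem: the values [p(0), p(1)]

# Inverse problem: recover the coefficients by inverting G.
a_recovered = np.linalg.inv(G) @ d

print(a_recovered)              # [3. 2.]
```

The same recovery works for any degree: `numpy.vander` builds the observation matrix from the evaluation points.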

Earth's gravitational field

Only a few physical systems are actually linear with respect to the model parameters. One such system from geophysics is that of the Earth's gravitational field. The Earth's gravitational field is determined by the density distribution of the Earth in the subsurface. Because the lithology of the Earth changes quite significantly, we are able to observe minute differences in the Earth's gravitational field on the surface of the Earth. From our understanding of gravity (Newton's Law of Gravitation), we know that the mathematical expression for gravity is:

    g = Gm / r^2

where {{tmath|g}} is a measure of the local gravitational acceleration, {{tmath|G}} is the universal gravitational constant, {{tmath|m}} is the local mass (which is related to density) of the rock in the subsurface and {{tmath|r}} is the distance from the mass to the observation point.

By discretizing the above expression, we are able to relate the discrete data observations on the surface of the Earth to the discrete model parameters (density) in the subsurface that we wish to know more about. For example, consider the case where we have 5 measurements on the surface of the Earth. In this case, our data vector, {{tmath|d}}, is a column vector of dimension (5×1). We also know that we only have five unknown masses in the subsurface (unrealistic but used to demonstrate the concept). Thus, we can construct the linear system relating the five unknown masses to the five data points as follows:

    d = G m

The system has five equations, {{tmath|d_i}}, with five unknowns, {{tmath|m_j}}. To solve for the model parameters that fit our data, we might be able to invert the matrix {{tmath|G}} to directly convert the measurements into our model parameters. For example:

    m = G^{-1} d

However, not all square matrices are invertible ({{tmath|G}} is almost never invertible). This is because we are not guaranteed to have enough information to uniquely determine the solution to the given equations unless we have independent measurements (i.e. each measurement adds unique information to the system). It's important to note that in most physical systems, we do not ever have enough information to uniquely constrain our solutions because the observation matrix does not contain unique equations. From a linear algebra perspective, the matrix {{tmath|G}} is rank deficient (i.e. has zero eigenvalues), meaning that {{tmath|G}} is not invertible. Further, if we add additional observations to our matrix (i.e. more equations), then the matrix {{tmath|G}} is no longer square. Even then, we're not guaranteed to have full rank in the observation matrix. Therefore, most inverse problems are considered to be underdetermined, meaning that we do not have unique solutions to the inverse problem. If we have a full-rank system, then our solution may be unique. Overdetermined systems (more equations than unknowns) have other issues.
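A small numerical illustration of this rank deficiency (the matrix below is invented; its third row is the sum of the first two, so the third "measurement" adds no new information):

```python
import numpy as np

G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])    # row 3 = row 1 + row 2

print(np.linalg.matrix_rank(G))   # 2: G is rank deficient, hence singular

m_true = np.array([1.0, -1.0, 2.0])
d = G @ m_true

# np.linalg.inv(G) would raise LinAlgError here. The Moore-Penrose
# pseudo-inverse instead returns the minimum-norm model fitting the data:
# it reproduces d exactly, but it need not equal m_true.
m_pinv = np.linalg.pinv(G) @ d
print(np.allclose(G @ m_pinv, d))   # True: the data are matched
print(np.allclose(m_pinv, m_true))  # False: the solution is not unique
```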

Because we cannot directly invert the observation matrix, we use methods from optimization to solve the inverse problem. To do so, we define a goal, also known as an objective function, for the inverse problem. The goal is a functional that measures how closely the predicted data from the recovered model fit the observed data. In the case where we have perfect data (i.e. no noise) and perfect physical understanding (i.e. we know the physics) then the recovered model should fit the observed data perfectly. The standard objective function, {{tmath|\varphi}}, is usually of the form:

    \varphi(m) = \| d - Gm \|_2^2

which represents the 2-norm of the misfit between the observed data and the predicted data from the model. We use the 2-norm here as a generic measurement of the distance between the predicted data and the observed data, but other norms may be used. The goal of the objective function is to minimize the difference between the predicted and observed data.

To minimize the objective function (i.e. solve the inverse problem) we compute the gradient of the objective function using the same rationale as we would to minimize a function of only one variable. The gradient of the objective function is:

    \nabla_m \varphi = 2 G^\mathrm{T} G m - 2 G^\mathrm{T} d = 0

where {{tmath|G^\mathrm{T} }} denotes the matrix transpose of {{tmath|G}}. This equation simplifies to:

    G^\mathrm{T} G m = G^\mathrm{T} d

After rearrangement, this becomes:

    m = (G^\mathrm{T} G)^{-1} G^\mathrm{T} d

This expression is known as the normal-equation solution and gives us a possible solution to the inverse problem. It is equivalent to ordinary least squares.
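As a sketch of the normal-equation solution (the 8×3 system below is randomly generated purely for illustration), the explicit formula agrees with a standard least-squares routine:

```python
import numpy as np

rng = np.random.default_rng(0)

G = rng.normal(size=(8, 3))          # hypothetical full-rank observation matrix
m_true = np.array([2.0, -1.0, 0.5])
d = G @ m_true                       # noise-free data, for simplicity

# Normal-equation solution m = (G^T G)^{-1} G^T d,
# computed via a linear solve rather than an explicit inverse.
m_normal = np.linalg.solve(G.T @ G, G.T @ d)

# The standard least-squares routine gives the same answer.
m_lstsq, *_ = np.linalg.lstsq(G, d, rcond=None)

print(np.allclose(m_normal, m_true))   # True
print(np.allclose(m_normal, m_lstsq))  # True
```

In practice `lstsq` (QR/SVD based) is preferred to forming the normal equations, since computing G^T G squares the condition number of the problem.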

Additionally, we usually know that our data have random variations caused by random noise or, worse yet, coherent noise. In any case, errors in the observed data introduce errors in the recovered model parameters that we obtain by solving the inverse problem. To mitigate these errors, we may want to constrain possible solutions to emphasize certain possible features in our models. This type of constraint is known as regularization.

Very similar to the least-squares approach is the probabilistic approach: if we know the statistics of the noise that contaminates the data, we can think of seeking the most likely model m, that is, the model that matches the maximum-likelihood criterion. If the noise is Gaussian, the maximum-likelihood criterion appears as a least-squares criterion, the Euclidean scalar product in data space being replaced by a scalar product involving the covariance of the noise. Also, should prior information on model parameters be available, we could think of using Bayesian inference to formulate the solution of the inverse problem. This approach is described in detail in Tarantola's book.[5] We can also introduce constraints in the space of models to integrate prior information on the model. Finally, the Euclidean norm, used to quantify the data misfit, is known to be very sensitive to outliers: in such cases the L1 norm should be preferred to the L2 (Euclidean) norm. But then, the minimization of the associated objective function can be very difficult.
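Tikhonov regularization is the simplest concrete form of the regularization mentioned above. A hedged sketch (the nearly collinear matrix and the noise values are invented for illustration):

```python
import numpy as np

# Invented ill-conditioned problem: the columns of G are nearly collinear,
# so G^T G has one tiny eigenvalue.
G = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
m_true = np.array([1.0, 1.0])
noise = np.array([1e-3, -1e-3, 1e-3])    # small, fixed data errors
d = G @ m_true + noise

# Plain normal equations: the tiny eigenvalue amplifies the noise.
m_plain = np.linalg.solve(G.T @ G, G.T @ d)

# Tikhonov regularization: minimize ||d - G m||^2 + eps * ||m||^2,
# i.e. solve (G^T G + eps * I) m = G^T d with a small eps > 0.
eps = 1e-4
m_tik = np.linalg.solve(G.T @ G + eps * np.eye(2), G.T @ d)

print(np.linalg.norm(m_plain - m_true))  # order 10: noise blown up
print(np.linalg.norm(m_tik - m_true))    # order 1e-3: stable recovery
```

The regularization trades a small bias in the well-determined directions for a large reduction of noise amplification in the poorly determined one.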

Fredholm integral

One central example of a linear inverse problem is provided by a Fredholm integral equation of the first kind:

    d(x) = \int_a^b g(x, y)\, m(y)\, dy

For sufficiently smooth {{tmath|g}}, the operator defined above is compact on reasonable Banach spaces such as {{tmath|L^p}} spaces. Even if the mapping is injective, its inverse will not be continuous. (However, by the bounded inverse theorem, if the mapping is bijective, then the inverse will be bounded (i.e. continuous).) Thus small errors in the data {{tmath|d}} are greatly amplified in the solution {{tmath|m}}. In this sense the inverse problem of inferring {{tmath|m}} from measured {{tmath|d}} is ill-posed.

To obtain a numerical solution, the integral must be approximated using quadrature, and the data sampled at discrete points. The resulting system of linear equations will be ill-conditioned.
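A hedged numerical sketch of this ill-conditioning (the smooth kernel exp(-(x - y)^2) and the midpoint rule are chosen purely for illustration):

```python
import numpy as np

def fredholm_matrix(n):
    """Midpoint-rule discretization of d(x) = integral_0^1 g(x, y) m(y) dy
    with the smooth kernel g(x, y) = exp(-(x - y)**2)."""
    y = (np.arange(n) + 0.5) / n                          # quadrature nodes
    return np.exp(-(y[:, None] - y[None, :]) ** 2) / n    # kernel values * weight

conds = {n: np.linalg.cond(fredholm_matrix(n)) for n in (5, 10, 20, 40)}
for n, c in conds.items():
    print(n, c)
# The condition number grows explosively with n: refining the grid makes
# the discrete system inherit the ill-posedness of the continuous problem.
```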

Computed tomography

Another example is the inversion of the Radon transform, essential to tomographic reconstruction for X-ray computed tomography. Here a function (initially of two variables) is deduced from its integrals along all possible lines. Although from a theoretical point of view many linear inverse problems are well understood, problems involving the Radon transform and its generalisations still present many theoretical challenges with questions of sufficiency of data still unresolved. Such problems include incomplete data for the x-ray transform in three dimensions and problems involving the generalisation of the x-ray transform to tensor fields. Solutions explored include Algebraic Reconstruction Technique, filtered backprojection, and as computing power has increased, iterative reconstruction methods such as iterative Sparse Asymptotic Minimum Variance[6].

Diffraction tomography

Diffraction tomography is a classical linear inverse problem in exploration seismology: the amplitude recorded at one time for a given source-receiver pair is the sum of contributions arising from points such that the sum of the distances, measured in traveltimes, from the source and the receiver, respectively, is equal to the corresponding recording time. Should the propagation velocity be constant, such points are distributed on an ellipsoid. The inverse problem consists in retrieving the distribution of diffracting points from the seismograms recorded along the survey, the velocity distribution being known. This problem thus appears as very similar to computed X-ray tomography: instead of inverting for quantities summed along lines, we have to invert for quantities summed along an ellipsoid-like surface. A direct solution was originally proposed by Beylkin and Lambaré et al.:[7] these works were the starting points of approaches known as amplitude-preserved migration (see Beylkin[8][9] and Bleistein[10]). Other methods are based on the least-squares approach (see Lailly[11], Tarantola[12]): they are known as least-squares migration.[13]

Doppler tomography (astrophysics)

If we consider a rotating stellar object, the lines we can observe on a spectral profile will be shifted due to the Doppler effect. Doppler tomography aims at converting the information contained in spectral monitoring of the object into a 2D image of the emission (as a function of the radial velocity and of the phase in the periodic rotation movement) of the stellar atmosphere. As explained by Marsh,[14] this linear inverse problem is tomography-like: we have to recover a distributed parameter which has been integrated along lines to produce its effects in the recordings.

Riemann hypothesis

A final example related to the Riemann hypothesis was given by Wu and Sprung. The idea is that, in the semiclassical old quantum theory, the inverse of the potential inside the Hamiltonian is proportional to the half-derivative of the eigenvalue (energy) counting function n(x).

Permeability matching in shale-gas reservoirs

To accurately reproduce the permeability, a new method based on a combination of the Metropolis–Hastings and genetic algorithms has been developed. The new method learns from its own previously generated realizations of the shale and produces models that match the existing permeability data.[15]

Deconvolution

A classical example of inverse problems is image (or signal) deblurring, i.e., a deconvolution problem in the plane. In such cases, the forward problem is a convolution with a smoothing convolution kernel. Consider the integral equation (a Fredholm equation of the first kind):

    d(x, y) = \iint k(x - x', y - y')\, f(x', y')\, dx'\, dy'

where {{tmath|k}} is the kernel and {{tmath|f}} is the original image. The inverse problem is to reconstruct the original image {{tmath|f}} based on a noisy and blurred image {{tmath|d}}.[16]
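A self-contained 1-D sketch of why naive deconvolution fails (the signal, kernel, and noise level are all invented): dividing by the kernel spectrum amplifies even tiny noise, while a Tikhonov-style damped division stays stable.

```python
import numpy as np

n = 64
x = np.arange(n)
# Invented "true" signal: two smooth bumps.
f = np.exp(-0.5 * ((x - 20) / 3.0) ** 2) + 0.7 * np.exp(-0.5 * ((x - 45) / 4.0) ** 2)

# Periodic Gaussian blurring kernel, normalized to sum to 1.
k = np.exp(-0.5 * (np.minimum(x, n - x) / 2.0) ** 2)
k /= k.sum()
K = np.fft.fft(k)

d = np.real(np.fft.ifft(K * np.fft.fft(f)))   # forward problem: blurred signal
d_noisy = d + 1e-6 * np.cos(2.5 * x)          # tiny deterministic "noise"

# Naive deconvolution: divide by the kernel spectrum.
f_naive = np.real(np.fft.ifft(np.fft.fft(d_noisy) / K))

# Tikhonov-style damped division (a simple Wiener-like filter).
eps = 1e-6
f_reg = np.real(np.fft.ifft(np.fft.fft(d_noisy) * np.conj(K) / (np.abs(K) ** 2 + eps)))

print(np.max(np.abs(f_naive - f)))  # large: noise amplified where |K| is tiny
print(np.max(np.abs(f_reg - f)))    # small: the damping suppresses it
```

The same phenomenon, with the same remedy, occurs for 2-D images; only the transforms become two-dimensional.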

Mathematical and computational aspects

Inverse problems are typically ill-posed, as opposed to the well-posed problems more typical when modeling physical situations where the model parameters or material properties are known. Of the three conditions for a well-posed problem suggested by Jacques Hadamard (existence, uniqueness, and stability of the solution or solutions) the condition of stability is most often violated. In the sense of functional analysis, the inverse problem is represented by a mapping between metric spaces. While inverse problems are often formulated in infinite-dimensional spaces, limitations to a finite number of measurements, and the practical consideration of recovering only a finite number of unknown parameters, may lead to the problems being recast in discrete form. In this case the inverse problem will typically be ill-conditioned. In these cases, regularization may be used to introduce mild assumptions on the solution and prevent overfitting. Many instances of regularized inverse problems can be interpreted as special cases of Bayesian inference.[17]

Numerical solution of the optimization problem

When the model is described by a large number of parameters (the number of unknowns involved in some diffraction tomography applications can reach one billion), solving the linear system associated with the normal equations can be cumbersome. The numerical method to be used for solving the optimization problem depends in particular on the cost required for computing the solution Gm of the forward problem. Once the appropriate algorithm for solving the forward problem has been chosen, the appropriate algorithm for carrying out the minimization can be found in textbooks dealing with numerical methods for linear algebra and for the minimization of quadratic functions (see for instance Ciarlet[18] or Nocedal[19]). Also, the user may wish to add physical constraints to the models: in this case, they have to be familiar with constrained optimization methods, a subject in itself. In all cases, computing the gradient of the objective function is often a key element for the solution of the optimization problem.

Should the objective function be based on a norm other than the Euclidean norm, we have to leave the area of quadratic optimization. As a result, the optimization problem becomes more difficult. In particular, when the L1 norm is used for quantifying the data misfit, the objective function is no longer differentiable: its gradient does not make sense any longer. Dedicated methods (see for instance Lemaréchal[20]) from nondifferentiable optimization come in.

Once the optimal model is computed, we have to address the question: "can we trust this model?" The question can be formulated as follows: how large is the set of models that match the data "nearly as well" as this model? In the case of quadratic objective functions, this set is a hyper-ellipsoid, a subset of R^N where N is the number of unknowns, whose size depends on what we mean by "nearly as well", that is, on the noise level. The direction of the largest axis of this ellipsoid (eigenvector associated with the smallest eigenvalue of matrix G^T G) is the direction of poorly determined components: if we follow this direction, we can bring a strong perturbation to the model without changing significantly the value of the objective function and thus end up with a significantly different quasi-optimal model. We clearly see that the answer to the question "can we trust this model?" is governed by the noise level and by the eigenvalues of the Hessian of the objective function or, equivalently, in the case where no regularization has been integrated, by the singular values of matrix G. Of course, the use of regularization (or prior information) reduces the size of the set of almost optimal solutions and, in turn, increases the confidence we can put in the computed solution.
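The eigenvalue analysis above can be sketched numerically (the matrix is an invented, nearly rank-deficient example): the eigenvector of G^T G associated with the smallest eigenvalue is the direction along which the model can be strongly perturbed with almost no change in misfit.

```python
import numpy as np

# Invented observation matrix with one nearly unconstrained model direction.
G = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
m_opt = np.array([1.0, 1.0])
d = G @ m_opt                          # data fitted perfectly by m_opt

H = G.T @ G                            # Hessian of the misfit (up to a factor of 2)
eigvals, eigvecs = np.linalg.eigh(H)   # eigenvalues returned in ascending order
v_bad = eigvecs[:, 0]                  # smallest-eigenvalue direction
v_good = eigvecs[:, -1]                # largest-eigenvalue direction

# A large perturbation along v_bad barely changes the data misfit...
misfit_bad = np.linalg.norm(d - G @ (m_opt + 5.0 * v_bad))
# ...while the same-size perturbation along v_good changes it dramatically.
misfit_good = np.linalg.norm(d - G @ (m_opt + 5.0 * v_good))

print(misfit_bad, misfit_good)
```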

Stability, regularization and model discretization in infinite dimension

When looking for distributed parameters (in other words when the unknowns do not consist of a finite number of parameters but are functions), we have to discretize these functions. Doing so, we reduce the dimension of the problem to something finite. But now the question is: is there any link between the solution we compute and the one of the initial problem? First, another question: what do we mean by the solution of the initial problem? A finite number of data does not allow the determination of an infinity of unknowns! Thus the inverse problem has to be regularized to ensure uniqueness of the solution. Many times, reducing the unknowns to a finite-dimensional space will provide an adequate regularization: the computed solution will look like a discrete version of the solution we were looking for. For example, a naive discretization will often work for solving the deconvolution problem: it will work as long as we do not allow missing high frequencies to show up in the numerical solution. But many times, regularization has to be integrated explicitly in the objective function. In some cases, the classical Tikhonov regularization may be inadequate to make the inverse problem well-posed,[21] the goal being to ensure existence, uniqueness and stability of the computed solution. Yet, as in the finite-dimensional case, we have to question the confidence we can put in the computed solution. Again, basically, the information lies in the eigenvalues of the Hessian operator: well-determined components of the solution lie in subspaces of the space of models generated by eigenvectors associated with large eigenvalues of the Hessian. Should subspaces containing eigenvectors associated with small eigenvalues be explored for computing the solution, then the solution can hardly be trusted: some of its components will be poorly determined.
In some cases, the Hessian is not a bounded operator if we naively equip the space of models with the L2 norm (or Euclidean norm in the discrete version): in such a case the notion of eigenvalue does not make sense any longer. A mathematical analysis is required to make it a bounded operator: an illustration can be found in.[22] However, the analysis of the spectrum of the Hessian operator is usually a very heavy task. This has led several authors to investigate alternative approaches in the case where we are not interested in all the components of the unknown function but only in sub-unknowns that are the images of the unknown function by a linear operator. These approaches are referred to as the Backus and Gilbert method,[23] Lions's sentinels approach,[24] and the SOLA method:[25] these approaches turned out to be strongly related with one another as explained in Chavent.[26] Finally, the concept of limited resolution, often invoked by physicists, is nothing but a specific view of the fact that some poorly determined components may corrupt the solution. But, generally speaking, these poorly determined components of the model are not necessarily associated with high frequencies!

Non-linear inverse problems

An inherently more difficult family of inverse problems are collectively referred to as non-linear inverse problems.

Non-linear inverse problems have a more complex relationship between data and model, represented by the equation:

    d = G(m)

Here {{tmath|G}} is a non-linear operator and cannot be separated to represent a linear mapping of the model parameters that form {{tmath|m}} into the data. In such research, the first priority is to understand the structure of the problem and to give a theoretical answer to the three Hadamard questions (so that the problem is solved from the theoretical point of view). It is only later in a study that regularization and interpretation of the solution's (or solutions', depending upon conditions of uniqueness) dependence upon parameters and data/measurements (probabilistic ones or others) can be done. Hence the preceding sections do not really apply to these problems. Whereas linear inverse problems were completely solved from the theoretical point of view at the end of the nineteenth century, only one class of nonlinear inverse problems was so before 1970, that of inverse spectral and (one space dimension) inverse scattering problems, after the seminal work of the Russian mathematical school (Krein, Gelfand, Levitan, Marchenko). A large review of the results has been given by Chadan and Sabatier in their book "Inverse Problems of Quantum Scattering Theory" (two editions in English, one in Russian).

In this kind of problem, data are properties of the spectrum of a linear operator which describes the scattering. The spectrum is made of eigenvalues and eigenfunctions, forming together the "discrete spectrum", and generalizations, called the continuous spectrum. The very remarkable physical point is that scattering experiments give information only on the continuous spectrum, and that knowing its full spectrum is both necessary and sufficient for recovering the scattering operator. Hence we have invisible parameters, much more interesting than the null space which has a similar property in linear inverse problems. In addition, there are physical motions in which the spectrum of such an operator is conserved as a consequence of such motion. This phenomenon is governed by special nonlinear partial differential evolution equations, for example the Korteweg–de Vries equation. If the spectrum of the operator is reduced to one single eigenvalue, its corresponding motion is that of a single bump that propagates at constant velocity and without deformation, a solitary wave called a "soliton".

A perfect signal and its generalizations for the Korteweg–de Vries equation or other integrable nonlinear partial differential equations are of great interest, with many possible applications. This area has been studied as a branch of mathematical physics since the 1970s. Nonlinear inverse problems are also currently studied in many fields of applied science (acoustics, mechanics, quantum mechanics, electromagnetic scattering - in particular radar soundings, seismic soundings, and nearly all imaging modalities).

Applications

Inverse problem theory is used extensively in weather predictions, oceanography, hydrology, and petroleum engineering.[27][28]

Inverse problems are also found in the field of heat transfer, where a surface heat flux[29] is estimated from temperature data measured inside a rigid body. The linear inverse problem is also fundamental to spectral estimation and direction-of-arrival (DOA) estimation in signal processing.

Inverse, parameter and crack identification problems have been studied using optimization and soft computing tools.[30][31]

See also

  • Atmospheric sounding
  • Backus–Gilbert method
  • Computed tomography
    • Algebraic reconstruction technique
    • Filtered backprojection
    • Iterative reconstruction
  • Data assimilation
  • Engineering optimization
  • Grey box model
  • Mathematical geophysics
  • Optimal estimation
  • Seismic inversion
  • Tikhonov regularization
  • Compressed sensing

Academic journals

Four main academic journals cover inverse problems in general:

  • Inverse Problems
  • Journal of Inverse and Ill-posed Problems[32]
  • Inverse Problems in Science and Engineering[33]
  • Inverse Problems and Imaging[34]

Many journals on medical imaging, geophysics, non-destructive testing, etc. are dominated by inverse problems in those areas.

References

1. ^{{cite journal |last=Weyl |first=Hermann |url=http://gdz.sub.uni-goettingen.de/dms/load/img/?IDDOC=63048 |title=Über die asymptotische Verteilung der Eigenwerte |journal=Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen |pages=110–117 |year=1911 }}
2. ^ "Epilogue — Ambartsumian's paper", Viktor Ambartsumian.
3. ^{{cite journal|title=A life in astrophycis. Selected papers of Viktor A. Ambartsumian|first=Rouben V.|last=Ambartsumian|journal=Astrophysics|volume=41|issue=4|pages=328–330|doi=10.1007/BF02894658|year = 1998}}
4. ^{{cite journal |last1=Burridge |first1=Robert |title=The Gelfand-Levitan, the Marchenko, and the Gopinath-Sondhi integral equations of inverse scattering theory, regarded in the context of inverse impulse-response problems |journal=Wave Motion |date=1980 |volume=2 |issue=4 |pages=305–323 |doi=10.1016/0165-2125(80)90011-6 |url=http://www.sciencedirect.com/science/article/pii/0165212580900116}}
5. ^{{cite book|chapter-url=http://www.ipgp.fr/~tarantola/Files/Professional/Books/InverseProblemTheory.pdf|title=Inverse Problem Theory and Methods for Model Parameter Estimation|pages=i-xii|first=Albert|last=Tarantola|publisher=SIAM|via=epubs.siam.org|doi=10.1137/1.9780898717921.fm|chapter=Front Matter|year=2005|isbn=978-0-89871-572-9}}
6. ^{{cite journal | last=Abeida | first=Habti | last2=Zhang | first2=Qilin | last3=Li | first3=Jian | last4=Merabtine | first4=Nadjim | title=Iterative Sparse Asymptotic Minimum Variance Based Approaches for Array Processing | journal=IEEE Transactions on Signal Processing | volume=61 | issue=4 | year=2013 | issn=1053-587X | doi=10.1109/tsp.2012.2231676 | pages=933–944 | url=https://qilin-zhang.github.io/_pages/pdfs/SAMVpaper.pdf | arxiv=1802.03070 }}
7. ^{{cite journal |last1=Lambaré |first1=Gilles |last2=Virieux |first2=Jean |last3=Madariaga |first3=Raul |last4=Jin |first4=Side |title=Iterative asymptotic inversion in the acoustic approximation |journal=Geophysics |date=1992 |volume=57 |issue=9 |pages=1138–1154}}
8. ^{{cite journal |last1=Beylkin |first1=Gregory |title=The inversion problem and applications of the generalized Radon transform |journal=Communications on Pure and Applied Mathematics |date=1984 |volume=XXXVII |pages=579–599}}
9. ^{{cite journal |last1=Beylkin |first1=Gregory |title=Imaging of discontinuities in the inverse scattering problem by inversion of a causal generalized Radon transform |journal=J. Math. Phys. |date=1985 |volume=26 |pages=99–108}}
10. ^{{cite journal |last1=Bleistein |first1=Norman |title=On the imaging of reflectors in the Earth |journal=Geophysics |date=1987 |volume=52 |pages=931–942}}
11. ^{{cite book |last1=Lailly |first1=Patrick |title=The seismic inverse problem as a sequence of before stack migrations |date=1983 |publisher=SIAM |location=Philadelphia |isbn=0-89871-190-8 |pages=206–220}}
12. ^{{cite journal |last1=Tarantola |first1=Albert |title=Inversion of Seismic Reflection Data in the Acoustic Approximation |journal=Geophysics |date=1984 |volume=49 |issue=8 |pages=1259–1266}}
13. ^{{cite journal |last1=Nemeth |first1=Tamas |last2=Wu |first2=Chengjun |last3=Schuster |first3=Gerard |title=Least-squares migration of incomplete reflection data |journal=Geophysics |date=1999 |volume=64 |issue=1 |pages=208–221}}
14. ^{{cite journal |last1=Marsh |first1=Tom |title=Doppler tomography |journal=Astrophysics and Space Science |date=2005 |volume=296 |pages=403–415 |doi=10.1007/s10509-005-4859-3 |url=https://doi.org/10.1007/s10509-005-4859-3}}
15. ^{{cite journal|last1=Tahmasebi|first1=Pejman|last2=Javadpour|first2=Farzam|last3=Sahimi|first3=Muhammad|title=Stochastic shale permeability matching: Three-dimensional characterization and modeling|journal=International Journal of Coal Geology|date=August 2016|volume=165|pages=231–242|doi=10.1016/j.coal.2016.08.024|url=https://www.researchgate.net/publication/307626119}}
16. ^{{cite book |last1=Kaipio |first1=Jari |last2=Somersalo |first2=Erkki |title=Statistical and Computational Inverse Problems |date=2010 |publisher=Springer |location=New York, NY}}
17. ^{{cite book|chapter-url=http://www.ipgp.fr/~tarantola/Files/Professional/Books/InverseProblemTheory.pdf|title=Inverse Problem Theory and Methods for Model Parameter Estimation|pages=i-xii|first=Albert|last=Tarantola|publisher=SIAM|via=epubs.siam.org|doi=10.1137/1.9780898717921.fm|chapter=Front Matter|year=2005|isbn=978-0-89871-572-9}}
18. ^{{cite book |last1=Ciarlet |first1=Philippe |title=Introduction à l'analyse numérique matricielle et à l'optimisation |date=1994 |publisher=Masson |location=Paris |isbn=9782225688935}}
19. ^{{cite book |last1=Nocedal |first1=Jorge |title=Numerical optimization |date=2006 |publisher=Springer}}
20. ^{{cite book |last1=Lemaréchal |first1=Claude |title=Optimization, Handbooks in Operations Research and Management Science |date=1989 |publisher=Elsevier |pages=529–572}}
21. ^{{cite journal |last1=Delprat-Jannaud |first1=Florence |last2=Lailly |first2=Patrick |title=Ill posed and well posed formulations of the reflection tomography problem |journal=Journal of Geophysical Research |date=1993 |volume=98 |pages=6589–6605}}
22. ^{{cite journal |last1=Delprat-Jannaud |first1=Florence |last2=Lailly |first2=Patrick |title=What information on the Earth model do reflection traveltimes provide |journal=Journal of Geophysical Research |date=1992 |volume=98 |pages=827–844}}
23. ^{{cite journal |last1=Backus |first1=George |last2=Gilbert |first2=Freeman |title=The Resolving Power of Gross Earth Data |journal=Geophysical Journal of the Royal Astronomical Society |date=1968 |volume=16 |issue=10 |pages=169–205}}
24. ^{{cite journal |last1=Lions |first1=Jacques Louis |title=Sur les sentinelles des systèmes distribués |journal=CRAS |date=1988 |volume=307}}
25. ^{{cite journal |last1=Pijpers |first1=Frank |last2=Thompson |first2=Michael |title=The SOLA method for helioseismic inversion |journal=Astronomy and Astrophysics |date=1993 |volume=281 |issue=12 |pages=231–240}}
26. ^{{cite book |last1=Chavent |first1=Guy |title=Equations aux dérivées partielles et applications |date=1998 |publisher=Gauthier Villars |location=Paris |pages=345–356}}
27. ^{{cite book|author=Carl Wunsch|title=The Ocean Circulation Inverse Problem|url=https://books.google.com/books?id=ugHsLF1RNacC&pg=PR9|date=13 June 1996|publisher=Cambridge University Press|isbn=978-0-521-48090-1|pages=9–}}
28. ^{{cite journal|last1=Tahmasebi|first1=Pejman|last2=Javadpour|first2=Farzam|last3=Sahimi|first3=Muhammad|title=Stochastic shale permeability matching: Three-dimensional characterization and modeling|journal=International Journal of Coal Geology|date=August 2016|volume=165|pages=231–242|doi=10.1016/j.coal.2016.08.024}}
29. ^{{cite book|author=Patric Figueiredo|title=Development Of An Iterative Method For Solving Multidimensional Inverse Heat Conduction Problems|url=https://www.academia.edu/9823088|date=December 2014|publisher=Lehrstuhl für Wärme- und Stoffübertragung RWTH Aachen}}
30. ^{{cite book|author=G.E. Stavroulakis|title=Inverse and Crack Identification Problems in Engineering Mechanics|url=https://www.springer.com/gp/book/9780792366904|publisher=Springer|isbn=978-0-7923-6690-4|year=2001}}
31. ^{{cite book|author=Z. Mróz, G.E. Stavroulakis|title=Parameter Identification of Materials and Structures|url=https://www.springer.com/gp/book/9783211301517|publisher=Springer|isbn=978-3-211-30151-7|date=2005-11-24}}
32. ^{{cite web|url=http://www.reference-global.com/loi/jiip|title=Journal of Inverse and Ill-posed Problems|publisher=}}
33. ^{{cite web|url=http://www.tandf.co.uk/journals/titles/17415977.asp|title=Inverse Problems in Science and Engineering|publisher=}}
34. ^{{cite web|url=http://aimsciences.org/journals/ipi/ipi_online.jsp |title=IPI |dead-url=yes |archive-url=https://web.archive.org/web/20061011090005/http://aimsciences.org/journals/ipi/ipi_online.jsp |archive-date=11 October 2006 |df=}}

References

  • Chadan, Khosrow & Sabatier, Pierre Célestin (1977). Inverse Problems in Quantum Scattering Theory. Springer-Verlag. {{ISBN|0-387-08092-9}}
  • Aster, Richard; Borchers, Brian; and Thurber, Clifford (2018). Parameter Estimation and Inverse Problems, Third Edition, Elsevier. {{ISBN|9780128134238}}
  • {{cite book|last1=Press |first1=WH |last2=Teukolsky |first2=SA |last3=Vetterling |first3=WT |last4=Flannery |first4=BP |year=2007 |title=Numerical Recipes: The Art of Scientific Computing |edition=3rd |publisher=Cambridge University Press |location=New York |isbn=978-0-521-88068-8 |chapter=Section 19.4. Inverse Problems and the Use of A Priori Information |chapter-url=http://apps.nrbook.com/empanel/index.html#pg=1001}}

Further reading

  • {{cite book|author=C. W. Groetsch|title=Inverse Problems: Activities for Undergraduates|year=1999|publisher=Cambridge University Press|isbn=978-0-88385-716-8}}

External links

  • Inverse Problems International Association
  • Eurasian Association on Inverse Problems
  • Finnish Inverse Problems Society
  • Inverse Problems Network
  • Albert Tarantola's website, including a free PDF version of his Inverse Problem Theory book, and some online articles on Inverse Problems
  • Inverse Problems page at the University of Alabama
  • Inverse Problems and Geostatistics Project, Niels Bohr Institute, University of Copenhagen
  • Andy Ganse's Geophysical Inverse Theory Resources Page
  • Finnish Centre of Excellence in Inverse Problems Research
{{Authority control}}{{DEFAULTSORT:Inverse Problem}}
