Computational photography

  1. Computational illumination

  2. Computational optics

  3. Computational imaging

     Common techniques  

  4. Computational processing

  5. Computational sensors

  6. Early work in computer vision

  7. Art history

  8. See also

  9. References

  10. External links

For broader coverage of this topic, see Computational imaging.

Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas,[6] high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three-dimensional scene information, which can then be used to produce 3D images, enhanced depth of field, and selective defocusing (or "post focus"). Enhanced depth of field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.

The definition of computational photography has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics. These areas are given below, organized according to a taxonomy proposed by Shree K. Nayar.[citation needed] Within each area is a list of techniques, and for each technique one or two representative papers or books are cited. Deliberately omitted from the taxonomy are image processing (see also digital image processing) techniques applied to traditionally captured images in order to produce better images. Examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), color management, image completion (a.k.a. inpainting or hole filling), image compression, digital watermarking, and artistic image effects. Also omitted are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs, or other high-dimensional image-based representations. Epsilon photography is a sub-field of computational photography.

Computational illumination

This is the control of photographic illumination in a structured fashion, followed by processing of the captured images to create new images. Applications include image-based relighting, image enhancement, image deblurring, geometry/material recovery, and so forth; a relighting sketch is given below.
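Because light transport is linear, an image of a scene under any mixture of its light sources can be synthesized as a weighted sum of images captured with each source turned on individually. The following minimal Python/NumPy sketch illustrates this image-based relighting idea; the function name and weights are illustrative, not taken from a specific published system.

```python
import numpy as np

def relight(basis_images, weights):
    """Image-based relighting by linear combination.

    `basis_images`: list of float images, each captured with exactly
    one light source turned on (a "lightspace" basis).
    `weights`: desired intensity for each source under the novel
    illumination. Light transport is linear, so the weighted sum is
    the image the camera would have captured under the combined lights.
    """
    out = np.zeros_like(basis_images[0], dtype=np.float64)
    for img, w in zip(basis_images, weights):
        out += w * img
    return out
```

With a separate weight per color channel, the same sum can also change the color of each virtual light.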

High-dynamic-range imaging uses differently exposed pictures of the same scene to extend dynamic range.[7] Other examples include processing and merging differently illuminated images of the same subject matter ("lightspace").
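As an illustration of exposure merging, below is a minimal Python/NumPy sketch that assumes a linear camera response and known exposure times, and combines the images into a radiance estimate using a hat-shaped weighting that discounts under- and over-exposed pixels. The function name and weighting scheme are assumptions of this example, not the method of the cited work.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed images of the same scene into a
    high-dynamic-range radiance map.

    Assumes a linear camera response; `images` is a list of float
    arrays scaled to [0, 1], `exposure_times` the matching exposure
    times in seconds.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tones, discount pixels near 0 or 1
        # (under- or over-exposed), which carry little information.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        # Each image votes for radiance = pixel value / exposure time.
        numerator += w * img / t
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)
```

A tone-mapping step would then compress the resulting radiance map back into a displayable range.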

Computational optics

This is the capture of optically coded images, followed by computational decoding to produce new images.

Coded aperture imaging was mainly applied in astronomy and X-ray imaging to boost image quality. Instead of a single pinhole, a pinhole pattern is applied in imaging, and deconvolution is performed to recover the image.[8] In coded exposure imaging, the on/off state of the shutter is coded to modify the kernel of motion blur.[9] In this way motion deblurring becomes a well-conditioned problem, as the sketch below illustrates. Similarly, in a lens-based coded aperture, the aperture can be modified by inserting a broadband mask.[10] Thus, out-of-focus deblurring becomes a well-conditioned problem. The coded aperture can also improve the quality of light field acquisition using Hadamard transform optics.
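The conditioning claim can be checked numerically: the frequency response of a flat (box) shutter has deep nulls, so dividing by it during deconvolution amplifies noise enormously, whereas a well-chosen binary flutter code keeps the response bounded away from zero. The Python/NumPy sketch below compares the minimum frequency-response magnitude of the two kernels; the random code and slice count are illustrative, not the published code of Raskar et al.

```python
import numpy as np

n = 52  # exposure split into 52 time slices (count is illustrative)

# Conventional shutter: open the whole time -> box blur kernel.
box = np.ones(n) / n

# Fluttered shutter: the shutter opens and closes in a binary code.
# A random code is drawn here for illustration; the published work
# searches for codes whose frequency response stays far from zero.
rng = np.random.default_rng(0)
code = rng.integers(0, 2, size=n).astype(float)
code /= code.sum()  # normalize so both kernels compare on equal terms

# Deconvolution divides by the kernel's frequency response H, so noise
# is amplified by roughly 1/min|H|; a larger minimum magnitude means a
# better-conditioned deblurring problem.
for name, kernel in [("box", box), ("coded", code)]:
    H = np.fft.fft(kernel, 512)
    print(f"{name:5s} min |H| = {np.abs(H).min():.4f}")
```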

Coded aperture patterns can also be designed using color filters, in order to apply different codes at different wavelengths.[11][12] This makes it possible to increase the amount of light that reaches the camera sensor, compared to binary masks.

Computational imaging

Computational imaging is a set of imaging techniques that combine data acquisition and data processing to create the image of an object through indirect means, yielding enhanced resolution or additional information such as optical phase or a 3D reconstruction. The information is often recorded without using a conventional optical microscope configuration or with limited datasets.

Computational imaging makes it possible to go beyond the physical limitations of optical systems, such as numerical aperture,[13] or even to eliminate the need for optical elements altogether.[14]

For parts of the optical spectrum where imaging elements such as objectives are difficult to manufacture or image sensors cannot be miniaturized, computational imaging provides useful alternatives, in fields such as X-ray[15] and THz radiation.

Common techniques

Among common computational imaging techniques are lensless imaging, computational speckle imaging,[16] ptychography, and Fourier ptychography.

Computational imaging techniques often draw on compressive sensing or phase retrieval, in which the angular spectrum of the object is reconstructed. Other techniques are related to the field of computational imaging, such as digital holography, computer vision, and the solution of inverse problems such as tomography.
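As a concrete illustration of phase retrieval, below is a minimal error-reduction iteration in the style of Gerchberg and Saxton, written in Python/NumPy; it assumes the measurement is the Fourier-plane magnitude and that the object is known to lie within a support mask. All names and the iteration count are illustrative.

```python
import numpy as np

def phase_retrieval(fourier_magnitude, support, n_iter=200, seed=0):
    """Error-reduction phase retrieval (Gerchberg-Saxton style).

    `fourier_magnitude`: measured |F(object)| as a 2D array.
    `support`: boolean mask marking where the object may be nonzero.
    Returns a complex-valued estimate of the object.
    """
    rng = np.random.default_rng(seed)
    # Start from the measured magnitude with random phases.
    phase = np.exp(2j * np.pi * rng.random(fourier_magnitude.shape))
    field = fourier_magnitude * phase
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)
        obj = obj * support  # enforce the object-domain constraint
        field = np.fft.fft2(obj)
        # Enforce the measured Fourier magnitude, keep the current phase.
        field = fourier_magnitude * np.exp(1j * np.angle(field))
    return np.fft.ifft2(field) * support
```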

Computational processing

This is the processing of non-optically coded images to produce new images.

Computational sensors

These are detectors that combine sensing and processing, typically in hardware; an example is the oversampled binary image sensor.
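As a rough sketch of how such a sensor trades spatial oversampling for bit depth, the Python/NumPy simulation below covers each output pixel with a grid of one-bit subpixels that fire on photon arrival, then pools and inverts the binary responses to recover intensity. The Poisson model, block size, and names are assumptions of this example, not a published design.

```python
import numpy as np

def simulate_binary_sensor(radiance, oversample=8, seed=0):
    """Simulate an oversampled binary image sensor.

    `radiance`: 2D array of expected photon counts per output pixel.
    Each output pixel is covered by an `oversample` x `oversample`
    block of one-bit pixels that each report 1 if they receive at
    least one photon during the exposure.
    """
    rng = np.random.default_rng(seed)
    h, w = radiance.shape
    # Spread the expected photons uniformly over the subpixel grid.
    per_subpixel = np.kron(radiance, np.ones((oversample, oversample)))
    per_subpixel /= oversample ** 2
    photons = rng.poisson(per_subpixel)
    bits = (photons >= 1).astype(np.float64)  # one-bit measurements
    # "Processing on the sensor": pool the bits back into pixels.
    pooled = bits.reshape(h, oversample, w, oversample).mean(axis=(1, 3))
    # Invert the Bernoulli response p = 1 - exp(-lambda) to estimate
    # the photon rate, then rescale to per-output-pixel counts.
    lam = -np.log(np.clip(1.0 - pooled, 1e-6, 1.0)) * oversample ** 2
    return lam
```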

Early work in computer vision

Although computational photography is a currently popular buzzword in computer graphics, many of its techniques first appeared in the computer vision literature, either under other names or within papers aimed at 3D shape analysis.

Art history

Computational photography, as an art form, has been practiced by capturing differently exposed pictures of the same subject matter and combining them. This was the inspiration for the development of the wearable computer in the 1970s and early 1980s. Computational photography was inspired by the work of Charles Wyckoff, and thus computational photography datasets (e.g. differently exposed pictures of the same subject matter that are taken in order to make a single composite image) are sometimes referred to as Wyckoff sets in his honor.

Early work in this area (joint estimation of image projection and exposure value) was undertaken by Mann and Candocia.

Charles Wyckoff devoted much of his life to creating special kinds of 3-layer photographic films that captured different exposures of the same subject matter. A picture of a nuclear explosion, taken on Wyckoff's film, appeared on the cover of Life magazine and showed the dynamic range from the dark outer areas to the inner core.

See also

  • Adaptive optics
  • Multispectral imaging
  • Simultaneous localization and mapping
  • Super-resolution microscopy
  • Time-of-flight camera

References

1. Steve Mann. "Compositing Multiple Pictures of the Same Scene", Proceedings of the 46th Annual Imaging Science & Technology Conference, May 9–14, Cambridge, Massachusetts, 1993.
2. S. Mann, C. Manders, and J. Fung, "The Lightspace Change Constraint Equation (LCCE) with practical application to estimation of the projectivity+gain transformation between multiple pictures of the same subject matter", IEEE International Conference on Acoustics, Speech, and Signal Processing, 6–10 April 2003, pp. III-481–4, vol. 3.
3. "Joint parameter estimation in both domain and range of functions in same orbit of the projective-Wyckoff group", IEEE International Conference on Image Processing, Vol. 3, pp. 193–196, 16–19 September 1996.
4. Frank M. Candocia: "Jointly registering images in domain and range by piecewise linear comparametric analysis". IEEE Transactions on Image Processing 12(4): 409–419 (2003).
5. Frank M. Candocia: "Simultaneous homographic and comparametric alignment of multiple exposure-adjusted pictures of the same scene". IEEE Transactions on Image Processing 12(12): 1485–1494 (2003).
6. Steve Mann and R. W. Picard. "Virtual bellows: constructing high-quality images from video", Proceedings of the IEEE First International Conference on Image Processing, Austin, Texas, November 13–16, 1994.
7. S. Mann and R. W. Picard. "On Being 'Undigital' with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures", IS&T's (Society for Imaging Science and Technology's) 48th Annual Conference, Cambridge, Massachusetts, May 1995, pages 422–428.
8. Martinello, Manuel. "Coded Aperture Imaging" (PhD thesis). http://www.manemarty.com/Publications_files/Martinello_PhDThesis_small.pdf
9. Raskar, Ramesh; Agrawal, Amit; Tumblin, Jack (2006). "Coded Exposure Photography: Motion Deblurring using Fluttered Shutter". http://web.media.mit.edu/~raskar/deblur/ Retrieved November 29, 2010.
10. Veeraraghavan, Ashok; Raskar, Ramesh; Agrawal, Amit; Mohan, Ankit; Tumblin, Jack (2007). "Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing". http://web.media.mit.edu/~raskar/Mask/ Retrieved November 29, 2010.
11. Martinello, Manuel; Wajs, Andrew; Quan, Shuxue; Lee, Hank; Lim, Chien; Woo, Taekun; Lee, Wonho; Kim, Sang-Sik; Lee, David (2015). "Dual Aperture Photography: Image and Depth from a Mobile Camera". International Conference on Computational Photography. http://www.manemarty.com/Publications_files/Martinello_ICCP2015_small.pdf
12. Chakrabarti, A.; Zickler, T. (2012). "Depth and deblurring from a spectrally-varying depth-of-field". IEEE European Conference on Computer Vision 7576: 648–666.
13. Ou et al., "High numerical aperture Fourier ptychography: principle, implementation and characterization", Optics Express 23, 3 (2015). https://doi.org/10.1364/OE.23.003472
14. Boominathan et al., "Lensless Imaging: A Computational Renaissance" (2016). https://www.ece.rice.edu/~vb10/documents/2016/Lensless_Imaging_Computaitonal_Renaissance.pdf
15. Miyakawa et al., "Coded aperture detector: an image sensor with sub 20-nm pixel resolution", Optics Express 22, 16 (2014). https://dx.doi.org/10.1364/OE.22.019803
16. Katz et al., "Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations", Nature Photonics 8, 784–790 (2014). https://dx.doi.org/10.1038/nphoton.2014.189

External links

  • Nayar, Shree K. (2007). "Computational Cameras", Conference on Machine Vision Applications.
  • Computational Photography (Raskar, R., Tumblin, J.), A K Peters. In press. https://www.amazon.com/dp/1568813139/
  • Special issue on Computational Photography, IEEE Computer, August 2006.
  • Camera Culture and Computational Journalism: Capturing and Sharing Visual Experiences, IEEE CG&A Special Issue, Feb 2011.
  • Rick Szeliski (2010), Computer Vision: Algorithms and Applications, Springer.
  • Computational Photography: Methods and Applications (Ed. Rastislav Lukac), CRC Press, 2010.
  • Intelligent Image Processing (John Wiley and Sons book information).
  • Comparametric Equations.
  • GJB-1: Increasing the dynamic range of a digital camera by using the Wyckoff principle
  • Examples of wearable computational photography as an art form
  • Siggraph Course in Computational Photography: https://web.archive.org/web/20060827204747/http://www.merl.com/people/raskar/photo/

Categories: Digital photography | Computational fields of study | Computer vision
