Variable kernel density estimation

  1. Rationale
  2. Balloon estimators
  3. Use for statistical classification
  4. External links
  5. References

In statistics, adaptive or "variable-bandwidth" kernel density estimation is a form of kernel density estimation in which the size of the kernels used in the estimate is varied depending on either the location of the samples or the location of the test point. It is a particularly effective technique when the sample space is multi-dimensional.[1]

Rationale

Given a set of samples, $\{\vec x_i\}$, we wish to estimate the density, $P$, at a test point, $\vec x$:

$$P(\vec x) \approx \frac{W}{n h^D}, \qquad W = \sum_{i=1}^{n} w_i, \qquad w_i = K\!\left(\frac{\vec x - \vec x_i}{h}\right)$$

where n is the number of samples, K is the "kernel", h is its width and D is the number of dimensions in $\vec x$.

The kernel can be thought of as a simple, linear filter.
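
To make the notation concrete, here is a minimal fixed-bandwidth estimator in Python (NumPy only). The Gaussian kernel choice, the function names, and the toy data are assumptions for illustration, not prescribed by the method:

```python
import numpy as np

def gaussian_kernel(u):
    """Normalized D-dimensional Gaussian kernel, evaluated at the rows of u."""
    D = u.shape[1]
    return np.exp(-0.5 * np.sum(u**2, axis=1)) / (2 * np.pi) ** (D / 2)

def kde_fixed(x, samples, h):
    """Fixed-bandwidth estimate of the density P at test point x.
    x: (D,) test point; samples: (n, D) array; h: scalar kernel width."""
    n, D = samples.shape
    w = gaussian_kernel((x - samples) / h)  # w_i = K((x - x_i) / h)
    return w.sum() / (n * h ** D)           # P(x) ~= W / (n h^D)

# Toy check: 1000 samples from a 2-D standard normal; the true density
# at the origin is 1 / (2 pi) ~= 0.159.
rng = np.random.default_rng(0)
samples = rng.standard_normal((1000, 2))
print(kde_fixed(np.zeros(2), samples, h=0.5))
```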

Using a fixed filter width may mean that in regions of low density, all samples will fall in the tails of the filter with very low weighting, while regions of high density will find an excessive number of samples in the central region with weighting close to unity. To fix this problem, we vary the width of the kernel in different regions of the sample space.

There are two methods of doing this: balloon and pointwise estimation. In a balloon estimator, the kernel width is varied depending on the location of the test point. In a pointwise estimator, the kernel width is varied depending on the location of the sample.[1]
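
For illustration, here is a minimal sketch of the pointwise (sample-point) variant in Python. Tying each sample's bandwidth $h_i$ to the distance to its k-th nearest neighbour is one common choice; the value of k is an arbitrary assumption for this example:

```python
import numpy as np

def kde_pointwise(x, samples, k=10):
    """Sample-point (pointwise) estimator: each sample carries its own
    bandwidth h_i, here set to the distance to its k-th nearest neighbour."""
    n, D = samples.shape
    # Pairwise distances between samples (O(n^2); fine for a sketch).
    dists = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    h = np.sort(dists, axis=1)[:, k]       # h_i = distance to k-th neighbour
    u = np.linalg.norm(x - samples, axis=1) / h
    w = np.exp(-0.5 * u**2) / (2 * np.pi) ** (D / 2)
    return np.sum(w / h**D) / n            # each term uses its own h_i

rng = np.random.default_rng(1)
samples = rng.standard_normal((500, 2))
print(kde_pointwise(np.zeros(2), samples))  # near 1 / (2 pi) ~= 0.159
```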

For multivariate estimators, the parameter h can be generalized to vary not just the size but also the shape of the kernel. This more complicated approach will not be covered here.

Balloon estimators

A common method of varying the kernel width is to make it inversely proportional to the density at the test point:

$$h = \frac{k}{\left[ n P(\vec x) \right]^{1/D}}$$

where k is a constant.

If we back-substitute the estimated PDF, and assuming a Gaussian kernel function, we can show that W is a constant:[3]

$$W = (2\pi)^{D/2} k^D$$

A similar derivation holds for any kernel whose normalising function is of the order $h^D$, although with a different constant factor in place of the $(2\pi)^{D/2}$ term. This produces a generalization of the k-nearest neighbour algorithm. That is, a uniform kernel function will return the KNN technique.[2]
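
A minimal balloon-estimator sketch in Python under the bandwidth rule above; the fixed-bandwidth pilot estimate used to supply $P(\vec x)$, and the constants k and h_pilot, are assumptions for the example:

```python
import numpy as np

def kde_gauss(x, samples, h):
    """Fixed-bandwidth Gaussian density estimate (normalization explicit)."""
    n, D = samples.shape
    u2 = np.sum((x - samples) ** 2, axis=1) / h ** 2
    return np.sum(np.exp(-0.5 * u2)) / (n * h ** D * (2 * np.pi) ** (D / 2))

def kde_balloon(x, samples, k=3.0, h_pilot=0.5):
    """Balloon estimator: the test-point bandwidth follows
    h = k / [n P(x)]^(1/D), with P(x) supplied by a pilot estimate."""
    n, D = samples.shape
    p_pilot = kde_gauss(x, samples, h_pilot)
    h = k / (n * p_pilot) ** (1.0 / D)     # width shrinks where density is high
    return kde_gauss(x, samples, h)

rng = np.random.default_rng(2)
samples = rng.standard_normal((1000, 2))
print(kde_balloon(np.zeros(2), samples))   # near 1 / (2 pi) ~= 0.159
```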

There are two components to the error: a variance term and a bias term. The variance term is given as:[1]

$$e_1 = \frac{P \int K^2}{n h^D}.$$

The bias term is found by evaluating the approximated function in the limit as the kernel width becomes much larger than the sample spacing; the bias term falls out of a Taylor expansion of the real function:

$$e_2 = \frac{h^2 \sigma_K^2}{2} \nabla^2 P$$

where $\sigma_K^2$ is the second moment of the kernel.

An optimal kernel width that minimizes the error of each estimate can thus be derived.
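
As a sketch of that derivation under the leading-order terms above (standard bandwidth-selection reasoning, not reproduced from the source), minimize the mean squared error, variance plus squared bias, over h:

$$\mathrm{MSE}(h) = e_1 + e_2^2 = \frac{P \int K^2}{n h^D} + \frac{\sigma_K^4 (\nabla^2 P)^2}{4}\, h^4$$

$$\frac{\mathrm{d}\,\mathrm{MSE}}{\mathrm{d} h} = -\frac{D\, P \int K^2}{n h^{D+1}} + \sigma_K^4 (\nabla^2 P)^2\, h^3 = 0$$

$$h_{\mathrm{opt}} = \left[ \frac{D\, P \int K^2}{\sigma_K^4 (\nabla^2 P)^2} \right]^{1/(D+4)} n^{-1/(D+4)}$$

The optimal width thus shrinks as $n^{-1/(D+4)}$ and depends on the local density and curvature, which is what motivates varying it across the sample space.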

Use for statistical classification

The method is particularly effective when applied to statistical classification. There are two ways we can proceed: the first is to compute the PDFs of each class separately, using different bandwidth parameters, and then compare them as in Taylor.[3]

Alternatively, we can divide up the sum based on the class of each sample:

$$P(j, \vec x) \approx \frac{1}{n h^D} \sum_{i:\, c_i = j} w_i$$

where $c_i$ is the class of the i-th sample.

The class of the test point may be estimated through maximum likelihood.
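
A minimal Python sketch of this class-wise splitting, with a fixed bandwidth for simplicity (the helper names, bandwidth, and toy data are assumptions). Maximum likelihood here reduces to picking the class with the largest kernel sum, since the common factors cancel:

```python
import numpy as np

def gauss_weights(x, samples, h):
    """Unnormalized Gaussian filter weights w_i = exp(-|x - x_i|^2 / 2h^2)."""
    return np.exp(-0.5 * np.sum((x - samples) ** 2, axis=1) / h ** 2)

def classify(x, samples, labels, h=0.5):
    """Estimate the class of test point x by dividing the kernel sum by
    class and taking the class with the largest joint density."""
    w = gauss_weights(x, samples, h)
    classes = np.unique(labels)
    # P(j, x) ~ sum of weights over samples of class j (common factor dropped)
    scores = [w[labels == j].sum() for j in classes]
    return classes[int(np.argmax(scores))]

# Two Gaussian blobs as a toy two-class problem.
rng = np.random.default_rng(3)
a = rng.standard_normal((200, 2)) + [2, 0]
b = rng.standard_normal((200, 2)) - [2, 0]
samples = np.vstack([a, b])
labels = np.array([1] * 200 + [2] * 200)
print(classify(np.array([1.5, 0.0]), samples, labels))  # expect class 1
```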

Many kernels, Gaussian for instance, are smooth. Consequently, estimates of joint or conditional probabilities are both continuous and differentiable. This makes it easy to search for a border between two classes by zeroing the difference between the conditional probabilities:

$$R(\vec x) = P(2\,|\,\vec x) - P(1\,|\,\vec x)$$

For example, we can use a one-dimensional root-finding algorithm to zero R along a line between two samples that straddle the class border. The border can thus be sampled as many times as necessary.
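
A sketch of that root-finding step in Python, using SciPy's brentq on the parametrized line between two straddling samples; the function R is assumed to come from a kernel estimate as above, and the toy R here is an assumption with a known border:

```python
import numpy as np
from scipy.optimize import brentq

def border_sample(R, x1, x2):
    """Find a point on the class border along the line from x1 to x2,
    where R(x1) and R(x2) have opposite signs (the samples straddle
    the border).  R maps a point to the difference in conditional
    probabilities between the two classes."""
    f = lambda t: R(x1 + t * (x2 - x1))   # R restricted to the line
    t0 = brentq(f, 0.0, 1.0)              # one-dimensional root find
    return x1 + t0 * (x2 - x1)

# Toy R with a known border at x[0] = 0:
R = lambda x: np.tanh(x[0])
b = border_sample(R, np.array([-1.0, 0.3]), np.array([2.0, -0.5]))
print(b)   # first coordinate should be ~0
```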

The border samples, along with estimates of the gradients of R, determine the class of a test point through a dot-product:

$$p = (\vec x - \vec b_j) \cdot \nabla_{\vec b_j} R, \qquad c = \frac{3 + \operatorname{sgn} p}{2}$$

where $\{\vec b_j\}$ sample the class border ($\vec b_j$ being the border sample nearest the test point) and c is the estimated class.

The value of R, which determines the conditional probabilities, may be extrapolated to the test point:[2]

$$R(\vec x) \approx \tanh p$$
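
Putting the last two equations together, here is a sketch of the border-based classifier in Python (under the reconstruction above; the toy border points and precomputed gradients are assumptions):

```python
import numpy as np

def classify_border(x, borders, grads):
    """Classify test point x from precomputed border samples {b_j} and
    gradients of R at those samples (two-class case, classes 1 and 2)."""
    j = np.argmin(np.linalg.norm(borders - x, axis=1))  # nearest border sample
    p = np.dot(x - borders[j], grads[j])                # dot-product with grad R
    c = int((3 + np.sign(p)) / 2)                       # 1 if p < 0, 2 if p > 0
    R = np.tanh(p)                                      # extrapolated R at x
    return c, R

# Toy border along x[0] = 0, with R increasing in the x[0] direction:
borders = np.array([[0.0, -1.0], [0.0, 0.0], [0.0, 1.0]])
grads   = np.array([[1.0,  0.0], [1.0, 0.0], [1.0, 0.0]])
print(classify_border(np.array([0.7, 0.2]), borders, grads))  # (2, tanh(0.7))
```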

Two-class classification is easy to generalize to multiple classes.

External links

  • akde1d.m - Matlab m-file for one-dimensional adaptive kernel density estimation.
  • libAGF - A C++ library for multivariate adaptive kernel density estimation.
  • akde.m - Matlab function for multivariate (high-dimensional) variable kernel density estimation.

References

1. Terrell, G. R.; Scott, D. W. (1992). "Variable kernel density estimation". Annals of Statistics. 20: 1236–1265. doi:10.1214/aos/1176348768.
2. Mills, Peter (2011). "Efficient statistical classification of satellite measurements". International Journal of Remote Sensing. 32 (21). doi:10.1080/01431161.2010.507795. arXiv:1202.2194.
3. Taylor, Charles (1997). "Classification and kernel density estimation". Vistas in Astronomy. 41 (3): 411–417. doi:10.1016/s0083-6656(97)00046-9. Bibcode:1997VA.....41..411T.

