Class JensenShannonDivergenceDistance

  • All Implemented Interfaces:
    Distance<NumberVector>, NumberVectorDistance<NumberVector>, PrimitiveDistance<NumberVector>, SpatialPrimitiveDistance<NumberVector>
  • Direct Known Subclasses:
    SqrtJensenShannonDivergenceDistance

    @Reference(authors="J. Lin", title="Divergence measures based on the Shannon entropy", booktitle="IEEE Transactions on Information Theory 37(1)", url="https://doi.org/10.1109/18.61115", bibkey="DBLP:journals/tit/Lin91")
    @Reference(authors="D. M. Endres, J. E. Schindelin", title="A new metric for probability distributions", booktitle="IEEE Transactions on Information Theory 49(7)", url="https://doi.org/10.1109/TIT.2003.813506", bibkey="DBLP:journals/tit/EndresS03")
    @Reference(authors="M.-M. Deza, E. Deza", title="Dictionary of distances", booktitle="Dictionary of distances", url="https://doi.org/10.1007/978-3-642-00234-2", bibkey="doi:10.1007/978-3-642-00234-2")
    public class JensenShannonDivergenceDistance
    extends JeffreyDivergenceDistance
    Jensen-Shannon Divergence for NumberVectors is a symmetric, smoothed version of the KullbackLeiblerDivergenceAsymmetricDistance.

    It is essentially the same as JeffreyDivergenceDistance, only scaled by one half. For completeness, we include both.

    \[JS(\vec{x},\vec{y}):=\tfrac12\sum\nolimits_i \left(x_i\log\tfrac{2x_i}{x_i+y_i}+y_i\log\tfrac{2y_i}{x_i+y_i}\right) = \tfrac12 KL(\vec{x},\tfrac12(\vec{x}+\vec{y})) + \tfrac12 KL(\vec{y},\tfrac12(\vec{x}+\vec{y}))\]
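    The following is a minimal, self-contained sketch of this formula in plain Java; it is not the ELKI implementation, the helper names jensenShannon and kullbackLeibler are illustrative only, and it assumes non-negative inputs (terms with \(x_i=0\) are skipped, treating \(0\log 0\) as 0):

    public final class JSDivergenceSketch {
      // Hypothetical helper, not part of ELKI: KL(p, q) = sum_i p_i * log(p_i / q_i),
      // skipping terms with p_i == 0 (0 * log 0 is taken as 0).
      static double kullbackLeibler(double[] p, double[] q) {
        double sum = 0.;
        for(int i = 0; i < p.length; i++) {
          if(p[i] > 0) {
            sum += p[i] * Math.log(p[i] / q[i]);
          }
        }
        return sum;
      }

      // JS(x, y) = 1/2 KL(x, m) + 1/2 KL(y, m) with m = (x + y) / 2,
      // i.e. one half of the symmetrized (Jeffrey-style) sum given above.
      static double jensenShannon(double[] x, double[] y) {
        double[] m = new double[x.length];
        for(int i = 0; i < x.length; i++) {
          m[i] = 0.5 * (x[i] + y[i]);
        }
        return 0.5 * kullbackLeibler(x, m) + 0.5 * kullbackLeibler(y, m);
      }

      public static void main(String[] args) {
        double[] x = { 0.5, 0.3, 0.2 };
        double[] y = { 0.2, 0.3, 0.5 };
        // Prints a small positive value; the divergence is 0 iff x and y are identical.
        System.out.println(jensenShannon(x, y));
      }
    }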

    There also exists a weighted variant in which the two vectors are weighted with \(\beta\) and \(1-\beta\); for the common choice of \(\beta=\tfrac12\), it yields this version.
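    For reference, one common formulation of such a weighted variant (a sketch following Lin's generalization, not quoted from this class documentation) is

    \[JS_\beta(\vec{x},\vec{y}) := \beta\, KL\bigl(\vec{x},\, \beta\vec{x}+(1-\beta)\vec{y}\bigr) + (1-\beta)\, KL\bigl(\vec{y},\, \beta\vec{x}+(1-\beta)\vec{y}\bigr),\]

    which reduces to the formula above for \(\beta=\tfrac12\).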

    Reference:

    J. Lin
    Divergence measures based on the Shannon entropy
    IEEE Transactions on Information Theory 37(1)

    D. M. Endres, J. E. Schindelin
    A new metric for probability distributions
    IEEE Transactions on Information Theory 49(7)

    M.-M. Deza, E. Deza
    Dictionary of distances

    Since:
    0.6.0
    Author:
    Erich Schubert