Show simple item record

contributor authorغلامرضا محتشمی برزادرانfa
contributor authorGholam Reza Mohtashami Borzadaranen
date accessioned2020-06-06T14:17:49Z
date available2020-06-06T14:17:49Z
date copyright1/28/2015
date issued2015
identifier urihttps://libsearch.um.ac.ir:443/fum/handle/fum/3390605?show=full
description abstractThe extension of the notion of a measure of information, with applications in communication theory, dates back to the work of C. E. Shannon during World War II. In 1948, he introduced the entropy as a real number associated with a random variable, equal to the expected value of the surprise we receive upon observing a realization of that variable. Let S_a(p) = −log_a p (base 2 is often used) be the measure of surprise we feel when an event with probability p actually occurs. The entropy of a random variable is then computed from its probability mass function (or pdf) via H_a(X) = E[S_a(p(X))]. Some properties and characterizations of the Shannon entropy and its extended versions are mentioned here.
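The definition above, H_a(X) = E[S_a(p(X))], can be sketched numerically. This is a minimal illustration, not code from the paper; the function names `surprise` and `entropy` are ours, and the example distributions are assumed for demonstration.

```python
import math

def surprise(p, base=2.0):
    # S_a(p) = -log_a(p): the surprise of an event with probability p
    return -math.log(p, base)

def entropy(pmf, base=2.0):
    # H_a(X) = E[S_a(p(X))]: expected surprise over the support of the pmf
    return sum(p * surprise(p, base) for p in pmf if p > 0)

fair_coin = [0.5, 0.5]      # maximally uncertain two-point pmf
biased_coin = [0.9, 0.1]    # a more predictable pmf

h_fair = entropy(fair_coin)      # 1 bit for a fair coin
h_biased = entropy(biased_coin)  # less than 1 bit: outcomes are less surprising on average
```

With base 2 the unit is bits; the biased coin has strictly lower entropy than the fair one, matching the intuition of entropy as average surprise.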

Expressions for multivariate distributions (discrete or continuous) and for information measures such as mutual information, together with some of their properties and a discussion from the viewpoint of copulas, are also reviewed.
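For the discrete case, mutual information can be computed directly from a joint pmf via I(X;Y) = Σ p(x,y) log[p(x,y) / (p(x)p(y))]. A minimal sketch, assuming the joint pmf is given as a dictionary (the representation and example distributions are ours, not the paper's):

```python
import math

def mutual_information(joint, base=2.0):
    # joint: dict {(x, y): p(x, y)}
    # I(X;Y) = sum over (x,y) of p(x,y) * log[ p(x,y) / (p(x) p(y)) ]
    px, py = {}, {}
    for (x, y), p in joint.items():  # accumulate the marginals
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y]), base)
               for (x, y), p in joint.items() if p > 0)

# perfectly dependent pair: knowing X determines Y, so I(X;Y) = H(X) = 1 bit
dependent = {(0, 0): 0.5, (1, 1): 0.5}
# independent pair: the joint factorizes, so I(X;Y) = 0
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
```

The two extreme cases bracket the measure: mutual information vanishes exactly under independence and equals the marginal entropy under perfect dependence.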

The principle of maximum entropy provides a method for selecting the unknown pdf (or pmf) that maximizes the entropy under specified constraints. This idea was introduced by Jaynes (1957) and formalized in a theorem by Kagan et al. (1973). Applied to the maximum Rényi or Tsallis entropy, and also to the φ-entropy as a general format, it subsumes many special cases. Similar arguments apply in a multivariate set-up.
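The standard forms of the extended entropies mentioned above are H_α(p) = log(Σ p_i^α)/(1−α) for Rényi and S_q(p) = (1 − Σ p_i^q)/(q−1) for Tsallis. A minimal sketch (our own illustrative code) that also hints at the maximum-entropy principle: with no constraint beyond normalization, the uniform pmf maximizes Shannon entropy.

```python
import math

def shannon(pmf):
    # Shannon entropy in bits
    return -sum(p * math.log(p, 2) for p in pmf if p > 0)

def renyi(pmf, alpha):
    # Renyi entropy H_alpha = log2(sum p_i^alpha) / (1 - alpha), alpha != 1
    return math.log(sum(p ** alpha for p in pmf), 2) / (1.0 - alpha)

def tsallis(pmf, q):
    # Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), q != 1
    return (1.0 - sum(p ** q for p in pmf)) / (q - 1.0)

uniform = [0.25] * 4          # entropy log2(4) = 2 bits, the maximum on 4 points
skewed = [0.7, 0.1, 0.1, 0.1]  # any non-uniform pmf has strictly lower entropy
```

Both Rényi and Tsallis recover the Shannon entropy in the limit α → 1 (respectively q → 1), which is why they are treated as extended versions of the same notion.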

In probability theory and information theory, the Kullback-Leibler divergence (also called information divergence, information gain, or relative entropy) is a non-symmetric measure of the difference between two probability distributions. Typically, one distribution represents the "true" distribution of the data and the other a theoretical model approximating it. Although it is often intuited as a metric or distance, the KL divergence is not a true metric. Surveying its properties and its various applications in statistics is one of our aims here. The link between maximum likelihood, maximum entropy, and Kullback-Leibler information is important for a discussion that follows in this note. Several types of information divergence measures have been studied in the literature as extensions of the Shannon entropy and the Kullback-Leibler information. Many of them arise as special cases of the Csiszár φ-divergence. Hence, minimizing these divergences is important, and finding the resulting optimal measures is the other direction discussed in this paper, with related special cases such as the Kullback-Leibler information, χ²-divergence, total variation, squared perimeter distance, Rényi divergence, Hellinger distance, directed divergence, and so on.
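Several of the special cases named above can be generated from one formula, D_φ(P‖Q) = Σ q_i φ(p_i/q_i), by varying the convex function φ. The sketch below is our own illustration (the generator functions follow common conventions, which vary across the literature), and it also exhibits the non-symmetry of the KL divergence:

```python
import math

def phi_divergence(p, q, phi):
    # Csiszar phi-divergence: D_phi(P||Q) = sum_i q_i * phi(p_i / q_i)
    # assumes q_i > 0 wherever p_i > 0
    return sum(qi * phi(pi / qi) for pi, qi in zip(p, q) if qi > 0)

kl   = lambda t: t * math.log(t) if t > 0 else 0.0   # Kullback-Leibler (nats)
chi2 = lambda t: (t - 1.0) ** 2                      # chi-square divergence
tv   = lambda t: 0.5 * abs(t - 1.0)                  # total variation
hell = lambda t: (math.sqrt(t) - 1.0) ** 2           # squared Hellinger distance

p = [0.5, 0.5]
q = [0.9, 0.1]
# not a metric: D_kl(P||Q) and D_kl(Q||P) generally differ
d_pq = phi_divergence(p, q, kl)
d_qp = phi_divergence(q, p, kl)
```

Every φ-divergence with φ convex and φ(1) = 0 is non-negative and vanishes when P = Q, which is exactly what makes minimizing D_φ over a family of models a well-posed way to find the optimal approximating distribution.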
en
languageEnglish
titleEntropy and information (divergence) measuresen
typeConference Paper
contenttypeExternal Fulltext
subject keywordsEntropyen
subject keywordsMaximum entropyen
subject keywordsKullback Leibler informationen
subject keywordsInformation measuresen
subject keywordsMinimization of Kullback Leibler informationen
identifier linkhttps://profdoc.um.ac.ir/paper-abstract-1047292.html
conference titleThe Second Workshop on Information Measures and Their Applicationsfa
conference locationMashhadfa
identifier articleid1047292

