Entropy and information (divergence) measures

Author: Gholam Reza Mohtashami Borzadaran (غلامرضا محتشمی برزادران)
Year: 2015
Abstract: The extension of the notion of a measure of information, with applications in communication theory, goes back to the work of C. E. Shannon during the Second World War. In 1948, he introduced entropy as a real number associated with a random variable, equal to the expected value of the surprise we receive upon observing a realization of that variable. Let S_a(p) = −log_a p (base 2 is often used) be the measure of surprise we feel when an event with probability p actually occurs. The entropy of a random variable is then computed from its probability mass (or density) function via H_a(X) = E[S_a(p(X))]. Some properties and characterizations of the Shannon entropy and its extended versions are mentioned here.
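As an illustration of the definition just stated, the minimal Python sketch below (not part of the paper; the example pmfs and the choice of base are arbitrary) computes the surprise S_a(p) and the entropy H_a(X) = E[S_a(p(X))] for a discrete random variable.

```python
import math

def surprise(p: float, base: float = 2.0) -> float:
    """S_a(p) = -log_a p: the surprise of an event with probability p."""
    return -math.log(p, base)

def entropy(pmf: list[float], base: float = 2.0) -> float:
    """H_a(X) = E[S_a(p(X))] for a discrete random variable with the given pmf."""
    assert abs(sum(pmf) - 1.0) < 1e-9, "probabilities must sum to 1"
    # Terms with p = 0 contribute nothing to the expectation (0 * log 0 := 0).
    return sum(p * surprise(p, base) for p in pmf if p > 0)

# A fair coin carries one bit of entropy; a biased coin carries less.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.9, 0.1]))   # about 0.469
```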

Expressions for multivariate distributions (discrete or continuous) and information measures such as mutual information, together with some of their properties and a discussion from the copula point of view, are also reviewed.
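For a concrete multivariate case, the mutual information of two discrete variables can be written as the Kullback-Leibler divergence between the joint pmf and the product of its marginals; the short sketch below (an illustration with a made-up 2x2 joint table, not code from the paper) makes this explicit.

```python
import math

def mutual_information(joint: list[list[float]], base: float = 2.0) -> float:
    """I(X; Y) = sum over (x, y) of p(x, y) * log( p(x, y) / (p(x) p(y)) )."""
    px = [sum(row) for row in joint]          # marginal pmf of X (row sums)
    py = [sum(col) for col in zip(*joint)]    # marginal pmf of Y (column sums)
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log(pxy / (px[i] * py[j]), base)
    return mi

# Independent variables have zero mutual information; dependence raises it.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
print(mutual_information([[0.40, 0.10], [0.10, 0.40]]))  # about 0.278
```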

The principle of maximum entropy provides a method for selecting the unknown pdf (or pmf) that maximizes entropy under specified constraints. This idea was introduced by Jaynes (1957) and obtained via a theorem by Kagan et al. (1973). Applied to maximum Renyi or Tsallis entropy, and to φ-entropy as a general format, it subsumes many special cases. Similar arguments apply in a multivariate set-up. (A worked numerical instance of the principle is sketched below.)
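As a worked instance of the principle (a sketch under assumptions, not the authors' construction), the maximum-entropy pmf on a finite support subject to a mean constraint has the Gibbs form p_i proportional to exp(λ x_i); the code below solves for the Lagrange multiplier numerically with scipy.optimize.brentq, using the faces of a die as a hypothetical support.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_pmf(support: np.ndarray, target_mean: float) -> np.ndarray:
    """Maximum-entropy pmf on `support` subject to E[X] = target_mean.

    The solution has the Gibbs/exponential-family form p_i ∝ exp(lam * x_i);
    we solve for the Lagrange multiplier lam that matches the mean constraint.
    """
    def mean_for(lam: float) -> float:
        w = np.exp(lam * support)
        p = w / w.sum()
        return float(p @ support)

    # Bracket the multiplier and solve mean_for(lam) = target_mean.
    lam = brentq(lambda l: mean_for(l) - target_mean, -50.0, 50.0)
    w = np.exp(lam * support)
    return w / w.sum()

# With the unconstrained mean 3.5 the maxent pmf on a die's faces is uniform;
# requiring mean 4.5 tilts the weights toward the larger faces.
faces = np.arange(1, 7, dtype=float)
print(maxent_pmf(faces, 3.5))   # approximately 1/6 each
print(maxent_pmf(faces, 4.5))   # increasing weights
```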

In probability theory and information theory, the Kullback-Leibler divergence (also called information divergence, information gain, or relative entropy) is a non-symmetric measure of the difference between two probability distributions. Typically, one of the two distributions represents the "true" distribution of the data and the other a theoretical model that approximates it. Although it is often intuited as a metric or distance, the KL divergence is not a true metric. Reviewing its properties and its various applications in statistics is one of our aims here. The link between maximum likelihood, maximum entropy, and Kullback-Leibler information is important for the discussion in this note. Several types of information divergence measures have been studied in the literature as extensions of the Shannon entropy and Kullback-Leibler information; many of them arise as special cases of the Csiszar φ-divergence. Minimizing these divergences and finding the resulting optimal measures is the other direction discussed in this paper, along with related special cases such as Kullback-Leibler information, χ²-divergence, total variation, squared perimeter distance, Renyi divergence, Hellinger distance, directed divergence, and so on.
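To make the asymmetry of the KL divergence and its membership in the φ-divergence family concrete, the sketch below (illustrative only, with two arbitrary discrete pmfs; not taken from the paper) evaluates D(p||q), D(q||p), and the Hellinger distance.

```python
import math

def kl_divergence(p: list[float], q: list[float], base: float = 2.0) -> float:
    """D(p || q) = sum_i p_i * log(p_i / q_i); requires q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

def hellinger(p: list[float], q: list[float]) -> float:
    """Hellinger distance, another member of the Csiszar phi-divergence family."""
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

# KL is not symmetric, so it is not a metric; Hellinger is symmetric.
print(kl_divergence(p, q))              # about 0.265
print(kl_divergence(q, p))              # about 0.277, different from the above
print(hellinger(p, q), hellinger(q, p)) # identical, about 0.216
```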
URI: http://libsearch.um.ac.ir:80/fum/handle/fum/3390605
Keywords: Entropy, Maximum entropy, Kullback Leibler information, Information measures, Minimization of Kullback Leibler information
Collection:
  • ProfDoc

Full item record:

contributor author: Gholam Reza Mohtashami Borzadaran (غلامرضا محتشمی برزادران)
date accessioned: 2020-06-06T14:17:49Z
date available: 2020-06-06T14:17:49Z
date copyright: 1/28/2015
date issued: 2015
identifier uri: http://libsearch.um.ac.ir:80/fum/handle/fum/3390605?locale-attribute=fa
language: English
title: Entropy and information (divergence) measures
type: Conference Paper
content type: External Fulltext
subject keywords: Entropy, Maximum entropy, Kullback Leibler information, Information measures, Minimization of Kullback Leibler information
identifier link: https://profdoc.um.ac.ir/paper-abstract-1047292.html
conference title: دومین کارگاه اندازه های اطلاعات و کاربردهای آن (Second Workshop on Information Measures and Their Applications)
conference location: Mashhad
identifier articleid: 1047292