A Quick Tour of Maximum-Likelihood Estimation

Scene 1 (0s)

A Quick Tour of Maximum-Likelihood Estimation

Scene 2 (6s)

MLE: What is it?

A tool for understanding where our observations may have come from

Scene 3 (34s)

[Three plots on the interval 0 to 1: a candidate “Alaska” distribution, a candidate “Florida” distribution, and the observations.]

“Which one?”

“Given the data, where did they come from?”

Scene 4 (1m 4s)


Given the data…

…where did they come from?

Scene 5 (1m 21s)

…where did they come from?

S is a set of parameters that describes the parent distribution from which the data D are most likely to have come.

Common choices are:

The mean

Some measure of spread, say the standard deviation

Higher moments, such as the skewness or kurtosis
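As a concrete sketch of what S and “likelihood” mean here (assuming, purely for illustration, a Gaussian parent distribution and made-up data; none of this is from the slides), the likelihood of the observations under two candidate parameter sets can be compared directly:

```python
# Sketch only: Gaussian parent assumed; data and parameters are made up.
import numpy as np
from scipy.stats import norm

D = np.array([0.61, 0.55, 0.72, 0.48, 0.66])   # hypothetical observations

def likelihood(S, D):
    """Likelihood of the data D under parameters S = (mean, spread)."""
    mean, spread = S
    return np.prod(norm.pdf(D, loc=mean, scale=spread))

S_alaska = (0.30, 0.15)    # hypothetical "Alaska" parameters
S_florida = (0.60, 0.10)   # hypothetical "Florida" parameters

print(likelihood(S_alaska, D), likelihood(S_florida, D))
# Whichever parameter set gives the larger likelihood is the better
# explanation of where the observations came from.
```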

Scene 6 (1m 39s)

Given the data…

…where do we set the parameters so that the likelihood is maximized?

[Plot on the interval 0 to 1, with the value 1.47 marked.]
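Formally (this formula is implied rather than shown on the slide), and assuming the observations d_1, …, d_n are independent draws from the parent distribution, the problem is

$$
\hat{S} = \arg\max_{S} L(S; D)
        = \arg\max_{S} \prod_{i=1}^{n} p(d_i \mid S)
        = \arg\max_{S} \sum_{i=1}^{n} \log p(d_i \mid S).
$$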

Scene 7 (2m 19s)

How do we find S?

Analytically: usually not so easy (solve y = f(x) for x)

Computationally: OK most of the time
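As an example of the analytical route (assuming, for illustration, a Gaussian parent with known spread σ), setting the derivative of the log-likelihood with respect to the mean to zero gives a closed form:

$$
\frac{\partial}{\partial\mu} \sum_{i=1}^{n} \log\!\left[ \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(d_i-\mu)^2}{2\sigma^2}} \right]
= \sum_{i=1}^{n} \frac{d_i-\mu}{\sigma^2} = 0
\quad\Longrightarrow\quad
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} d_i .
$$

Most parent distributions do not yield such a closed form, which is why the computational route matters.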

Scene 8 (2m 42s)

Computational Solution

Brute-force: work within the hypercube of volume |S_1| × |S_2| × |S_3| (the product of the sizes of the candidate ranges for each parameter)

Scene 9 (3m 14s)

L := -inf
s_L := None
for s in S1 × S2 × S3 do:
    l := Likelihood(s, D)
    if l > L then:
        L := l
        s_L := s
return L, s_L    # maximum likelihood and its location

[Plot: the maximizing location, marked X, shown against the “Florida” and “Alaska” distributions.]
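A runnable sketch of the brute-force search above, with two parameters instead of three (the Gaussian likelihood, grids, and data are illustrative assumptions, not the presenter's):

```python
# Brute-force MLE over a parameter grid (illustrative assumptions:
# Gaussian parent with parameters (mean, spread), made-up data D).
import itertools
import numpy as np
from scipy.stats import norm

D = np.array([0.61, 0.55, 0.72, 0.48, 0.66])   # hypothetical observations

def likelihood(s, D):
    mean, spread = s
    return np.prod(norm.pdf(D, loc=mean, scale=spread))

S1 = np.linspace(0.0, 1.0, 101)     # candidate means
S2 = np.linspace(0.01, 0.5, 50)     # candidate spreads

L, s_L = -np.inf, None
for s in itertools.product(S1, S2):
    l = likelihood(s, D)
    if l > L:                       # keep the largest likelihood seen so far
        L, s_L = l, s

print(L, s_L)                       # maximum likelihood and its location
```

The cost grows with the product of the grid sizes |S_i|, which is one of the drawbacks mentioned in the next scene.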

Scene 10 (4m 0s)

Some drawbacks…

There may be more than one choice of S that maximizes L.

|S_i| may be large.

Heuristics such as assuming the parent distribution is of a certain type may quickly lead to the wrong answer:

“The parent distribution is obviously binomial.”

“That 5% event will never happen. As a result, we make a Gaussianity assumption.”

Too simple…

Mortgage crisis…
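To put a rough number on that Gaussianity pitfall (my illustration, not the presenter's): compare the probability of a drop four units below the mean under a standard normal model against a heavy-tailed Student-t model.

```python
# Tail probabilities: standard normal vs. Student-t with 3 degrees of
# freedom (both choices are arbitrary, for illustration only).
from scipy.stats import norm, t

threshold = -4.0
p_normal = norm.cdf(threshold)      # ~3.2e-05
p_heavy = t.cdf(threshold, df=3)    # ~1.4e-02, hundreds of times larger

print(f"Gaussian tail:  {p_normal:.2e}")
print(f"Student-t tail: {p_heavy:.2e}")
```

Under the Gaussian assumption the extreme event looks essentially impossible; under the heavy-tailed model it is merely rare.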