[Community Question] Statistics: Can a summation be transferred into the denominator?

One of our users asked:

I include a bit of an introduction, even though my main question is more mathematical. I was tasked with finding the maximum likelihood estimate of $\theta$ in $$\mathrm P(X>x) = \left(\frac ax \right)^\theta,$$ where $X$ is a random variable and $x$ is a value it can take. Writing $F = \mathrm P(X>x)$, the probability density function is $\newcommand\diff[2]{\frac{\mathrm d#1}{\mathrm d#2}}f(x) = -\diff Fx=\frac{\theta a^\theta}{x^{\theta + 1}}$. I maximise the log-likelihood $l = \ln\theta + \theta \ln a - (\theta + 1)\ln x$ to get $\hat\theta(x_i) = \frac 1{\ln x_i - \ln a}$, where the hat indicates that $\hat\theta$ is an estimate of $\theta$ based on the data sample. Now, the answer is supposed to be $$\hat\theta = \frac 1{\overline {\ln x} - \ln a},$$ where the overline denotes the average: $\overline{\ln x} = \frac 1n \sum_i \ln x_i$. I am stumped as to how to get this answer directly from $\hat\theta(x_i)$.
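The single-observation step above can be checked numerically. The sketch below uses hypothetical values $a = 1$ and $x = 5$ (not from the post), computes the closed-form stationary point $\hat\theta(x) = 1/(\ln x - \ln a)$, and confirms by finite differences that the derivative of the log-likelihood vanishes there:

```python
import math

# Hypothetical values (not from the original post): scale a and one observation x.
a, x = 1.0, 5.0

# Per-observation log-likelihood l(theta) = ln(theta) + theta*ln(a) - (theta+1)*ln(x),
# coming from the density f(x) = -dF/dx = theta * a**theta / x**(theta + 1).
def loglik(theta):
    return math.log(theta) + theta * math.log(a) - (theta + 1) * math.log(x)

# Closed-form stationary point from l'(theta) = 1/theta + ln(a) - ln(x) = 0.
theta_hat = 1.0 / (math.log(x) - math.log(a))

# Finite-difference check that the derivative is ~0 at theta_hat.
h = 1e-6
deriv = (loglik(theta_hat + h) - loglik(theta_hat - h)) / (2 * h)
print(theta_hat, deriv)
```

With $a = 1$ this reduces to $\hat\theta = 1/\ln x$, and the finite-difference derivative at that point is numerically zero.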

Does $$\frac 1n \sum_i \frac 1{\ln x_i - \ln a} = \frac 1{\overline {\ln x} - \ln a}\qquad ?$$

I think $\frac 1n \sum_i \frac 1{\hat\theta(x_i)} = \frac 1n\sum_i (\ln x_i - \ln a) = \overline{\ln x} - \ln a = \widehat {\frac 1\theta} \implies \hat\theta = \frac 1{\overline{\ln x} - \ln a}$, but is this the only way to show the above?
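One direct route is to maximise the joint log-likelihood over all $n$ observations, $L(\theta) = n\ln\theta + n\theta\ln a - (\theta+1)\sum_i \ln x_i$, rather than combining per-observation estimates; setting $L'(\theta) = n/\theta + n\ln a - \sum_i \ln x_i = 0$ gives $\hat\theta = 1/(\overline{\ln x} - \ln a)$ immediately. A minimal sketch with made-up data (here $a = 1$ and the $x_i$ are hypothetical):

```python
import math

# Hypothetical data (not from the post): a = 1 and any observations x_i > a.
a = 1.0
xs = [2.0, 3.5, 5.0, 8.0]
n = len(xs)

# Joint log-likelihood L(theta) = n*ln(theta) + n*theta*ln(a) - (theta+1)*sum(ln x_i).
def joint_loglik(theta):
    return (n * math.log(theta) + n * theta * math.log(a)
            - (theta + 1) * sum(math.log(x) for x in xs))

# Solving L'(theta) = n/theta + n*ln(a) - sum(ln x_i) = 0 gives the pooled MLE,
# which is exactly 1 / (mean(ln x) - ln a).
mean_log = sum(math.log(x) for x in xs) / n
theta_hat = 1.0 / (mean_log - math.log(a))

# Sanity check: theta_hat should beat nearby parameter values.
print(theta_hat,
      joint_loglik(theta_hat) > joint_loglik(theta_hat * 1.1),
      joint_loglik(theta_hat) > joint_loglik(theta_hat * 0.9))
```

The pooled estimate maximises the joint likelihood directly, which is why it need not agree with the average of the single-observation estimates.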

