Answer:
$$\hat p = \frac{r}{\bar x + r}$$
Step-by-step explanation:
A negative binomial random variable counts the number X of failures observed before the r-th success in a sequence of independent Bernoulli trials. The probability distribution of a negative binomial random variable is called a negative binomial distribution; this distribution is also known as the Pascal distribution.
And the probability mass function is given by:
$$P(X = x) = \binom{x + r - 1}{x} p^{r} (1-p)^{x}, \quad x = 0, 1, 2, \dots$$
Where r represents the number of successes, x the number of failures, and p is the probability of a success on any given trial.
Solution to the problem
For this case the likelihood function is given by:
$$L(p, x_i) = \prod_{i=1}^n f(p, x_i)$$
Substituting the mass function we get:
$$L(p, x_i) = \prod_{i=1}^n \binom{x_i + r - 1}{x_i} p^{r} (1-p)^{x_i}$$
Taking the logarithm of the likelihood function we get the log-likelihood:
$$\ell(p, x_i) = \sum_{i=1}^n \left[\log \binom{x_i + r - 1}{x_i} + r \log(p) + x_i \log(1-p)\right]$$
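As a quick numerical sanity check (a sketch using NumPy, with r, the true p, and the sample size chosen arbitrarily for illustration), we can maximize this log-likelihood over a fine grid of p values and compare with the closed-form answer $\hat p = r/(\bar x + r)$ given above:

```python
import numpy as np

# Simulated data (r, p_true, n are arbitrary values chosen for illustration)
rng = np.random.default_rng(0)
r, p_true, n = 5, 0.3, 10_000
x = rng.negative_binomial(r, p_true, size=n)  # failures before the r-th success

# Log-likelihood up to the constant log-binomial term, which does not depend on p:
# l(p) = n*r*log(p) + (sum of x_i)*log(1-p)
grid = np.linspace(1e-4, 1 - 1e-4, 100_000)
loglik = n * r * np.log(grid) + x.sum() * np.log(1 - grid)

p_grid = grid[np.argmax(loglik)]   # numerical maximizer of the log-likelihood
p_closed = r / (x.mean() + r)      # closed-form MLE from this derivation
print(p_grid, p_closed)            # the two estimates agree
```

Dropping the log-binomial term is safe here because it does not involve p, so it shifts the log-likelihood by a constant without moving the maximizer.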
And in order to find the maximum likelihood estimator for p we take the derivative of the last expression (the binomial term is constant in p and vanishes):
$$\frac{d\ell(p, x_i)}{dp} = \sum_{i=1}^n \left(\frac{r}{p} - \frac{x_i}{1-p}\right)$$
And we can separate the sum to get:
$$\frac{d\ell(p, x_i)}{dp} = \sum_{i=1}^n \frac{r}{p} - \sum_{i=1}^n \frac{x_i}{1-p}$$
Now we find the critical point by setting this derivative equal to zero:
$$\frac{d\ell(p, x_i)}{dp} = \sum_{i=1}^n \frac{r}{p} - \sum_{i=1}^n \frac{x_i}{1-p} = 0$$
$$\sum_{i=1}^n \frac{r}{p} = \sum_{i=1}^n \frac{x_i}{1-p}$$
Since p is a fixed value that does not depend on the summation index, the left and right sides simplify to:
$$\frac{nr}{p} = \frac{\sum_{i=1}^n x_i}{1-p}$$
Now we need to solve for p from the last equation, like this:
$$nr(1-p) = p \sum_{i=1}^n x_i$$
$$nr - nrp = p \sum_{i=1}^n x_i$$
$$p \sum_{i=1}^n x_i + nrp = nr$$
$$p\left[\sum_{i=1}^n x_i + nr\right] = nr$$
And solving for p we get:
$$\hat p = \frac{nr}{\sum_{i=1}^n x_i + nr}$$
And dividing the numerator and denominator by n, since $\bar x = \frac{\sum_{i=1}^n x_i}{n}$, we get:
$$\hat p = \frac{r}{\bar x + r}$$
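The final estimator is easy to verify on simulated data; a minimal sketch (the parameter values are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
r, p_true = 4, 0.25
x = rng.negative_binomial(r, p_true, size=50_000)  # failures before the r-th success

p_hat = r / (x.mean() + r)  # MLE: p-hat = r / (x-bar + r)
print(p_hat)                # close to p_true = 0.25
```

With a large sample the estimate lands very close to the true p, as expected for a maximum likelihood estimator.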