Answer:
a) $E(\hat\theta_1) = \frac{1}{2}[E(X_1) + E(X_2)] = \frac{1}{2}[\mu + \mu] = \mu$

So we conclude that $\hat\theta_1$ is an unbiased estimator of $\mu$.

$E(\hat\theta_2) = \frac{1}{4}[E(X_1) + 3E(X_2)] = \frac{1}{4}[\mu + 3\mu] = \mu$

So we conclude that $\hat\theta_2$ is an unbiased estimator of $\mu$.

b) $Var(\hat\theta_1) = \frac{1}{4}[\sigma^2 + \sigma^2] = \frac{\sigma^2}{2}$

$Var(\hat\theta_2) = \frac{1}{16}[\sigma^2 + 9\sigma^2] = \frac{5\sigma^2}{8}$
Step-by-step explanation:
For this case we know that we have two random variables $X_1$ and $X_2$, both with mean $\mu$ and variance $\sigma^2$.

And we define the following estimators:

$\hat\theta_1 = \frac{X_1 + X_2}{2}$

$\hat\theta_2 = \frac{X_1 + 3X_2}{4}$
Part a
In order to see whether both estimators are unbiased, we need to show that the expected value of each estimator equals the true value of the parameter:

$E(\hat\theta) = \mu$
So let's find the expected value of each estimator. For the first one:

$E(\hat\theta_1) = E\left(\frac{X_1 + X_2}{2}\right)$

Using the properties of expected value we have:

$E(\hat\theta_1) = \frac{1}{2}[E(X_1) + E(X_2)] = \frac{1}{2}[\mu + \mu] = \mu$

So we conclude that $\hat\theta_1$ is an unbiased estimator of $\mu$.
For the second estimator we have:

$E(\hat\theta_2) = E\left(\frac{X_1 + 3X_2}{4}\right)$

Using the properties of expected value we have:

$E(\hat\theta_2) = \frac{1}{4}[E(X_1) + 3E(X_2)] = \frac{1}{4}[\mu + 3\mu] = \mu$

So we conclude that $\hat\theta_2$ is an unbiased estimator of $\mu$.
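The unbiasedness conclusion can be checked numerically. Below is a minimal Monte Carlo sketch (not part of the original answer); the choice of normal draws with $\mu = 5$ and $\sigma = 2$ is an assumption made only for concreteness, since any distribution with that mean would do:

```python
import random

# Assumed setup for illustration: X1, X2 ~ Normal(mu, sigma) with mu = 5.
random.seed(0)
mu, sigma = 5.0, 2.0
n = 200_000

t1_sum = 0.0
t2_sum = 0.0
for _ in range(n):
    x1 = random.gauss(mu, sigma)
    x2 = random.gauss(mu, sigma)
    t1_sum += (x1 + x2) / 2       # theta_hat_1 = (X1 + X2)/2
    t2_sum += (x1 + 3 * x2) / 4   # theta_hat_2 = (X1 + 3*X2)/4

# Both sample averages should land very close to mu = 5,
# consistent with E(theta_hat_1) = E(theta_hat_2) = mu.
print(t1_sum / n)
print(t2_sum / n)
```

With 200,000 replications the sampling error of each average is on the order of $0.003$, so both printed values agree with $\mu$ to about two decimal places.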
Part b
For the variance we need to remember this property: if $a$ is a constant and $X$ is a random variable, then:

$Var(aX) = a^2 Var(X)$
For the first estimator we have:

$Var(\hat\theta_1) = Var\left(\frac{X_1 + X_2}{2}\right)$

$Var(\hat\theta_1) = \frac{1}{4}Var(X_1 + X_2) = \frac{1}{4}[Var(X_1) + Var(X_2) + 2\,Cov(X_1, X_2)]$

Since both random variables are independent we know that $Cov(X_1, X_2) = 0$, so then we have:

$Var(\hat\theta_1) = \frac{1}{4}[\sigma^2 + \sigma^2] = \frac{\sigma^2}{2}$
For the second estimator we have:

$Var(\hat\theta_2) = Var\left(\frac{X_1 + 3X_2}{4}\right)$

$Var(\hat\theta_2) = \frac{1}{16}Var(X_1 + 3X_2) = \frac{1}{16}[Var(X_1) + Var(3X_2) + 2\,Cov(X_1, 3X_2)]$

Since both random variables are independent we know that $Cov(X_1, 3X_2) = 0$, so then we have:

$Var(\hat\theta_2) = \frac{1}{16}[\sigma^2 + 9\sigma^2] = \frac{10\sigma^2}{16} = \frac{5\sigma^2}{8}$
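The two variance results can also be checked by simulation. This is a sketch under the same assumed setup as before (normal draws with $\mu = 5$, $\sigma = 2$); the sample variances should come out near $\sigma^2/2 = 2$ and $5\sigma^2/8 = 2.5$:

```python
import random

# Assumed setup for illustration: X1, X2 ~ Normal(mu, sigma), independent.
random.seed(1)
mu, sigma = 5.0, 2.0
n = 200_000

t1 = []
t2 = []
for _ in range(n):
    x1 = random.gauss(mu, sigma)
    x2 = random.gauss(mu, sigma)
    t1.append((x1 + x2) / 2)       # theta_hat_1
    t2.append((x1 + 3 * x2) / 4)   # theta_hat_2

def sample_var(xs):
    """Unbiased sample variance (divides by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Should be close to sigma^2/2 = 2.0 and 5*sigma^2/8 = 2.5 respectively.
print(sample_var(t1))
print(sample_var(t2))
```

Note that $\sigma^2/2 < 5\sigma^2/8$, which the simulation reproduces: $\hat\theta_1$ has the smaller variance of the two unbiased estimators.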