Answer:

a) The estimator $\hat{a} = Y = \max(X_1, \dots, X_n)$ is biased: since $X_i \sim Unif[0,a]$, the value of $Y$ is always smaller than $a$, while an unbiased estimator must satisfy $E[\hat{a}] = a$.

b) $Bias(\hat{a}) = E[Y] - a = -\frac{a}{n+1}$. Since this is negative, the estimator underestimates the real value $a$; the bias tends to 0 as $n \to \infty$.

c) $F_Y(y) = P(Y \leq y) = \left(\frac{y}{a}\right)^n$ and $f_Y(y) = \frac{n}{a^n} y^{n-1}$ for $y \in [0,a]$.

e) The estimator based on the maximum is better, since $\frac{a^2}{n(n+2)} < \frac{a^2}{3n}$, and that's satisfied for $n > 1$.
Step-by-step explanation:
Part a

For this case we are assuming $X_1, X_2, \dots, X_n \sim Unif[0,a]$, independent and identically distributed.

And we are assuming the following estimator:

$\hat{a} = Y = \max(X_1, X_2, \dots, X_n)$

For this case the value of $Y$ is always smaller than the value of $a$, since each $X_i \sim Unif[0,a]$. So this estimator cannot be unbiased, because an unbiased estimator satisfies this property:

$E[\hat{a}] = a$

and that's not our case.
Part b

For this case we assume that the estimator is given by:

$\hat{a} = Y = \max(X_1, X_2, \dots, X_n)$

And using the definition of bias, together with $E[Y] = \frac{n}{n+1}a$ (derived in part c), we have this:

$Bias(\hat{a}) = E[Y] - a = \frac{n}{n+1}a - a = -\frac{a}{n+1}$

Since $-\frac{a}{n+1}$ is a negative value, we can conclude that $\hat{a}$ underestimates the real value $a$.

And when we take the limit as $n$ tends to infinity, we get that the bias tends to 0:

$\lim_{n \to \infty} \left(-\frac{a}{n+1}\right) = 0$
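The negative bias described above can be checked numerically. A minimal Monte Carlo sketch, assuming illustration values $a = 5$ and $n = 10$ (these specific numbers are not from the exercise):

```python
import random

# Hypothetical illustration values: true a = 5, sample size n = 10
a, n, trials = 5.0, 10, 200_000

random.seed(42)
# Average of max(X_1, ..., X_n) over many simulated samples
mean_max = sum(
    max(random.uniform(0, a) for _ in range(n)) for _ in range(trials)
) / trials

empirical_bias = mean_max - a      # should be close to -a/(n+1)
theoretical_bias = -a / (n + 1)    # = -5/11 ~ -0.4545
print(empirical_bias, theoretical_bias)
```

With 200,000 replications the empirical bias lands within a few thousandths of $-a/(n+1)$, and it is clearly negative.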
Part c

For this case we have the random variable $Y = \max(X_1, X_2, \dots, X_n)$ and we can find the cumulative distribution function like this:

$F_Y(y) = P(Y \leq y) = P(X_1 \leq y, X_2 \leq y, \dots, X_n \leq y)$

And assuming independence we have this:

$P(Y \leq y) = P(X_1 \leq y) P(X_2 \leq y) \cdots P(X_n \leq y) = [P(X_1 \leq y)]^n = \left(\frac{y}{a}\right)^n$

since all the random variables have the same distribution, with $P(X_i \leq y) = \frac{y}{a}$ for $y \in [0,a]$.

Now we can find the density function by differentiating the distribution function:

$f_Y(y) = n\left(\frac{y}{a}\right)^{n-1} \cdot \frac{1}{a} = \frac{n}{a^n}\, y^{n-1}, \quad y \in [0,a]$

Now we can find the expected value of the random variable $Y$:

$E[Y] = \int_0^a y \, \frac{n}{a^n}\, y^{n-1} \, dy = \frac{n}{a^n} \cdot \frac{a^{n+1}}{n+1} = \frac{n}{n+1}\, a$

And the bias is given by:

$Bias(\hat{a}) = E[Y] - a = \frac{n}{n+1}a - a = -\frac{a}{n+1}$

And again, since the bias is not 0, we have a biased estimator.
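The formula $P(Y \leq y) = (y/a)^n$ can also be verified by simulation. A short sketch, assuming illustration values $a = 2$, $n = 5$, and checkpoint $y = 1.5$ (all chosen here for the example only):

```python
import random

# Hypothetical illustration values: a = 2, n = 5, evaluate CDF at y = 1.5
a, n, trials = 2.0, 5, 100_000
random.seed(0)

maxima = [max(random.uniform(0, a) for _ in range(n)) for _ in range(trials)]

y = 1.5
empirical_cdf = sum(m <= y for m in maxima) / trials
theoretical_cdf = (y / a) ** n     # (1.5/2)^5 = 0.2373...
print(empirical_cdf, theoretical_cdf)
```

The empirical proportion of maxima below $y$ matches $(y/a)^n$ to about two decimal places at this sample size.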
Part e

For this case we have two estimators with the following variances:

$Var(\hat{a}_1) = Var(2\bar{X}) = \frac{a^2}{3n}$

$Var(\hat{a}_2) = Var\left(\frac{n+1}{n}\, Y\right) = \frac{a^2}{n(n+2)}$

In this case we see that the estimator $\hat{a}_2$ is better than $\hat{a}_1$, and the reason why is because:

$\frac{a^2}{n(n+2)} < \frac{a^2}{3n} \iff 3n < n(n+2) \iff 3 < n + 2 \iff n > 1$

and that's satisfied for $n > 1$.
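The variance comparison can be checked empirically. The sketch below assumes the two estimators are $\hat{a}_1 = 2\bar{X}$ and $\hat{a}_2 = \frac{n+1}{n}\max(X_i)$ (the standard pair for this exercise), with illustration values $a = 3$ and $n = 10$:

```python
import random
import statistics

# Assumed estimator forms: a1 = 2 * sample mean, a2 = (n+1)/n * sample max
# Hypothetical illustration values: a = 3, n = 10
a, n, trials = 3.0, 10, 100_000
random.seed(1)

est1, est2 = [], []
for _ in range(trials):
    xs = [random.uniform(0, a) for _ in range(n)]
    est1.append(2 * sum(xs) / n)          # moment estimator 2 * sample mean
    est2.append((n + 1) / n * max(xs))    # bias-corrected maximum

v1 = statistics.pvariance(est1)   # should be near a^2/(3n)     = 0.3
v2 = statistics.pvariance(est2)   # should be near a^2/(n(n+2)) = 0.075
print(v1, v2)
```

As expected, the empirical variance of the maximum-based estimator is several times smaller than that of the moment estimator for $n > 1$.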