Hi there, what you need is Lagrange multipliers for constrained minimisation. It works like this:
$$V(X)=\alpha^2\sigma^2_{\bar{X}_1}+\beta^2\sigma^2_{\bar{X}_2}$$
Now we want to minimise this subject to $\alpha+\beta=1$, or equivalently $\alpha+\beta-1=0$.
We proceed by writing a function of $\alpha$ and $\beta$ (the parameters you want to change to minimise the variance of $X$), but we also introduce another parameter $\lambda$ that multiplies the constraint. Thus we want to minimise
$$f(\alpha,\beta,\lambda)=\alpha^2\sigma^2_{\bar{X}_1}+\beta^2\sigma^2_{\bar{X}_2}+\lambda(\alpha+\beta-1).$$
We partially differentiate this function with respect to each parameter and set each partial derivative equal to zero. This gives:
$$\frac{\partial f}{\partial \alpha}=2\alpha\sigma^2_{\bar{X}_1}+\lambda=0$$
$$\frac{\partial f}{\partial \beta}=2\beta\sigma^2_{\bar{X}_2}+\lambda=0$$
$$\frac{\partial f}{\partial \lambda}=\alpha+\beta-1=0$$
Setting the first two equations equal to each other (the $\lambda$ terms cancel) we get
$$\alpha=\beta\,\frac{\sigma^2_{\bar{X}_2}}{\sigma^2_{\bar{X}_1}}$$
Substituting $\beta = 1-\alpha$ into this expression and re-arranging gives the result for $\alpha$, namely $\alpha = \dfrac{\sigma^2_{\bar{X}_2}}{\sigma^2_{\bar{X}_1}+\sigma^2_{\bar{X}_2}}$. Repeating the same steps but isolating $\beta$ gives $\beta = \dfrac{\sigma^2_{\bar{X}_1}}{\sigma^2_{\bar{X}_1}+\sigma^2_{\bar{X}_2}}$.
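If you want to sanity-check the algebra, here is a minimal SymPy sketch (not part of the working above, just an illustration): `sigma1` and `sigma2` stand in for $\sigma_{\bar{X}_1}$ and $\sigma_{\bar{X}_2}$, and the three stationarity equations are solved symbolically.

```python
import sympy as sp

# alpha, beta, the multiplier, and the two standard deviations (assumed positive)
a, b, lam, s1, s2 = sp.symbols('alpha beta lamda sigma1 sigma2', positive=True)

# f(alpha, beta, lambda) = alpha^2*sigma1^2 + beta^2*sigma2^2 + lambda*(alpha + beta - 1)
f = a**2 * s1**2 + b**2 * s2**2 + lam * (a + b - 1)

# Set the three partial derivatives to zero and solve the resulting linear system
sol = sp.solve([sp.diff(f, v) for v in (a, b, lam)], (a, b, lam), dict=True)[0]

print(sol[a])  # sigma2**2/(sigma1**2 + sigma2**2)
print(sol[b])  # sigma1**2/(sigma1**2 + sigma2**2)
```

The printed weights match the hand derivation: each coefficient is inversely weighted by its own variance.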
Lagrange multipliers and constrained minimisation crop up often in stats problems. I hope this helps! And gosh, that was a lot to type! xD