Answer:
(A)
Step-by-step explanation:
Since all of the answer choices are different, getting the first reason right means the rest fall into place. A is your answer, for the following reason:
Reflexive Property: the reflexive property states that a segment (or figure) is always congruent to itself. To make this easier to spot, the two copies are often written with the letters in the opposite order (as they did in the given proof):
*line* AC ≅ *line* CA
~
For this function we can find the y-intercept: set x = 0, which gives y = -2.
The matching graph is the one on the top right.
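As an illustration of the method only (the original function is not reproduced above, so the f(x) below is a hypothetical stand-in with the same y-intercept), a minimal Python sketch of "evaluate at x = 0":

```python
# The y-intercept of any function is its value at x = 0.
# Hypothetical example function; the original question's function is not shown here,
# only that its y-intercept is -2.
def f(x):
    return 3 * x - 2  # placeholder with y-intercept -2

y_intercept = f(0)
print(y_intercept)  # -2
```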
Answer:
w= -5
Step-by-step explanation:
Add 4 to both sides:
- -2w - 4 + 4 = 6 + 4
- -2w = 10
Isolate "w" by dividing both sides by -2:
- w = 10 / (-2) = -5
Answer:
(a) E[X + Y] = E[X] + E[Y]
(b) Var(X + Y) = Var(X) + Var(Y)
Step-by-step explanation:
Let X and Y be discrete random variables, and let E(X) and Var(X) denote the expected value and variance of X, respectively.
(a) We want to show that E[X + Y] = E[X] + E[Y].
When we have two random variables instead of one, we consider their joint distribution function.
For a function f(X,Y) of discrete variables X and Y, we can define
E[f(X, Y)] = \sum_{x,y} f(x, y) \cdot P(X = x, Y = y).
Since f(X,Y)=X+Y
E[X + Y] = \sum_{x,y} (x + y) P(X = x, Y = y) = \sum_{x,y} x P(X = x, Y = y) + \sum_{x,y} y P(X = x, Y = y).
Let us look at the first of these sums.
\sum_{x,y} x P(X = x, Y = y) = \sum_{x} x \sum_{y} P(X = x, Y = y) = \sum_{x} x P(X = x) = E[X],
where the inner sum over y gives the marginal distribution of X.
Similarly,
\sum_{x,y} y P(X = x, Y = y) = \sum_{y} y \sum_{x} P(X = x, Y = y) = \sum_{y} y P(Y = y) = E[Y],
using the marginal distribution of Y.
Combining these two sums gives the formula:
E[X + Y] = E[X] + E[Y], as required.
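To make part (a) concrete, here is a small numerical check (a sketch using a made-up joint pmf, not part of the original proof): E[X + Y] computed directly from the joint distribution matches E[X] + E[Y] computed from the marginals.

```python
import numpy as np

# Made-up joint pmf P(X = x, Y = y) over x in {0, 1, 2} and y in {0, 1}.
x_vals = np.array([0, 1, 2])
y_vals = np.array([0, 1])
joint = np.array([[0.10, 0.15],
                  [0.20, 0.25],
                  [0.05, 0.25]])  # rows index x, columns index y; entries sum to 1

# E[X + Y] computed directly from the joint distribution.
E_sum = sum(joint[i, j] * (x_vals[i] + y_vals[j])
            for i in range(len(x_vals)) for j in range(len(y_vals)))

# E[X] and E[Y] from the marginal distributions.
E_X = (joint.sum(axis=1) * x_vals).sum()
E_Y = (joint.sum(axis=0) * y_vals).sum()

print(E_sum, E_X + E_Y)  # both print 1.7, so the two values agree
```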
(b) We want to show that if X and Y are independent random variables, then:
Var(X + Y) = Var(X) + Var(Y).
By definition of Variance, we have that:
Var(X + Y) = E[(X + Y - E[X + Y])^2]
= E[(X - \mu_X + Y - \mu_Y)^2], where \mu_X = E[X] and \mu_Y = E[Y],
= E[(X - \mu_X)^2 + (Y - \mu_Y)^2 + 2(X - \mu_X)(Y - \mu_Y)]
= E[(X - \mu_X)^2] + E[(Y - \mu_Y)^2] + 2E[(X - \mu_X)(Y - \mu_Y)] (since expectation is linear, as shown in part (a))
= Var(X) + Var(Y) + 2Cov(X, Y).
Since X and Y are independent, Cov(X, Y) = 0.
Therefore, as required:
Var(X + Y) = Var(X) + Var(Y).
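A matching numerical check for part (b), again only a sketch with arbitrarily chosen independent variables: the sample variance of X + Y is approximately the sum of the individual sample variances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent discrete random variables (distributions chosen arbitrarily).
X = rng.integers(0, 6, size=1_000_000)         # uniform on {0, ..., 5}, Var = 35/12
Y = rng.binomial(n=10, p=0.3, size=1_000_000)  # Binomial(10, 0.3), Var = 2.1

print(np.var(X + Y))          # approximately Var(X) + Var(Y)
print(np.var(X) + np.var(Y))  # ~ 35/12 + 2.1 = 5.02
```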