When you multiply a whole number by a decimal less than one, you are taking a fraction of the whole number. The resulting number is smaller than the original.
This is true because, for example, 20 times 0.5 equals 10, which is smaller than 20. A decimal is essentially a fraction written in another form, so 0.5 is the same as one half.
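The claim above can be checked with a short Python sketch. The value 20 comes from the example; the other decimals are illustrative additions:

```python
# Multiplying a whole number by a decimal less than one
# always gives a smaller result.
whole = 20
for decimal in [0.5, 0.25, 0.9]:  # 0.25 and 0.9 are extra illustrative values
    product = whole * decimal
    print(f"{whole} * {decimal} = {product}")
    assert product < whole  # the product is always smaller than the original
```

Each product is smaller than 20, matching the explanation.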
A is false because a number stays the same only when it is multiplied by 1; multiplying by any decimal less than one must change it.
B is false because the result is not necessarily less than one; in the example above, 20 times 0.5 equaled 10, not a number less than one.
C is false because the resulting number cannot be larger than the original when it is multiplied by a decimal less than 1.