Expected Value and Variance of a Binomial Distribution
(The Short Way)

Recalling that with regard to the binomial distribution, the probability of seeing $k$ successes in $n$ trials where the probability of success in each trial is $p$ (and $q = 1-p$) is given by $$P(X=k) = ({}_n C_k) p^k q^{n-k}$$ we can find the expected value and the variance of this probability distribution much more quickly if we appeal to the following properties:

$$E(X + Y) = E(X) + E(Y) \quad \quad \textrm{and} \quad \quad Var(X + Y) = Var(X) + Var(Y)$$ where the first holds for any random variables $X$ and $Y$, and the second holds when $X$ and $Y$ are independent.

For a random variable $X$ that follows a binomial distribution associated with $n$ trials, probability of success $p$, and probability of failure $q$, let $X_t$ be the random variable that gives the number of successes seen in a single trial (i.e., either $0$ or $1$).

The distribution for $X_t$ is simple in the extreme: $$\begin{array}{c|c|c} x & 0 & 1\\\hline P(X_t = x) & q & p \end{array}$$

We quickly see that

$$E(X_t) = (0)(q) + (1)(p) = p$$ and $$\begin{array}{rcl} Var(X_t) &=& \left[(0^2)(q) + (1^2)(p)\right] - p^2\\ &=& p - p^2\\ &=& p(1-p)\\ &=& pq \end{array}$$
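As a quick numerical sanity check, we can compute these two quantities directly from the table above. Here $p = 0.3$ is an arbitrary illustrative choice, not a value from the discussion:

```python
# Single-trial mean and variance, computed directly from the distribution
# table for X_t.  The value p = 0.3 is an arbitrary illustrative choice.
p = 0.3
q = 1 - p

E_Xt = 0 * q + 1 * p                        # should equal p
Var_Xt = (0**2 * q + 1**2 * p) - E_Xt**2    # should equal p - p^2 = pq

print(E_Xt, Var_Xt)
```

Both printed values agree with the closed forms $p$ and $pq$ derived above.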

Now, returning to the original random variable $X$ that follows a binomial distribution: since $X$ counts the total successes over $n$ independent trials, it is the sum of $n$ random variables each distributed like $X_t$. Consequently,

$$\begin{array}{rcl} E(X) &=& \underbrace{E(X_t) + E(X_t) + \cdots + E(X_t)}_{\textrm{$n$ terms}}\\ &=& n E(X_t)\\ &=& np \end{array}$$
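We can corroborate $E(X) = np$ against the definition of expected value, summing $k \cdot P(X=k)$ over all $k$ with the binomial probability formula from the opening paragraph. The values $n = 10$ and $p = 0.3$ below are arbitrary illustrative choices:

```python
from math import comb

# Check that sum_k k * C(n,k) p^k q^(n-k) equals n*p.
# n = 10 and p = 0.3 are arbitrary illustrative choices.
n, p = 10, 0.3
q = 1 - p

mean = sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))
print(mean, n * p)   # the two values agree (up to rounding)
```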

Finding the variance of $X$ is just as immediate, since the trials (and hence the summands) are independent:

$$\begin{array}{rcl} Var(X) &=& \underbrace{Var(X_t) + Var(X_t) + \cdots + Var(X_t)}_{\textrm{$n$ terms}}\\ &=& n Var(X_t)\\ &=& npq \end{array}$$

This, of course, immediately gives the standard deviation of $X$: $$SD(X) = \sqrt{Var(X)} = \sqrt{npq}$$
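A short Monte Carlo simulation also bears out both results: simulating many runs of $n$ independent trials, the sample mean and variance of the success counts should land near $np$ and $npq$. Again, $n = 10$ and $p = 0.3$ (so $np = 3$ and $npq = 2.1$) are arbitrary illustrative choices:

```python
import random

# Simulate many binomial(n, p) outcomes and compare the sample mean and
# variance with np and npq.  n, p, and the trial count are illustrative.
random.seed(0)  # fixed seed for reproducibility
n, p = 10, 0.3
runs = 200_000

samples = [sum(random.random() < p for _ in range(n)) for _ in range(runs)]
mean = sum(samples) / runs
var = sum((x - mean) ** 2 for x in samples) / runs

print(mean, var)   # close to np = 3.0 and npq = 2.1
```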

Somehow, the above justifications seem to fit the simplicity of the results best, don't you think?