The state equation for a linear time-invariant system:
$$ \dot{x}(t) = A x(t) + B u(t) $$
can be solved for $x(t)$. We collect all terms in $x$:
$$ \dot{x}(t) - A x(t) = B u(t) $$
Multiplying on the left by the matrix exponential $e^{-At}$:
$$ e^{-At} \dot{x}(t) - e^{-At} A x(t) = e^{-At} B u(t) $$
By the product rule $\frac{d}{dt}(fg) = \dot{f} g + f \dot{g}$, and since $\frac{d}{dt} e^{-At} = -A e^{-At}$, the left-hand side is exactly the derivative of $e^{-At} x(t)$:
$$ \frac{d}{dt} \left( e^{-At} x(t) \right) = e^{-At} B u(t) $$
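Integrating both sides from $0$ to $t$ and then multiplying on the left by $e^{At}$ yields the standard solution of the state equation:
$$ e^{-At} x(t) - x(0) = \int_0^t e^{-A\tau} B u(\tau) \, d\tau $$
$$ x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)} B u(\tau) \, d\tau $$
The first term is the response to the initial state; the second is the response to the input.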

In modern control approaches, systems are analyzed in time domain as a set of differential equations. Higher order differential equations are decomposed into sets of first order equations of state variables that represent the system internally. This produces three sets of variables:
- Input variables: stimuli given to the system. Denoted by $u$.
- Output variables: the result of the current system state and inputs. Denoted by $y$.
- State variables: the internal state of the system, which may be obscured in the output variables. Denoted by $x$.
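The state equation $\dot{x}(t) = A x(t) + B u(t)$ can be simulated directly once $A$, $B$, and an input $u(t)$ are chosen. A minimal sketch using forward Euler integration (the `simulate` helper and the example system below are illustrative choices, not from the text):

```python
# Minimal sketch: integrate x'(t) = A x(t) + B u(t) with forward Euler.
# A is an n x n list of lists, B an n-vector, u a function of time.

def simulate(A, B, u, x0, dt=1e-4, t_end=1.0):
    """Return the state x(t_end), stepping x_{k+1} = x_k + dt * (A x_k + B u_k)."""
    x = list(x0)
    n = len(x0)
    steps = int(round(t_end / dt))
    for k in range(steps):
        uk = u(k * dt)
        x = [x[i] + dt * (sum(A[i][j] * x[j] for j in range(n)) + B[i] * uk)
             for i in range(n)]
    return x

# Illustrative example: the scalar system x' = -x with no input.
# The exact solution is x(t) = x(0) * e^{-t}, so x(1) should be near e^{-1}.
A = [[-1.0]]
B = [0.0]
x_final = simulate(A, B, u=lambda t: 0.0, x0=[1.0])
```

Forward Euler is the simplest choice here; a production simulation would use a higher-order or matrix-exponential-based integrator.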

A transfer function relates the output of a system to its input when the system is represented in the Laplace domain, under the assumption that all initial conditions are zero. If $Y(s)$ is the output of a system and $X(s)$ is the input, then the transfer function is:
$$ H(s) = \frac{Y(s)}{X(s)} $$
Example: a car as a system. The input is the acceleration; the output is the total distance travelled.
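For this example, distance is the double time-integral of acceleration. Integration in the time domain corresponds to division by $s$ in the Laplace domain (with zero initial conditions), so:
$$ Y(s) = \frac{X(s)}{s^2} \quad \Rightarrow \quad H(s) = \frac{Y(s)}{X(s)} = \frac{1}{s^2} $$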

Classical control methods simplify the handling of a complex system by representing it in a different domain: the equations governing the system dynamics are transformed into a different set of variables. For a function $f(t)$ in the $t$ domain, an often-used transformation has the form:
$$ \mathcal{T}\{f(t)\} = F(s) = \int_{\text{domain}} f(t) \, g(s, t) \, dt $$
Mathematically, the integral removes the $t$ variable and only leaves $s$, thus converting from the $t$ domain to the $s$ domain.
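For the Laplace transform used above, the kernel is $g(s, t) = e^{-st}$ and the domain is $[0, \infty)$. A minimal numerical sketch (the helper name, the choice of $f(t) = e^{-at}$, and all parameters are illustrative) checks the transform against its known closed form $F(s) = 1/(s + a)$:

```python
import math

def laplace_numeric(f, s, t_max=50.0, n=100000):
    """Approximate the Laplace integral of f, truncated to [0, t_max],
    using the midpoint rule: sum of f(t) * e^{-s t} * dt."""
    dt = t_max / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        total += f(t) * math.exp(-s * t) * dt
    return total

# f(t) = e^{-a t} has the closed-form transform F(s) = 1/(s + a).
a = 2.0
F = laplace_numeric(lambda t: math.exp(-a * t), s=1.0)
# F should be close to 1/(1 + 2) = 1/3.
```

Truncating the integral at a finite `t_max` is safe here because the integrand decays exponentially; for slowly decaying functions a larger `t_max` would be needed.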
