# Linear Neural Networks
Before we get into the details of deep neural networks, we need to cover the basics of neural network training. In this chapter, we will cover the entire training process, including defining simple neural network architectures, handling data, specifying a loss function, and training the model. To make things easier to grasp, we begin with the simplest concepts. Fortunately, classic statistical learning techniques such as linear and logistic regression can be cast as *shallow* neural networks. Starting from these classic algorithms, we will introduce you to the basics, providing the foundation for more complex techniques such as softmax regression (introduced at the end of this chapter) and multilayer perceptrons (introduced in the next chapter).
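To make the "linear regression as a shallow network" framing concrete, here is a minimal sketch in plain NumPy: a single fully connected layer trained by gradient descent on squared loss, the same recipe this chapter develops step by step. The variable names and synthetic data are illustrative, not taken from this chapter's code.

```python
import numpy as np

# Linear regression viewed as a one-layer network: y_hat = X @ w + b.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # 100 examples, 2 features
true_w, true_b = np.array([2.0, -3.4]), 4.2  # illustrative "true" parameters
y = X @ true_w + true_b                      # noiseless targets for simplicity

# Train with plain gradient descent on the mean squared error.
w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(200):
    err = (X @ w + b) - y                    # prediction error per example
    w -= lr * (X.T @ err) / len(y)           # gradient of MSE w.r.t. w
    b -= lr * err.mean()                     # gradient of MSE w.r.t. b
```

After training, `w` and `b` recover `true_w` and `true_b` closely; the later sections replace this loop with minibatches, a `DataLoader`, and Gluon's built-in layers.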
```eval_rst
.. toctree::
   :maxdepth: 2

   linear-regression
   linear-regression-scratch
   linear-regression-gluon
   softmax-regression
   fashion-mnist
   softmax-regression-scratch
   softmax-regression-gluon
```