6.3. Parameter Initialization
Now that we know how to access the parameters, let’s look at how to initialize them properly. We discussed the need for proper initialization in Section 5.4. The deep learning framework provides default random initializations for its layers. However, we often want to initialize our weights according to various other protocols. The framework provides the most commonly used protocols and also allows us to create custom initializers.
By default, PyTorch initializes weight and bias matrices uniformly by drawing from a range that is computed according to the layer’s input dimension. PyTorch’s nn.init module provides a variety of preset initialization methods.
import torch
from torch import nn
net = nn.Sequential(nn.LazyLinear(8), nn.ReLU(), nn.LazyLinear(1))
X = torch.rand(size=(2, 4))
net(X).shape
/home/d2l-worker/miniconda3/envs/d2l-en-release-1/lib/python3.8/site-packages/torch/nn/modules/lazy.py:178: UserWarning: Lazy modules are a new feature under heavy development so changes to the API or functionality can happen at any moment.
warnings.warn('Lazy modules are a new feature under heavy development '
torch.Size([2, 1])
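For a linear layer, this default range is effectively \(U(-1/\sqrt{\text{fan\_in}}, 1/\sqrt{\text{fan\_in}})\), where fan_in is the number of input features. The short sanity check below is an addition (not part of the original notebook) and assumes the net and X defined above:

import math

# After the forward pass above has materialized the lazy layers, the default
# weights of net[0] should lie within +/- 1/sqrt(fan_in); here fan_in = 4.
bound = 1 / math.sqrt(net[0].weight.shape[1])
print(net[0].weight.data.abs().max() <= bound)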
By default, MXNet initializes weight parameters by randomly drawing from a uniform distribution \(U(-0.07, 0.07)\), clearing bias parameters to zero. MXNet’s init module provides a variety of preset initialization methods.
from mxnet import init, np, npx
from mxnet.gluon import nn
npx.set_np()
net = nn.Sequential()
net.add(nn.Dense(8, activation='relu'))
net.add(nn.Dense(1))
net.initialize() # Use the default initialization method
X = np.random.uniform(size=(2, 4))
net(X).shape
(2, 1)
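As a quick check (added here, not part of the original notebook), we can confirm that the default weights indeed lie inside the \(U(-0.07, 0.07)\) range and that the biases start at zero:

# The default Gluon initialization should keep every weight within (-0.07, 0.07)
# and every bias at zero.
print(np.abs(net[0].weight.data()).max() <= 0.07)
print(np.abs(net[0].bias.data()).sum() == 0)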
By default, Keras initializes weight matrices uniformly by drawing from a range that is computed according to the input and output dimension, and the bias parameters are all set to zero. TensorFlow provides a variety of initialization methods both in the root module and the keras.initializers module.
import tensorflow as tf
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(4, activation=tf.nn.relu),
tf.keras.layers.Dense(1),
])
X = tf.random.uniform((2, 4))
net(X).shape
TensorShape([2, 1])
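Concretely, Keras’s default for Dense layers is the Glorot (Xavier) uniform initializer, which draws from \(U(-a, a)\) with \(a = \sqrt{6/(\text{fan\_in} + \text{fan\_out})}\). The short check below is an addition (not part of the original notebook) and assumes the net and X defined above:

# net.layers[1] is the first Dense layer; verify the Glorot uniform bound and
# the all-zero bias.
fan_in, fan_out = net.layers[1].weights[0].shape
limit = (6 / (fan_in + fan_out)) ** 0.5
print(tf.reduce_max(tf.abs(net.layers[1].weights[0])) <= limit)
print(tf.reduce_all(net.layers[1].weights[1] == 0))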
6.3.1. Built-in Initialization
Let’s begin by calling on built-in initializers. The code below initializes all weight parameters as Gaussian random variables with standard deviation 0.01, with bias parameters cleared to zero.
def init_normal(module):
    # After the first forward pass, the lazy layers have become nn.Linear modules
    if type(module) == nn.Linear:
        nn.init.normal_(module.weight, mean=0, std=0.01)
        nn.init.zeros_(module.bias)

net.apply(init_normal)
net[0].weight.data[0], net[0].bias.data[0]
(tensor([-0.0053,  0.0113,  0.0061, -0.0128]), tensor(0.))
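As a hedged check (added here, not in the original notebook), the empirical statistics of the freshly initialized layer should roughly match the requested distribution:

# The empirical standard deviation over all 8 * 4 weights should be close to
# 0.01, and every bias should be exactly zero.
print(net[0].weight.data.std(), net[0].bias.data.abs().sum())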
# Here `force_reinit` ensures that parameters are freshly initialized even if
# they were already initialized previously
net.initialize(init=init.Normal(sigma=0.01), force_reinit=True)
net[0].weight.data()[0]
array([ 0.00354961, -0.00614133, 0.0107317 , 0.01830765])
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4, activation=tf.nn.relu,
kernel_initializer=tf.random_normal_initializer(mean=0, stddev=0.01),
bias_initializer=tf.zeros_initializer()),
tf.keras.layers.Dense(1)])
net(X)
net.weights[0], net.weights[1]
(<tf.Variable 'dense_2/kernel:0' shape=(4, 4) dtype=float32, numpy=
array([[ 2.8511812e-03, 2.9146119e-03, 1.4064329e-02, 3.3702441e-03],
[-3.4635805e-03, 1.3232786e-02, 4.2781038e-03, -1.1785918e-02],
[ 1.2000235e-02, -2.8830252e-04, 1.4154162e-02, 1.4051654e-02],
[-6.8590972e-03, -1.8047828e-02, -1.8943503e-03, 9.3096343e-05]],
dtype=float32)>,
<tf.Variable 'dense_2/bias:0' shape=(4,) dtype=float32, numpy=array([0., 0., 0., 0.], dtype=float32)>)
We can also initialize all the parameters to a given constant value (say, 1).
def init_constant(module):
    if type(module) == nn.Linear:
        nn.init.constant_(module.weight, 1)
        nn.init.zeros_(module.bias)

net.apply(init_constant)
net[0].weight.data[0], net[0].bias.data[0]
(tensor([1., 1., 1., 1.]), tensor(0.))
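Note that constant initialization defeats the symmetry breaking discussed in Section 5.4: with every weight equal to 1, all hidden units of the first layer compute exactly the same function. The brief illustration below is an addition (not part of the original notebook) and assumes the PyTorch net and X defined above:

# Every hidden unit has identical weights and bias, so all eight activations
# coincide on any input.
h = torch.relu(X @ net[0].weight.data.T + net[0].bias.data)
print((h == h[:, :1]).all())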
net.initialize(init=init.Constant(1), force_reinit=True)
net[0].weight.data()[0]
array([1., 1., 1., 1.])
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4, activation=tf.nn.relu,
kernel_initializer=tf.keras.initializers.Constant(1),
bias_initializer=tf.zeros_initializer()),
tf.keras.layers.Dense(1),
])
net(X)
net.weights[0], net.weights[1]
(<tf.Variable 'dense_4/kernel:0' shape=(4, 4) dtype=float32, numpy=
array([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]], dtype=float32)>,
<tf.Variable 'dense_4/bias:0' shape=(4,) dtype=float32, numpy=array([0., 0., 0., 0.], dtype=float32)>)
We can also apply different initializers for certain blocks. For example, below we initialize the first layer with the Xavier initializer and initialize the second layer to a constant value of 42.
def init_xavier(module):
    if type(module) == nn.Linear:
        nn.init.xavier_uniform_(module.weight)

def init_42(module):
    if type(module) == nn.Linear:
        nn.init.constant_(module.weight, 42)

net[0].apply(init_xavier)
net[2].apply(init_42)
print(net[0].weight.data[0])
print(net[2].weight.data)
tensor([ 0.5406, -0.2521,  0.1465,  0.6349])
tensor([[42., 42., 42., 42., 42., 42., 42., 42.]])
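Xavier (Glorot) uniform initialization draws from \(U(-a, a)\) with \(a = \sqrt{6/(\text{fan\_in} + \text{fan\_out})}\); for the first layer’s \((8, 4)\) weight matrix this gives \(a = \sqrt{6/12} \approx 0.71\). A quick verification (added here, not part of the original notebook):

# Check the Xavier uniform bound sqrt(6 / (fan_in + fan_out)) for net[0].
fan_out, fan_in = net[0].weight.shape
print(net[0].weight.data.abs().max() <= (6 / (fan_in + fan_out)) ** 0.5)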
net[0].weight.initialize(init=init.Xavier(), force_reinit=True)
net[1].initialize(init=init.Constant(42), force_reinit=True)
print(net[0].weight.data()[0])
print(net[1].weight.data())
[-0.26102373 0.15249556 -0.19274211 -0.24742058]
[[42. 42. 42. 42. 42. 42. 42. 42.]]
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4,
activation=tf.nn.relu,
kernel_initializer=tf.keras.initializers.GlorotUniform()),
tf.keras.layers.Dense(
1, kernel_initializer=tf.keras.initializers.Constant(42)),
])
net(X)
print(net.layers[1].weights[0])
print(net.layers[2].weights[0])
<tf.Variable 'dense_6/kernel:0' shape=(4, 4) dtype=float32, numpy=
array([[-0.2091589 , 0.51474994, 0.17790681, 0.10261679],
[ 0.8646433 , -0.12674725, 0.16356164, 0.39597303],
[-0.1087479 , 0.81650525, 0.09159321, -0.14826691],
[ 0.29513222, 0.5484083 , -0.23086452, 0.4310636 ]],
dtype=float32)>
<tf.Variable 'dense_7/kernel:0' shape=(4, 1) dtype=float32, numpy=
array([[42.],
[42.],
[42.],
[42.]], dtype=float32)>
6.3.1.1. Custom Initialization
Sometimes, the initialization methods we need are not provided by the deep learning framework. In the example below, we define an initializer for any weight parameter \(w\) using the following strange distribution:

\[
w \sim \begin{cases}
    U(5, 10) & \text{ with probability } \frac{1}{4} \\
    0 & \text{ with probability } \frac{1}{2} \\
    U(-10, -5) & \text{ with probability } \frac{1}{4}
\end{cases}
\]
Again, we implement a my_init function to apply to net.
def my_init(module):
    if type(module) == nn.Linear:
        print("Init", *[(name, param.shape)
                        for name, param in module.named_parameters()][0])
        nn.init.uniform_(module.weight, -10, 10)
        module.weight.data *= module.weight.data.abs() >= 5

net.apply(my_init)
net[0].weight[:2]
Init weight torch.Size([8, 4])
Init weight torch.Size([1, 8])
tensor([[ 0.0000, -7.6364, -0.0000, -6.1206],
        [ 9.3516, -0.0000,  5.1208, -8.4003]], grad_fn=<SliceBackward0>)
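Since each weight is zeroed out with probability \(1/2\) and otherwise keeps a magnitude between 5 and 10, we can sanity-check the result. This check is an addition (not part of the original notebook) and assumes the PyTorch net above:

# Roughly half of the entries should be exactly zero; every remaining entry
# must have magnitude in [5, 10).
w = net[0].weight.data
print((w == 0).float().mean())
print(((w == 0) | (w.abs() >= 5)).all())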
Here we define a subclass of the Initializer class. Usually, we only need to implement the _init_weight function which takes a tensor argument (data) and assigns to it the desired initialized values.
class MyInit(init.Initializer):
    def _init_weight(self, name, data):
        print('Init', name, data.shape)
        data[:] = np.random.uniform(-10, 10, data.shape)
        data *= np.abs(data) >= 5
net.initialize(MyInit(), force_reinit=True)
net[0].weight.data()[:2]
Init dense0_weight (8, 4)
Init dense1_weight (1, 8)
array([[-6.0683527, 8.991421 , -0. , 0. ],
[ 6.4198647, -9.728567 , -8.057975 , 0. ]])
Here we define a subclass of Initializer and implement the __call__ function that returns a desired tensor given the shape and data type.
class MyInit(tf.keras.initializers.Initializer):
    def __call__(self, shape, dtype=None):
        data = tf.random.uniform(shape, -10, 10, dtype=dtype)
        factor = (tf.abs(data) >= 5)
        factor = tf.cast(factor, tf.float32)
        return data * factor
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(
4,
activation=tf.nn.relu,
kernel_initializer=MyInit()),
tf.keras.layers.Dense(1),
])
net(X)
print(net.layers[1].weights[0])
<tf.Variable 'dense_8/kernel:0' shape=(4, 4) dtype=float32, numpy=
array([[-7.947881 , 0. , -0. , 8.292942 ],
[ 6.3311195, 9.636406 , 0. , -0. ],
[-9.0933275, 0. , -5.166726 , 7.3095818],
[-0. , 6.38093 , -6.967051 , 6.3882523]], dtype=float32)>
Note that we always have the option of setting parameters directly.
net[0].weight.data[:] += 1
net[0].weight.data[0, 0] = 42
net[0].weight.data[0]
tensor([42.0000, -6.6364,  1.0000, -5.1206])
net[0].weight.data()[:] += 1
net[0].weight.data()[0, 0] = 42
net[0].weight.data()[0]
array([42. , 9.991421, 1. , 1. ])
net.layers[1].weights[0][:].assign(net.layers[1].weights[0] + 1)
net.layers[1].weights[0][0, 0].assign(42)
net.layers[1].weights[0]
<tf.Variable 'dense_8/kernel:0' shape=(4, 4) dtype=float32, numpy=
array([[42. , 1. , 1. , 9.292942 ],
[ 7.3311195, 10.636406 , 1. , 1. ],
[-8.0933275, 1. , -4.166726 , 8.309582 ],
[ 1. , 7.38093 , -5.967051 , 7.3882523]], dtype=float32)>
6.3.2. Summary
We can initialize parameters using built-in and custom initializers.
6.3.3. Exercises
Look up the online documentation for more built-in initializers.