Module keras.layers.normalization.batch_normalization_v1
Batch Normalization V1 layer.
Source code
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Batch Normalization V1 layer."""
# pylint: disable=g-classes-have-attributes
from keras.layers.normalization import batch_normalization
from tensorflow.python.util.tf_export import keras_export
# pylint: disable=missing-docstring
@keras_export(v1=['keras.layers.BatchNormalization'])
class BatchNormalization(batch_normalization.BatchNormalizationBase):
_USE_V2_BEHAVIOR = False
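Given the keras_export(v1=['keras.layers.BatchNormalization']) decorator above, this class should back the layer reachable through the TF 1.x compat namespace. A minimal access sketch (the compat path is inferred from the decorator; assumes TensorFlow 2.x is installed):

import tensorflow as tf

# The v1-exported class: reachable through the compat namespace, and opting
# out of V2 behavior via _USE_V2_BEHAVIOR = False.
layer_v1 = tf.compat.v1.keras.layers.BatchNormalization()

# The default TF 2.x export is the V2-behavior layer.
layer_v2 = tf.keras.layers.BatchNormalization()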
Classes
class BatchNormalization (axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None, renorm=False, renorm_clipping=None, renorm_momentum=0.99, fused=None, trainable=True, virtual_batch_size=None, adjustment=None, name=None, **kwargs)
Layer that normalizes its inputs.

Batch normalization applies a transformation that maintains the mean output close to 0 and the output standard deviation close to 1.

Importantly, batch normalization works differently during training and during inference.

During training (i.e. when using fit() or when calling the layer/model with the argument training=True), the layer normalizes its output using the mean and standard deviation of the current batch of inputs. That is to say, for each channel being normalized, the layer returns gamma * (batch - mean(batch)) / sqrt(var(batch) + epsilon) + beta, where:

- epsilon is a small constant (configurable as part of the constructor arguments).
- gamma is a learned scaling factor (initialized as 1), which can be disabled by passing scale=False to the constructor.
- beta is a learned offset factor (initialized as 0), which can be disabled by passing center=False to the constructor.
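To make the training-time formula concrete, here is a minimal NumPy sketch of the per-channel computation (variable names and sample values are illustrative, not part of the Keras API):

import numpy as np

# Hypothetical batch of 4 samples with 3 features (channels on the last axis).
batch = np.random.randn(4, 3).astype("float32")

epsilon = 1e-3      # matches the constructor default epsilon=0.001
gamma = np.ones(3)  # learned scale, initialized as 1
beta = np.zeros(3)  # learned offset, initialized as 0

# Per-channel statistics of the current batch (axis 0 is the batch axis).
mean = batch.mean(axis=0)
var = batch.var(axis=0)

# The training-time output described above.
out = gamma * (batch - mean) / np.sqrt(var + epsilon) + beta

# Each channel of the output now has (near-)zero mean and roughly unit variance.
print(out.mean(axis=0), out.var(axis=0))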
During inference (i.e. when using evaluate() or predict(), or when calling the layer/model with the argument training=False, which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta.

self.moving_mean and self.moving_var are non-trainable variables that are updated each time the layer is called in training mode, as such:

- moving_mean = moving_mean * momentum + mean(batch) * (1 - momentum)
- moving_var = moving_var * momentum + var(batch) * (1 - momentum)

As such, the layer will only normalize its inputs during inference after having been trained on data that has similar statistics as the inference data.
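The difference between the two modes can be observed directly. A short eager-mode sketch (using the V2-exported layer for brevity; the momentum value and data are illustrative):

import tensorflow as tf

layer = tf.keras.layers.BatchNormalization(momentum=0.9)
data = tf.random.normal((8, 3)) * 5.0 + 2.0  # deliberately shifted and scaled

# Training mode: normalizes with the batch statistics and nudges the moving
# statistics toward them, per the update rules above.
_ = layer(data, training=True)
print(layer.moving_mean.numpy())  # has moved from 0 toward mean(batch)

# Inference mode (the default): normalizes with the moving statistics instead.
out = layer(data, training=False)
print(out.numpy().mean())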
Args

axis
- Integer or a list of integers, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization (as in the construction sketch after this list).

momentum
- Momentum for the moving average.

epsilon
- Small float added to variance to avoid dividing by zero.

center
- If True, add offset of beta to normalized tensor. If False, beta is ignored.

scale
- If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.

beta_initializer
- Initializer for the beta weight.

gamma_initializer
- Initializer for the gamma weight.

moving_mean_initializer
- Initializer for the moving mean.

moving_variance_initializer
- Initializer for the moving variance.

beta_regularizer
- Optional regularizer for the beta weight.

gamma_regularizer
- Optional regularizer for the gamma weight.

beta_constraint
- Optional constraint for the beta weight.

gamma_constraint
- Optional constraint for the gamma weight.

renorm
- Whether to use Batch Renormalization. This adds extra variables during training. The inference is the same for either value of this parameter.

renorm_clipping
- A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.

renorm_momentum
- Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.

fused
- If True, use a faster, fused implementation, or raise a ValueError if the fused implementation cannot be used. If None, use the faster implementation if possible. If False, do not use the fused implementation. Note that in TensorFlow 1.x, the meaning of fused=True is different: if False, the layer uses the system-recommended implementation.

trainable
- Boolean, if True the variables will be marked as trainable.

virtual_batch_size
- An int. By default, virtual_batch_size is None, which means batch normalization is performed across the whole batch. When virtual_batch_size is not None, instead perform "Ghost Batch Normalization", which creates virtual sub-batches which are each normalized separately (with shared gamma, beta, and moving statistics). Must divide the actual batch size during execution.

adjustment
- A function taking the Tensor containing the (dynamic) shape of the input tensor and returning a pair (scale, bias) to apply to the normalized values (before gamma and beta), only during training. For example, if axis=-1, adjustment = lambda shape: (tf.random.uniform(shape[-1:], 0.93, 1.07), tf.random.uniform(shape[-1:], -0.1, 0.1)) will scale the normalized value by up to 7% up or down, then shift the result by up to 0.1 (with independent scaling and bias for each feature but shared across all examples), and finally apply gamma and/or beta. If None, no adjustment is applied. Cannot be specified if virtual_batch_size is specified.
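A brief construction sketch tying several of these arguments together (all argument values are illustrative, not recommendations):

import tensorflow as tf

# After a channels-first Conv2D the features live on axis 1, so normalize
# that axis instead of the default axis=-1. (Building the model is enough to
# illustrate the wiring; executing channels-first convolutions may require a
# GPU.)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(
        16, 3, data_format="channels_first", input_shape=(3, 32, 32)),
    tf.keras.layers.BatchNormalization(axis=1, momentum=0.99, epsilon=1e-3),
])

# Ghost Batch Normalization: each virtual sub-batch of 8 examples is
# normalized separately; 8 must divide the actual batch size at run time.
ghost_bn = tf.keras.layers.BatchNormalization(virtual_batch_size=8)

# The adjustment example from above: a random per-feature scale and bias,
# applied before gamma and beta, during training only.
adjusted_bn = tf.keras.layers.BatchNormalization(
    adjustment=lambda shape: (
        tf.random.uniform(shape[-1:], 0.93, 1.07),
        tf.random.uniform(shape[-1:], -0.1, 0.1)))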
Call arguments:

inputs
- Input tensor (of any rank).

training
- Python boolean indicating whether the layer should behave in training mode or in inference mode.
  - training=True: The layer will normalize its inputs using the mean and variance of the current batch of inputs.
  - training=False: The layer will normalize its inputs using the mean and variance of its moving statistics, learned during training.

Input shape:
Arbitrary. Use the keyword argument input_shape (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape:
Same shape as input.
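To illustrate the shape contract, a small sketch (shapes chosen arbitrarily; a fresh layer is built per rank because a layer's input spec is fixed on first call):

import tensorflow as tf

for shape in [(2, 5), (2, 8, 5), (2, 4, 4, 5)]:
    bn = tf.keras.layers.BatchNormalization()  # fresh layer for each rank
    x = tf.random.normal(shape)
    y = bn(x, training=False)  # inference mode is the default
    assert y.shape == x.shape  # output shape always matches input shape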
Reference
- Ioffe and Szegedy, 2015 (https://arxiv.org/abs/1502.03167).
Source code
class BatchNormalization(batch_normalization.BatchNormalizationBase):
    _USE_V2_BEHAVIOR = False
Ancestors
- BatchNormalizationBase
- Layer
- tensorflow.python.module.module.Module
- tensorflow.python.training.tracking.tracking.AutoTrackable
- tensorflow.python.training.tracking.base.Trackable
- LayerVersionSelector
Inherited members
BatchNormalizationBase:
activity_regularizer
add_loss
add_metric
add_update
add_variable
add_weight
apply
build
call
compute_dtype
compute_mask
compute_output_shape
compute_output_signature
count_params
dtype
dtype_policy
dynamic
finalize_state
from_config
get_config
get_input_at
get_input_mask_at
get_input_shape_at
get_losses_for
get_output_at
get_output_mask_at
get_output_shape_at
get_updates_for
get_weights
inbound_nodes
input
input_mask
input_shape
input_spec
losses
metrics
name
non_trainable_variables
non_trainable_weights
outbound_nodes
output
output_mask
output_shape
set_weights
supports_masking
trainable_variables
trainable_weights
variable_dtype
variables
weights