layer_conv_3d_transpose
Transposed 3D convolution layer (sometimes called Deconvolution).
Description
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input, while maintaining a connectivity pattern that is compatible with said convolution.
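For intuition, here is a minimal sketch (shapes and parameter values chosen only for illustration, using the functional API via layer_input() and keras_model()): a strided layer_conv_3d() halves each spatial dimension, and a layer_conv_3d_transpose() with the same kernel_size, strides, and padding maps that output shape back to the original input shape.

library(keras)

inputs <- layer_input(shape = c(16, 16, 16, 3))              # 16x16x16 volume, 3 channels
down <- inputs %>%
  layer_conv_3d(filters = 8, kernel_size = 3, strides = 2,
                padding = "same")                            # shape becomes (8, 8, 8, 8)
up <- down %>%
  layer_conv_3d_transpose(filters = 3, kernel_size = 3, strides = 2,
                          padding = "same")                  # restored to (16, 16, 16, 3)

model <- keras_model(inputs, up)
summary(model)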
Usage
layer_conv_3d_transpose(
  object,
  filters,
  kernel_size,
  strides = c(1, 1, 1),
  padding = "valid",
  output_padding = NULL,
  data_format = NULL,
  dilation_rate = c(1L, 1L, 1L),
  activation = NULL,
  use_bias = TRUE,
  kernel_initializer = "glorot_uniform",
  bias_initializer = "zeros",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  activity_regularizer = NULL,
  kernel_constraint = NULL,
  bias_constraint = NULL,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
Arguments
Arguments | Description |
---|---|
object | What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()). The return value depends on object. If object is: missing or NULL, the Layer instance is returned; a Sequential model, the model with an additional layer is returned; a Tensor, the output tensor from layer_instance(object) is returned. |
filters | Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). |
kernel_size | An integer or list of 3 integers, specifying the depth, height, and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions. |
strides | An integer or list of 3 integers, specifying the strides of the convolution along the depth, height, and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. |
padding | One of "valid" or "same" (case-insensitive). |
output_padding | An integer or list of 3 integers, specifying the amount of padding along the depth, height, and width of the output tensor. Can be a single integer to specify the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to NULL (default), the output shape is inferred. See the example after this table. |
data_format | A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, depth, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, depth, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last". |
dilation_rate | An integer or vector of 3 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. |
activation | Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x). |
use_bias | Boolean, whether the layer uses a bias vector. |
kernel_initializer | Initializer for the kernel weights matrix. |
bias_initializer | Initializer for the bias vector. |
kernel_regularizer | Regularizer function applied to the kernel weights matrix. |
bias_regularizer | Regularizer function applied to the bias vector. |
activity_regularizer | Regularizer function applied to the output of the layer (its “activation”). |
kernel_constraint | Constraint function applied to the kernel matrix. |
bias_constraint | Constraint function applied to the bias vector. |
input_shape | Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model. |
batch_input_shape | Shape, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors. batch_input_shape = list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors. |
batch_size | Fixed batch size for the layer. |
dtype | The data type expected by the input, as a string (float32, float64, int32, ...). |
name | An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn’t provided. |
trainable | Whether the layer weights will be updated during training. |
weights | Initial weights for the layer. |
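The interaction of strides, padding, and output_padding is visible in the inferred output shapes. A hedged sketch (shapes chosen arbitrarily; with padding = "valid", no dilation, and an explicit output_padding, the output length along each spatial axis works out to (input - 1) * strides + kernel_size + output_padding):

library(keras)

x <- layer_input(shape = c(5, 5, 5, 4))

# output_padding = NULL (default): the output shape is inferred
y1 <- x %>%
  layer_conv_3d_transpose(filters = 2, kernel_size = 3, strides = 2,
                          padding = "valid")
# y1 has shape (batch, 11, 11, 11, 2): (5 - 1) * 2 + 3 = 11

# output_padding = 1 (must be lower than the stride) adds one extra slice per axis
y2 <- x %>%
  layer_conv_3d_transpose(filters = 2, kernel_size = 3, strides = 2,
                          padding = "valid", output_padding = 1)
# y2 has shape (batch, 12, 12, 12, 2): (5 - 1) * 2 + 3 + 1 = 12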
Details
When using this layer as the first layer in a model, provide the keyword argument input_shape (list of integers, does not include the samples axis), e.g. input_shape = list(128, 128, 128, 3) for a 128x128x128 volume with 3 channels if data_format = "channels_last".
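A minimal sketch of this usage, assuming a sequential model in which this layer comes first (filter, kernel, and stride values are arbitrary):

library(keras)

model <- keras_model_sequential() %>%
  layer_conv_3d_transpose(
    filters = 16, kernel_size = c(3, 3, 3), strides = c(2, 2, 2),
    padding = "same", activation = "relu",
    input_shape = c(128, 128, 128, 3)    # 128x128x128 volume with 3 channels
  )
summary(model)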
See Also
Other convolutional layers: layer_conv_1d_transpose(), layer_conv_1d(), layer_conv_2d_transpose(), layer_conv_2d(), layer_conv_3d(), layer_conv_lstm_2d(), layer_cropping_1d(), layer_cropping_2d(), layer_cropping_3d(), layer_depthwise_conv_1d(), layer_depthwise_conv_2d(), layer_separable_conv_1d(), layer_separable_conv_2d(), layer_upsampling_1d(), layer_upsampling_2d(), layer_upsampling_3d(), layer_zero_padding_1d(), layer_zero_padding_2d(), layer_zero_padding_3d()