recbole.model.layers
Common layers in recommender systems.
class recbole.model.layers.AttLayer(in_dim, att_dim)
Bases: torch.nn.modules.module.Module
Calculate the attention signal (weight) according to the input tensor.
- Parameters
infeatures (torch.FloatTensor) – A 3D input tensor with shape of [batch_size, M, embed_dim].
- Returns
Attention weight of the input, with shape of [batch_size, M].
- Return type
torch.FloatTensor
forward(infeatures)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
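Example (illustrative sketch, not part of the original docstring; it assumes in_dim equals the embedding dimension of the input, as the documented shapes imply):
>>> import torch
>>> from recbole.model.layers import AttLayer
>>> att = AttLayer(in_dim=64, att_dim=32)
>>> infeatures = torch.randn(128, 10, 64)   # [batch_size, M, embed_dim]
>>> att(infeatures).size()                  # attention weight per position
torch.Size([128, 10])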
class recbole.model.layers.BaseFactorizationMachine(reduce_sum=True)
Bases: torch.nn.modules.module.Module
Calculate the FM result over the embeddings.
- Parameters
reduce_sum – bool, whether to sum the result, default is True.
- Input:
input_x: tensor, a 3D tensor with shape (batch_size, field_size, embed_dim).
- Output:
output: tensor, a 2D tensor with shape (batch_size, 1) if reduce_sum is True, or (batch_size, embed_dim) otherwise.
forward(input_x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
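Example (illustrative sketch, not part of the original docstring; the output shape depends on reduce_sum as documented above):
>>> import torch
>>> from recbole.model.layers import BaseFactorizationMachine
>>> x = torch.randn(128, 10, 16)            # (batch_size, field_size, embed_dim)
>>> BaseFactorizationMachine(reduce_sum=True)(x).size()
torch.Size([128, 1])
>>> BaseFactorizationMachine(reduce_sum=False)(x).size()
torch.Size([128, 16])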
class recbole.model.layers.BiGNNLayer(in_dim, out_dim)
Bases: torch.nn.modules.module.Module
Propagate a layer of the bi-interaction GNN:
\[output = (L+I)EW_1 + LE \otimes EW_2\]
forward(lap_matrix, eye_matrix, features)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
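A dense-tensor sketch of the propagation rule above (illustrative only; in the actual layer lap_matrix is typically a sparse normalized Laplacian and W_1, W_2 are learned linear layers):
>>> import torch
>>> n, d = 5, 8
>>> E = torch.randn(n, d)                         # node embeddings
>>> L = torch.randn(n, n)                         # stand-in for the Laplacian matrix
>>> I = torch.eye(n)
>>> W1, W2 = torch.randn(d, d), torch.randn(d, d)
>>> out = (L + I) @ E @ W1 + ((L @ E) * E) @ W2   # element-wise product for \otimes
>>> out.size()
torch.Size([5, 8])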
class recbole.model.layers.CNNLayers(channels, kernels, strides, activation='relu', init_method=None)
Bases: torch.nn.modules.module.Module
- Parameters
channels (list) – a list containing the channels of each layer in the CNN layers
kernels (list) – a list containing the kernel sizes of each layer in the CNN layers
strides (list) – a list containing the strides of each layer in the CNN layers
activation (str) – activation function after each layer in the CNN layers. Default: 'relu'. Candidates: 'sigmoid', 'tanh', 'relu', 'leakyrelu', 'none'
- Shape:
Input: \((N, C_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]
Examples:
>>> m = CNNLayers([1, 32, 32], [2, 2], [2, 2], 'relu')
>>> input = torch.randn(128, 1, 64, 64)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 32, 16, 16])
forward(input_feature)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
class recbole.model.layers.ContextSeqEmbAbstractLayer
Bases: torch.nn.modules.module.Module
For Deep Interest Network and feature-rich sequential recommender systems, return the feature embedding matrices.
embed_float_fields(float_fields, type, embed=True)
Get the embedding of float fields. In the following three functions ("embed_float_fields", "embed_token_fields", "embed_token_seq_fields"), when the type is user, [batch_size, max_item_length] should be recognised as [batch_size].
- Parameters
float_fields (torch.Tensor) – [batch_size, max_item_length, num_float_field]
type (str) – user or item
embed (bool) – embed or not
- Returns
float fields embedding. [batch_size, max_item_length, num_float_field, embed_dim]
- Return type
torch.Tensor
embed_input_fields(user_idx, item_idx)
Get the embedding of user_idx and item_idx.
- Parameters
user_idx (torch.Tensor) – interaction[‘user_id’]
item_idx (torch.Tensor) – interaction[‘item_id_list’]
- Returns
embedding of user feature and item feature
- Return type
dict
embed_token_fields(token_fields, type)
Get the embedding of token fields.
- Parameters
token_fields (torch.Tensor) – input, [batch_size, max_item_length, num_token_field]
type (str) – user or item
- Returns
token fields embedding, [batch_size, max_item_length, num_token_field, embed_dim]
- Return type
torch.Tensor
embed_token_seq_fields(token_seq_fields, type)
Get the embedding of token_seq fields.
- Parameters
token_seq_fields (torch.Tensor) – input, [batch_size, max_item_length, seq_len]
type (str) – user or item
mode (str) – mean/max/sum
- Returns
result [batch_size, max_item_length, num_token_seq_field, embed_dim]
- Return type
torch.Tensor
forward(user_idx, item_idx)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
class recbole.model.layers.ContextSeqEmbLayer(dataset, embedding_size, pooling_mode, device)
Bases: recbole.model.layers.ContextSeqEmbAbstractLayer
For Deep Interest Network, return all features (including user features and item features) embedding matrices.
class recbole.model.layers.Dice(emb_size)
Bases: torch.nn.modules.module.Module
Dice activation function:
\[f(s)=p(s) \cdot s+(1-p(s)) \cdot \alpha s\]
\[p(s)=\frac{1}{1 + e^{-\frac{s-E[s]}{\sqrt{Var[s] + \epsilon}}}}\]
forward(score)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
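A minimal sketch of the formula above with a fixed alpha (illustrative only; the layer itself learns alpha per dimension and computes E[s] and Var[s] internally):
>>> import torch
>>> def dice(s, alpha=0.1, eps=1e-8):
...     p = torch.sigmoid((s - s.mean()) / torch.sqrt(s.var() + eps))
...     return p * s + (1 - p) * alpha * s
>>> dice(torch.randn(4, 8)).size()
torch.Size([4, 8])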
class recbole.model.layers.FMEmbedding(field_dims, offsets, embed_dim)
Bases: torch.nn.modules.module.Module
Embedding for token fields.
- Parameters
field_dims – list, the number of tokens in each token field
offsets – list, the dimension offset of each token field
embed_dim – int, the dimension of output embedding vectors
- Input:
input_x: tensor, a 2D tensor with shape (batch_size, field_size).
- Returns
tensor, a 3D tensor with shape (batch_size, field_size, embed_dim).
- Return type
output
forward(input_x)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
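Example (illustrative sketch, not part of the original docstring; it assumes offsets are the cumulative token counts, so that all fields can share one embedding table):
>>> import numpy as np
>>> import torch
>>> from recbole.model.layers import FMEmbedding
>>> field_dims = [10, 20, 30]                            # tokens per field
>>> offsets = np.array((0, *np.cumsum(field_dims)[:-1]))
>>> emb = FMEmbedding(field_dims, offsets, embed_dim=16)
>>> input_x = torch.randint(0, 10, (128, 3))             # (batch_size, field_size)
>>> emb(input_x).size()
torch.Size([128, 3, 16])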
class recbole.model.layers.FMFirstOrderLinear(config, dataset, output_dim=1)
Bases: torch.nn.modules.module.Module
Calculate the first order score of the input features. This class is a member of ContextRecommender, so you can call it easily when you inherit from ContextRecommender.
embed_float_fields(float_fields, embed=True)
Calculate the first order score of float feature columns.
- Parameters
float_fields (torch.FloatTensor) – The input tensor. shape of [batch_size, num_float_field]
- Returns
The first order score of float feature columns
- Return type
torch.FloatTensor
embed_token_fields(token_fields)
Calculate the first order score of token feature columns.
- Parameters
token_fields (torch.LongTensor) – The input tensor. shape of [batch_size, num_token_field]
- Returns
The first order score of token feature columns
- Return type
torch.FloatTensor
embed_token_seq_fields(token_seq_fields)
Calculate the first order score of token sequence feature columns.
- Parameters
token_seq_fields (torch.LongTensor) – The input tensor. shape of [batch_size, seq_len]
- Returns
The first order score of token sequence feature columns
- Return type
torch.FloatTensor
forward(interaction)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
class recbole.model.layers.FeatureSeqEmbLayer(dataset, embedding_size, selected_features, pooling_mode, device)
Bases: recbole.model.layers.ContextSeqEmbAbstractLayer
For feature-rich sequential recommenders, return item features embedding matrices according to selected features.
class recbole.model.layers.FeedForward(hidden_size, inner_size, hidden_dropout_prob, hidden_act, layer_norm_eps)
Bases: torch.nn.modules.module.Module
Point-wise feed-forward layer is implemented by two dense layers.
- Parameters
input_tensor (torch.Tensor) – the input of the point-wise feed-forward layer
- Returns
the output of the point-wise feed-forward layer
- Return type
hidden_states (torch.Tensor)
forward(input_tensor)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
gelu(x)
Implementation of the gelu activation function.
For information: OpenAI GPT’s gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
Also see https://arxiv.org/abs/1606.08415
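For reference, the OpenAI GPT variant quoted above written as a standalone function (illustrative sketch, not part of the original docstring):
>>> import math
>>> import torch
>>> def gelu_gpt(x):
...     # tanh approximation of gelu, as quoted above
...     return 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
>>> gelu_gpt(torch.linspace(-3, 3, 7)).size()
torch.Size([7])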
class recbole.model.layers.MLPLayers(layers, dropout=0, activation='relu', bn=False, init_method=None)
Bases: torch.nn.modules.module.Module
- Parameters
layers (list) – a list containing the size of each layer in the MLP layers
dropout (float) – probability of an element to be zeroed. Default: 0
activation (str) – activation function after each layer in the MLP layers. Default: 'relu'. Candidates: 'sigmoid', 'tanh', 'relu', 'leakyrelu', 'none'
Shape:
Input: (\(N\), *, \(H_{in}\)) where * means any number of additional dimensions, and \(H_{in}\) must equal the first value in layers
Output: (\(N\), *, \(H_{out}\)) where \(H_{out}\) equals the last value in layers
Examples:
>>> m = MLPLayers([64, 32, 16], 0.2, 'relu')
>>> input = torch.randn(128, 64)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 16])
forward(input_feature)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
class recbole.model.layers.MultiHeadAttention(n_heads, hidden_size, hidden_dropout_prob, attn_dropout_prob, layer_norm_eps)
Bases: torch.nn.modules.module.Module
Multi-head self-attention layers, with an attention score dropout layer introduced.
- Parameters
input_tensor (torch.Tensor) – the input of the multi-head self-attention layer
attention_mask (torch.Tensor) – the attention mask for input tensor
- Returns
the output of the multi-head self-attention layer
- Return type
hidden_states (torch.Tensor)
forward(input_tensor, attention_mask)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
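Example (illustrative sketch, not part of the original docstring; the attention mask is assumed to be additive, i.e. 0 for positions to keep and a large negative value for masked positions, broadcastable to the attention scores):
>>> import torch
>>> from recbole.model.layers import MultiHeadAttention
>>> mha = MultiHeadAttention(n_heads=2, hidden_size=64, hidden_dropout_prob=0.2,
...                          attn_dropout_prob=0.2, layer_norm_eps=1e-12)
>>> input_tensor = torch.randn(128, 50, 64)       # [batch, seq_len, hidden_size]
>>> attention_mask = torch.zeros(128, 1, 1, 50)   # additive mask, nothing masked
>>> mha(input_tensor, attention_mask).size()
torch.Size([128, 50, 64])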
class recbole.model.layers.SequenceAttLayer(mask_mat, att_hidden_size=(80, 40), activation='sigmoid', softmax_stag=False, return_seq_weight=True)
Bases: torch.nn.modules.module.Module
Attention Layer. Get the representation of each user in the batch.
- Parameters
queries (torch.Tensor) – candidate ads, [B, H], H means embedding_size * feat_num
keys (torch.Tensor) – user_hist, [B, T, H]
keys_length (torch.Tensor) – mask, [B]
- Returns
result
- Return type
torch.Tensor
forward(queries, keys, keys_length)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
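Example (illustrative sketch, not part of the original docstring; it assumes mask_mat is an arange over the maximum history length compared against keys_length to mask padded steps, and that the first entry of att_hidden_size equals 4 * H, the width of the concatenated attention features, as in the DIN usage):
>>> import torch
>>> from recbole.model.layers import SequenceAttLayer
>>> B, T, H = 128, 50, 64
>>> mask_mat = torch.arange(T).view(1, -1)
>>> att = SequenceAttLayer(mask_mat, att_hidden_size=(4 * H, 80, 40))
>>> queries = torch.randn(B, H)                     # candidate ads, [B, H]
>>> keys = torch.randn(B, T, H)                     # user history, [B, T, H]
>>> keys_length = torch.randint(1, T + 1, (B,))     # valid history lengths
>>> output = att(queries, keys, keys_length)        # attention result over the history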
class recbole.model.layers.TransformerEncoder(n_layers=2, n_heads=2, hidden_size=64, inner_size=256, hidden_dropout_prob=0.5, attn_dropout_prob=0.5, hidden_act='gelu', layer_norm_eps=1e-12)
Bases: torch.nn.modules.module.Module
One TransformerEncoder consists of several TransformerLayers.
n_layers(num): num of transformer layers in transformer encoder. Default: 2
n_heads(num): num of attention heads for multi-head attention layer. Default: 2
hidden_size(num): the input and output hidden size. Default: 64
inner_size(num): the dimensionality in feed-forward layer. Default: 256
hidden_dropout_prob(float): probability of an element to be zeroed. Default: 0.5
attn_dropout_prob(float): probability of an attention score to be zeroed. Default: 0.5
hidden_act(str): activation function in feed-forward layer. Default: 'gelu'. Candidates: 'gelu', 'relu', 'swish', 'tanh', 'sigmoid'
layer_norm_eps(float): a value added to the denominator for numerical stability. Default: 1e-12
forward(hidden_states, attention_mask, output_all_encoded_layers=True)
- Parameters
hidden_states (torch.Tensor) – the input of the TransformerEncoder
attention_mask (torch.Tensor) – the attention mask for the input hidden_states
output_all_encoded_layers (bool) – whether to output all transformer layers' outputs
- Returns
if output_all_encoded_layers is True, return a list consisting of all transformer layers' outputs; otherwise return a list containing only the output of the last transformer layer.
- Return type
all_encoder_layers (list)
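Example (illustrative sketch, not part of the original docstring; the attention mask is assumed to follow the additive convention, 0 to keep a position and a large negative value to mask it, broadcastable over heads and query positions):
>>> import torch
>>> from recbole.model.layers import TransformerEncoder
>>> encoder = TransformerEncoder(n_layers=2, n_heads=2, hidden_size=64, inner_size=256)
>>> hidden_states = torch.randn(128, 50, 64)        # [batch, seq_len, hidden_size]
>>> attention_mask = torch.zeros(128, 1, 1, 50)     # additive mask, nothing masked
>>> layers = encoder(hidden_states, attention_mask, output_all_encoded_layers=False)
>>> layers[-1].size()
torch.Size([128, 50, 64])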
class recbole.model.layers.TransformerLayer(n_heads, hidden_size, intermediate_size, hidden_dropout_prob, attn_dropout_prob, hidden_act, layer_norm_eps)
Bases: torch.nn.modules.module.Module
One transformer layer consists of a multi-head self-attention layer and a point-wise feed-forward layer.
- Parameters
hidden_states (torch.Tensor) – the input of the multi-head self-attention sublayer
attention_mask (torch.Tensor) – the attention mask for the multi-head self-attention sublayer
- Returns
the output of the point-wise feed-forward sublayer, which is the output of the transformer layer
- Return type
feedforward_output (torch.Tensor)
forward(hidden_states, attention_mask)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
class recbole.model.layers.VanillaAttention(hidden_dim, attn_dim)
Bases: torch.nn.modules.module.Module
Vanilla attention layer, implemented by a linear layer.
- Parameters
input_tensor (torch.Tensor) – the input of the attention layer
- Returns
the outputs of the attention layer; weights (torch.Tensor): the attention weights
- Return type
hidden_states (torch.Tensor)
forward(input_tensor)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
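Example (illustrative sketch, not part of the original docstring; the layer is assumed to attend over the second-to-last dimension and to return both the pooled hidden states and the attention weights, per the documented return values):
>>> import torch
>>> from recbole.model.layers import VanillaAttention
>>> att = VanillaAttention(hidden_dim=64, attn_dim=32)
>>> input_tensor = torch.randn(128, 4, 64)    # e.g. [batch, num_feature_fields, hidden]
>>> hidden_states, weights = att(input_tensor)
>>> hidden_states.size(), weights.size()
(torch.Size([128, 64]), torch.Size([128, 4]))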