NextItNet

Reference:

Fajie Yuan et al., “A Simple Convolutional Generative Network for Next Item Recommendation” in WSDM 2019.

Reference code:
class recbole.model.sequential_recommender.nextitnet.NextItNet(config, dataset)[source]

Bases: recbole.model.abstract_recommender.SequentialRecommender

The network architecture of the NextItNet model consists of a stack of holed (dilated) convolutional layers, which efficiently increase the receptive field without relying on pooling operations. Residual block structures are also used to ease the optimization of much deeper networks.
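As a sketch of why stacking dilated convolutions widens the receptive field, the arithmetic below assumes a NextItNet-style schedule where the dilation doubles per layer (1, 2, 4, ...); the function name is illustrative, not part of RecBole:

```python
# Receptive field of a stack of 1D dilated causal convolutions.
# With kernel size k and dilation d, each layer extends the receptive
# field by (k - 1) * d positions, so doubling dilations grows the
# field exponentially in depth -- no pooling needed.

def receptive_field(kernel_size, dilations):
    field = 1
    for d in dilations:
        field += (kernel_size - 1) * d
    return field

# Four layers with kernel size 3 and dilations 1, 2, 4, 8:
print(receptive_field(3, [1, 2, 4, 8]))  # 31
```

With the same depth but all dilations fixed at 1, the field would only reach 9, which is why the holed design matters for long sequences.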

Note

As the paper states, for comparison purposes we only predict the next item in evaluation and then stop the generating process. Although residual block (a) has fewer parameters than residual block (b), block (b) performs better, so this model uses residual block (b). In addition, when the dilation is not equal to 1, training may be slow; to speed it up, set the parameter “reproducibility” to False.
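A minimal sketch of passing that setting through a RecBole-style config dict; only "reproducibility" is mentioned in this document, so treat the surrounding usage as an assumption:

```python
# Hypothetical config fragment: only the "reproducibility" key is
# documented above; how it is passed to the runner is an assumption.
config_dict = {
    "reproducibility": False,  # speeds up training when dilations != 1
}
print(config_dict["reproducibility"])  # False
```

The same key can equivalently be set in a YAML config file, following RecBole's usual parameter-loading conventions.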

calculate_loss(interaction)[source]

Calculate the training loss for a batch of data.

Parameters

interaction (Interaction) – Interaction class of the batch.

Returns

Training loss, shape: []

Return type

torch.Tensor
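To illustrate why the returned loss has shape [], here is a hedged sketch of a next-item training objective as cross-entropy over all items; the shapes and names are illustrative, not RecBole's exact internals:

```python
import torch
import torch.nn.functional as F

# Illustrative next-item loss: cross-entropy between per-user scores
# over all items and the ground-truth next item. F.cross_entropy
# reduces over the batch by default, yielding a scalar tensor.
batch_size, n_items = 4, 100
logits = torch.randn(batch_size, n_items)          # scores per user
next_items = torch.randint(0, n_items, (batch_size,))  # ground truth

loss = F.cross_entropy(logits, next_items)
print(loss.shape)  # torch.Size([]) -- a scalar, matching shape []
```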

forward(item_seq)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
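The distinction in this note can be shown with a toy module (the class and hook below are illustrative, not part of RecBole): calling the instance runs registered hooks, while calling `forward` directly skips them.

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def forward(self, x):
        return x * 2

model = Toy()
calls = []
# Forward hooks receive (module, input, output) after each call.
model.register_forward_hook(lambda mod, inp, out: calls.append(out))

y = model(torch.ones(1))      # instance call: hook fires
model.forward(torch.ones(1))  # direct forward: hook silently skipped

print(len(calls))  # 1
```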

full_sort_predict(interaction)[source]

Full sort prediction function. Given users, calculate the scores between the users and all candidate items.

Parameters

interaction (Interaction) – Interaction class of the batch.

Returns

Predicted scores for given users and all candidate items, shape: [n_batch_users * n_candidate_items]

Return type

torch.Tensor
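A hedged sketch of the full-sort scoring step: one hidden state per user is scored against every item embedding by inner product, giving one row of scores per user (shapes and names here are illustrative, not RecBole's exact internals):

```python
import torch

# Illustrative full-sort scoring: score all candidate items for each
# user in the batch with a single matrix product.
batch_users, hidden, n_items = 4, 8, 100
seq_output = torch.randn(batch_users, hidden)       # one state per user
item_embeddings = torch.randn(n_items, hidden)      # all item embeddings

scores = seq_output @ item_embeddings.T
print(scores.shape)  # torch.Size([4, 100])
```

Flattening this [batch_users, n_items] matrix gives the documented shape [n_batch_users * n_candidate_items].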

predict(interaction)[source]

Predict the scores between users and items.

Parameters

interaction (Interaction) – Interaction class of the batch.

Returns

Predicted scores for given users and items, shape: [batch_size]

Return type

torch.Tensor
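In contrast to full-sort prediction, pairwise prediction scores each user against a single candidate item; a hedged sketch with an elementwise product summed over the hidden dimension (illustrative shapes, not RecBole's exact internals):

```python
import torch

# Illustrative pairwise scoring: one candidate item per user, so the
# result is a single score per batch entry.
batch_size, hidden = 4, 8
seq_output = torch.randn(batch_size, hidden)     # one state per user
test_item_emb = torch.randn(batch_size, hidden)  # one item per user

scores = (seq_output * test_item_emb).sum(dim=-1)
print(scores.shape)  # torch.Size([4]) -- matching shape [batch_size]
```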

reg_loss_rb()[source]

Compute the L2 regularization loss on the parameters of the residual blocks.
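A minimal sketch of such a penalty, in the spirit of reg_loss_rb: sum the L2 norms of a module's weight parameters (the helper name and the weight-only filter are assumptions, not RecBole's exact code):

```python
import torch
import torch.nn as nn

def l2_reg(module):
    """Sum of L2 norms of all parameters whose name contains 'weight'."""
    loss = torch.zeros(())
    for name, param in module.named_parameters():
        if "weight" in name:
            loss = loss + param.norm(2)
    return loss

block = nn.Conv1d(4, 4, kernel_size=3)
print(l2_reg(block).item() >= 0.0)  # True
```

The resulting scalar would typically be scaled by a regularization weight and added to the training loss.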

training: bool
class recbole.model.sequential_recommender.nextitnet.ResidualBlock_a(in_channel, out_channel, kernel_size=3, dilation=None)[source]

Bases: torch.nn.modules.module.Module

Residual block (a) in the paper

conv_pad(x, dilation)[source]

Dropout-mask: to avoid the future-information leakage problem, the paper proposes a masking-based dropout trick for the 1D dilated convolution that prevents the network from seeing future items. The one-dimensional transformation is also completed in this function.
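The core of preventing future leakage in a 1D dilated convolution can be sketched as causal left-padding: padding (kernel_size - 1) * dilation zeros on the left so each output position depends only on current and past items. This is a hedged sketch assuming input shape (batch, channels, seq_len), not RecBole's exact code:

```python
import torch
import torch.nn.functional as F

def causal_pad(x, kernel_size, dilation):
    """Left-pad the sequence dim so a dilated conv cannot see the future."""
    pad_len = (kernel_size - 1) * dilation
    # F.pad pads the last dimension as (left, right).
    return F.pad(x, (pad_len, 0))

x = torch.ones(1, 2, 5)                      # (batch, channels, seq_len)
y = causal_pad(x, kernel_size=3, dilation=2)
print(y.shape)  # torch.Size([1, 2, 9]) -- 4 zeros prepended
```

A convolution applied to the padded tensor then produces an output of the original length whose position t never sees inputs after t.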

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class recbole.model.sequential_recommender.nextitnet.ResidualBlock_b(in_channel, out_channel, kernel_size=3, dilation=None)[source]

Bases: torch.nn.modules.module.Module

Residual block (b) in the paper

conv_pad(x, dilation)[source]

Dropout-mask: to avoid the future-information leakage problem, the paper proposes a masking-based dropout trick for the 1D dilated convolution that prevents the network from seeing future items. The one-dimensional transformation is also completed in this function.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool