BERT4Rec

Reference:

Fei Sun et al. “BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer.” In CIKM 2019.

Reference code:

The authors’ TensorFlow implementation: https://github.com/FeiSun/BERT4Rec

class recbole.model.sequential_recommender.bert4rec.BERT4Rec(config, dataset)[source]

Bases: recbole.model.abstract_recommender.SequentialRecommender
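
A minimal way to run this model end to end is RecBole’s quick-start helper; a sketch, assuming a RecBole-formatted dataset (ml-100k here is purely an example):

    from recbole.quick_start import run_recbole

    # Train and evaluate BERT4Rec with default settings; 'ml-100k' is only
    # an illustrative dataset name, any RecBole-prepared dataset works.
    run_recbole(model='BERT4Rec', dataset='ml-100k')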

calculate_loss(interaction)[source]

Calculate the training loss for a batch of data.

Parameters

interaction (Interaction) – Interaction class of the batch.

Returns

Training loss, shape: []

Return type

torch.Tensor
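
RecBole’s Trainer calls this method once per batch; a minimal hand-rolled training step would look roughly like the following sketch (optimizer and data-loader setup omitted, names illustrative):

    # interaction comes from a RecBole training data loader.
    loss = model.calculate_loss(interaction)  # scalar torch.Tensor, shape []
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()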

forward(item_seq)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
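
Per the note above, invoke the module instance rather than forward() directly; a sketch, assuming item_seq is a padded LongTensor of item ids:

    import torch

    item_seq = torch.tensor([[1, 2, 3, 4, 5]])  # [batch_size, seq_len], padded item ids
    seq_output = model(item_seq)                # runs forward() plus any registered hooks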

full_sort_predict(interaction)[source]

Full sort prediction function. Given users, calculate the scores between each user and all candidate items.

Parameters

interaction (Interaction) – Interaction class of the batch.

Returns

Predicted scores for given users and all candidate items, shape: [n_batch_users * n_candidate_items]

Return type

torch.Tensor
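
A sketch of turning the returned scores into top-k recommendations (dataset.item_num and k=10 are illustrative; the view call just normalizes the layout to one row per user):

    import torch

    scores = model.full_sort_predict(interaction)       # scores over all candidate items
    scores = scores.view(-1, dataset.item_num)          # [n_batch_users, n_candidate_items]
    topk_scores, topk_items = torch.topk(scores, k=10)  # top-10 item ids per user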

multi_hot_embed(masked_index, max_length)[source]

To save memory, the loss is computed only at the masked positions. This method generates a multi-hot vector marking each masked position in the masked sequence; that vector is then used to gather the hidden representations at those positions.

Examples

sequence: [1 2 3 4 5]

masked_sequence: [1 mask 3 mask 5]

masked_index: [1, 3]

max_length: 5

multi_hot_embed: [[0 1 0 0 0], [0 0 0 1 0]]
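
The worked example above can be reproduced with a one-hot encoding; a minimal stand-alone sketch (not the library’s internal code):

    import torch
    import torch.nn.functional as F

    masked_index = torch.tensor([1, 3])  # masked positions (0-indexed)
    max_length = 5
    multi_hot = F.one_hot(masked_index, num_classes=max_length)
    # tensor([[0, 1, 0, 0, 0],
    #         [0, 0, 0, 1, 0]])

    # Gathering: a matrix product pulls out the hidden states at the masked positions.
    hidden = torch.randn(max_length, 8)          # dummy [max_length, hidden_size] states
    masked_hidden = multi_hot.float() @ hidden   # [n_masked_positions, hidden_size]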

predict(interaction)[source]

Predict the scores between users and items.

Parameters

interaction (Interaction) – Interaction class of the batch.

Returns

Predicted scores for given users and items, shape: [batch_size]

Return type

torch.Tensor
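
A sketch of scoring and ranking the (user, item) pairs in a batch, as used in sampled evaluation (names illustrative):

    import torch

    scores = model.predict(interaction)              # one score per pair, shape [batch_size]
    ranked = torch.argsort(scores, descending=True)  # higher score = more relevant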

reconstruct_test_data(item_seq, item_seq_len)[source]

Add a mask token at the last position of each sequence, according to the lengths in item_seq.
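
A stand-alone sketch of the effect, assuming the padded sequence still has room for the extra token and using a hypothetical mask id (RecBole reserves the id after the last item id):

    import torch

    MASK = 100                                  # hypothetical mask token id
    item_seq = torch.tensor([[1, 2, 3, 0, 0]])  # padded item sequence
    item_seq_len = torch.tensor([3])            # number of real items per sequence
    batch_idx = torch.arange(item_seq.size(0))
    item_seq[batch_idx, item_seq_len] = MASK    # -> [[1, 2, 3, 100, 0]]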

training: bool – Inherited from torch.nn.Module; True when the module is in training mode.