NAIS¶
- Reference:
Xiangnan He et al. “NAIS: Neural Attentive Item Similarity Model for Recommendation.” in TKDE 2018.
- Reference code:
https://github.com/AaronHeee/Neural-Attentive-Item-Similarity-Model
- class recbole.model.general_recommender.nais.NAIS(config, dataset)[source]¶
Bases:
GeneralRecommender
NAIS is an attention-based model that can distinguish which historical items in a user profile are more important for a prediction. We implement the model following the original author's design, using a pointwise training mode.
Note
Instead of forming a mini-batch from all training instances of a randomly sampled user, as described in the original paper, we train the model on randomly sampled interactions.
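A minimal usage sketch for running this model through RecBole's quick-start entry point run_recbole; the dataset name and the hyper-parameter names in config_dict are illustrative assumptions, not fixed requirements:

```python
# A minimal sketch, assuming RecBole's quick-start entry point `run_recbole`;
# the dataset name and the hyper-parameters below are illustrative assumptions.
from recbole.quick_start import run_recbole

run_recbole(
    model='NAIS',
    dataset='ml-100k',
    config_dict={
        'algorithm': 'prod',  # assumed attention mode: 'prod' or 'concat'
        'weight_size': 64,    # assumed size of the attention hidden layer
        'beta': 0.5,          # assumed smoothing exponent in the softmax denominator
    },
)
```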
- attention_mlp(inter, target)[source]¶
Attention layers supporting the prod and concat modes (a sketch of both modes follows this entry).
- Parameters:
inter (torch.Tensor) – the embedding of history items
target (torch.Tensor) – the embedding of target items
- Returns:
the result of attention
- Return type:
torch.Tensor
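The following is a minimal, self-contained sketch of the two attention modes named above (prod: element-wise product of history and target embeddings; concat: concatenation of the two); the layer names are chosen for illustration rather than taken from the implementation:

```python
import torch
import torch.nn as nn

# A minimal sketch of the two attention modes; layer names are illustrative.
# Shapes: inter is [batch_size, max_len, embedding_size] (history item
# embeddings), target is [batch_size, embedding_size] (target item embeddings).
embedding_size, weight_size = 64, 64
prod_mlp = nn.Sequential(nn.Linear(embedding_size, weight_size), nn.ReLU())
concat_mlp = nn.Sequential(nn.Linear(2 * embedding_size, weight_size), nn.ReLU())
weight_layer = nn.Linear(weight_size, 1, bias=False)

def attention_mlp_sketch(inter, target, algorithm='prod'):
    if algorithm == 'prod':
        # element-wise product between each history item and the target item
        hidden = prod_mlp(inter * target.unsqueeze(1))
    else:
        # concatenate each history item embedding with the target embedding
        expanded = target.unsqueeze(1).expand_as(inter)
        hidden = concat_mlp(torch.cat([inter, expanded], dim=-1))
    # project to one attention logit per history item: [batch_size, max_len]
    return weight_layer(hidden).squeeze(-1)

logits = attention_mlp_sketch(torch.randn(2, 5, embedding_size),
                              torch.randn(2, embedding_size))
print(logits.shape)  # torch.Size([2, 5])
```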
- calculate_loss(interaction)[source]¶
Calculate the training loss for a batch of data.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Training loss, shape: []
- Return type:
torch.Tensor
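A minimal sketch of a pointwise objective of the kind used here, assuming the scores are raw logits and using binary cross-entropy; the actual implementation may apply the sigmoid inside the model (and use BCELoss) and add the regularization term computed by reg_loss:

```python
import torch
import torch.nn as nn

# A minimal sketch of a pointwise objective: binary cross-entropy over the
# predicted scores plus a regularization term. Assumes `scores` are raw
# logits; the actual model may output probabilities and use BCELoss instead.
def pointwise_loss_sketch(scores, labels, reg_term):
    bce = nn.BCEWithLogitsLoss()
    return bce(scores, labels.float()) + reg_term  # scalar, shape []

scores = torch.randn(8)             # predicted scores for 8 interactions
labels = torch.randint(0, 2, (8,))  # 1 = observed, 0 = negative sample
print(pointwise_loss_sketch(scores, labels, reg_term=torch.tensor(0.0)))
```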
- forward(user, item)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
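A small, self-contained illustration of this note, using a toy module rather than NAIS itself: forward hooks only fire when the module is called as an instance.

```python
import torch
import torch.nn as nn

# Toy illustration (not NAIS itself): forward hooks only fire when the module
# is called as an instance; calling .forward() directly skips them.
class Toy(nn.Module):
    def forward(self, x):
        return x * 2

toy = Toy()
toy.register_forward_hook(lambda module, inputs, output: print("hook fired"))
toy(torch.ones(1))          # prints "hook fired"
toy.forward(torch.ones(1))  # hook is silently skipped
```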
- full_sort_predict(interaction)[source]¶
Full-sort prediction function. Given users, calculate the scores between these users and all candidate items.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Predicted scores for given users and all candidate items, shape: [n_batch_users * n_candidate_items]
- Return type:
torch.Tensor
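A sketch of post-processing the flattened output, assuming the shape stated above; the score values here are random placeholders:

```python
import torch

# Illustration of post-processing the flattened scores returned above;
# the values are random placeholders.
n_batch_users, n_candidate_items = 4, 100
scores = torch.randn(n_batch_users * n_candidate_items)
score_matrix = scores.view(n_batch_users, n_candidate_items)
topk_scores, topk_items = torch.topk(score_matrix, k=10, dim=1)
print(topk_items.shape)  # torch.Size([4, 10])
```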
- get_history_info(dataset)[source]¶
Get the users' historical interaction information.
- Parameters:
dataset (DataSet) – train dataset
- Returns:
(history_item_matrix, history_lens, mask_mat)
- Return type:
tuple
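A minimal sketch of the kind of structures returned here, under the assumption that the history matrix is padded per user, history_lens stores the number of real items per row, and mask_mat marks the valid positions:

```python
import torch

# Assumed layout: a padded per-user history matrix, per-user history lengths,
# and a 0/1 mask marking the valid (non-padding) positions.
history_item_matrix = torch.tensor([[3, 7, 0, 0],
                                    [5, 2, 9, 0]])   # padded item ids per user
history_lens = torch.tensor([2, 3])                  # real history length per user
max_len = history_item_matrix.size(1)
positions = torch.arange(max_len).unsqueeze(0)       # [1, max_len]
mask_mat = (positions < history_lens.unsqueeze(1)).float()
print(mask_mat)
# tensor([[1., 1., 0., 0.],
#         [1., 1., 1., 0.]])
```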
- input_type = 1¶
- mask_softmax(similarity, logits, bias, item_num, batch_mask_mat)[source]¶
Apply softmax over the unmasked user history items and get the final output.
- Parameters:
similarity (torch.Tensor) – the similarity between the history items and target items
logits (torch.Tensor) – the initial attention weights of the history items
bias (torch.Tensor) – the bias terms of the target items
item_num (torch.Tensor) – user history interaction lengths
batch_mask_mat (torch.Tensor) – the mask of user history interactions
- Returns:
final output
- Return type:
torch.Tensor
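A minimal sketch of this masked, smoothed softmax, assuming the formulation from the NAIS paper: the masked exponentials are normalized by their sum raised to a smoothing exponent beta, the weighted similarities are scaled by the history length raised to -alpha, and a bias and sigmoid produce the final output:

```python
import torch

# A minimal sketch of the masked, smoothed softmax; alpha and beta are the
# smoothing hyper-parameters from the NAIS paper, values here are assumptions.
def mask_softmax_sketch(similarity, logits, bias, item_num, batch_mask_mat,
                        alpha=0.0, beta=0.5):
    exp_logits = torch.exp(logits) * batch_mask_mat           # zero out padding
    exp_sum = exp_logits.sum(dim=1, keepdim=True).pow(beta)   # smoothed normalizer
    weights = exp_logits / exp_sum                            # attention weights
    coeff = item_num.squeeze(-1).float().pow(-alpha)          # history-length scaling
    return torch.sigmoid(coeff * (weights * similarity).sum(dim=1) + bias)

similarity = torch.randn(2, 4)       # history-target similarities
logits = torch.randn(2, 4)           # attention logits
bias = torch.zeros(2)                # target item biases
item_num = torch.tensor([[2], [3]])  # history lengths
mask = torch.tensor([[1., 1., 0., 0.],
                     [1., 1., 1., 0.]])
print(mask_softmax_sketch(similarity, logits, bias, item_num, mask).shape)
```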
- predict(interaction)[source]¶
Predict the scores between users and items.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Predicted scores for given users and items, shape: [batch_size]
- Return type:
torch.Tensor
- reg_loss()[source]¶
Calculate the regularization loss for the embedding layers and MLP layers.
- Returns:
reg loss
- Return type:
torch.Tensor
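A minimal sketch of such a regularization term, assuming squared L2 norms over two item embedding tables and one MLP weight matrix; the reg_weights triple and all tensor shapes are illustrative:

```python
import torch

# A minimal sketch of L2 regularization over two item embedding tables and
# one MLP weight matrix; reg_weights and all shapes are illustrative.
def reg_loss_sketch(item_src_embedding, item_dst_embedding, mlp_weight,
                    reg_weights=(1e-7, 1e-7, 1e-5)):
    w1, w2, w3 = reg_weights
    return (w1 * item_src_embedding.norm(2).pow(2)
            + w2 * item_dst_embedding.norm(2).pow(2)
            + w3 * mlp_weight.norm(2).pow(2))

print(reg_loss_sketch(torch.randn(100, 64), torch.randn(100, 64),
                      torch.randn(64, 64)))
```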
- softmax(similarity, logits, item_num, bias)[source]¶
Apply softmax over the user history features and get the final output.
- Parameters:
similarity (torch.Tensor) – the similarity between the history items and target items
logits (torch.Tensor) – the initial attention weights of the history items
item_num (torch.Tensor) – user history interaction lengths
bias (torch.Tensor) – the bias terms of the target items
- Returns:
final output
- Return type:
torch.Tensor
- training: bool¶
- user_forward(user_input, item_num, repeats=None, pred_slc=None)[source]¶
Forward computation for a single user's history, scoring that history against candidate items (see the sketch after this entry).
- Parameters:
user_input (torch.Tensor) – user input tensor
item_num (torch.Tensor) – user history interaction lengths
repeats (int, optional) – the number of items to be evaluated
pred_slc (torch.Tensor, optional) – contiguous indices that select the items to be evaluated; if pred_slc is None, all items are evaluated
- Returns:
the prediction scores for the given user
- Return type:
torch.Tensor
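A minimal sketch of per-user evaluation as described here: one user's history is repeated once per candidate item so that attention can be computed between the history and every candidate; all names are illustrative:

```python
import torch

# A minimal sketch of per-user evaluation: one user's padded history is
# repeated once per candidate item, so attention can be computed between the
# history and every candidate. `score_fn` stands in for the model's scoring.
def user_forward_sketch(user_history, candidate_items, score_fn):
    repeats = candidate_items.size(0)
    repeated_history = user_history.unsqueeze(0).expand(repeats, -1)
    return score_fn(repeated_history, candidate_items)  # [n_candidates]

history = torch.tensor([3, 7, 12])
candidates = torch.arange(5)
dummy_score = lambda hist, items: torch.randn(items.size(0))
print(user_forward_sketch(history, candidates, dummy_score).shape)
```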