SGL¶
- Reference:
Jiancan Wu et al. “SGL: Self-supervised Graph Learning for Recommendation” in SIGIR 2021.
- Reference code:
- class recbole.model.general_recommender.sgl.SGL(config, dataset)[source]¶
Bases:
GeneralRecommender
SGL is a GCN-based recommender model.
SGL supplements the classical supervised task of recommendation with an auxiliary self-supervised task, which reinforces node representation learning via self-discrimination. Specifically, SGL generates multiple views of a node, maximizing the agreement between different views of the same node compared to that of other nodes. SGL devises three operators to generate the views (node dropout, edge dropout, and random walk) that change the graph structure in different manners.
We implement the model following the original author's implementation, using a pairwise training mode.
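The model can be trained through RecBole's standard quick-start entry point. A minimal sketch follows; the hyper-parameter names in config_dict are assumptions and should be checked against the SGL configuration file shipped with RecBole.

```python
from recbole.quick_start import run_recbole

# Train and evaluate SGL on the built-in ml-100k dataset.
# The hyper-parameter names below are assumptions; verify them against
# the SGL configuration shipped with RecBole.
run_recbole(
    model='SGL',
    dataset='ml-100k',
    config_dict={
        'type': 'ED',        # view generator: node dropout (ND), edge dropout (ED), or random walk (RW)
        'drop_ratio': 0.1,   # dropout ratio used when building the two subgraphs
        'ssl_tau': 0.5,      # temperature of the contrastive loss
        'ssl_weight': 0.05,  # weight of the self-supervised loss term
    },
)
```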
- calc_bpr_loss(user_emd, item_emd, user_list, pos_item_list, neg_item_list)[source]¶
Calculate the pairwise Bayesian Personalized Ranking (BPR) loss and the parameter regularization loss; an illustrative sketch follows this entry.
- Parameters:
user_emd (torch.Tensor) – Ego embedding of all users after forwarding.
item_emd (torch.Tensor) – Ego embedding of all items after forwarding.
user_list (torch.Tensor) – List of user IDs in the batch.
pos_item_list (torch.Tensor) – List of positive examples.
neg_item_list (torch.Tensor) – List of negative examples.
- Returns:
Loss of the BPR task and parameter regularization.
- Return type:
torch.Tensor
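A minimal sketch of this computation, assuming the embedding tables are indexed by the ID tensors and the L2 term is averaged over the batch (the function name and the reg_weight argument are hypothetical, not RecBole's exact code):

```python
import torch
import torch.nn.functional as F

def bpr_with_reg(user_emd, item_emd, user_list, pos_item_list, neg_item_list,
                 reg_weight=1e-5):
    """Illustrative BPR + L2 regularization; reg_weight stands in for the
    model's regularization hyper-parameter (name assumed)."""
    u = user_emd[user_list]            # [batch_size, dim]
    pos = item_emd[pos_item_list]      # [batch_size, dim]
    neg = item_emd[neg_item_list]      # [batch_size, dim]

    pos_scores = (u * pos).sum(dim=1)
    neg_scores = (u * neg).sum(dim=1)
    bpr = -F.logsigmoid(pos_scores - neg_scores).mean()

    # L2 penalty on the embeddings that appear in this batch.
    reg = (u.norm(2).pow(2) + pos.norm(2).pow(2) + neg.norm(2).pow(2)) / u.shape[0]
    return bpr + reg_weight * reg
```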
- calc_ssl_loss(user_list, pos_item_list, user_sub1, user_sub2, item_sub1, item_sub2)[source]¶
Calculate the loss of the self-supervised task; see the sketch after this entry.
- Parameters:
user_list (torch.Tensor) – List of user IDs in the batch.
pos_item_list (torch.Tensor) – List of positive examples.
user_sub1 (torch.Tensor) – Ego embedding of all users in the first subgraph after forwarding.
user_sub2 (torch.Tensor) – Ego embedding of all users in the second subgraph after forwarding.
item_sub1 (torch.Tensor) – Ego embedding of all items in the first subgraph after forwarding.
item_sub2 (torch.Tensor) – Ego embedding of all items in the second subgraph after forwarding.
- Returns:
Loss of the self-supervised task.
- Return type:
torch.Tensor
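The self-supervised term is an InfoNCE-style contrastive loss between the two subgraph views, computed separately for users and items and then summed. A minimal sketch for one node type, with tau playing the role of the temperature hyper-parameter (function and argument names are hypothetical):

```python
import torch
import torch.nn.functional as F

def info_nce(view1, view2, idx, tau=0.5):
    """Illustrative InfoNCE loss for one node type (users or items): the two
    views of the same node form the positive pair, while every other node in
    the second view acts as a negative."""
    z1 = F.normalize(view1[idx], dim=1)        # anchor nodes in view 1
    z2 = F.normalize(view2[idx], dim=1)        # the same nodes in view 2
    all2 = F.normalize(view2, dim=1)           # all nodes in view 2

    pos = (z1 * z2).sum(dim=1) / tau                    # positive-pair agreement
    ttl = torch.matmul(z1, all2.t()) / tau              # agreement with every node
    return -(pos - torch.logsumexp(ttl, dim=1)).mean()
```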
- calculate_loss(interaction)[source]¶
Calculate the training loss for a batch of data.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Training loss, shape: []
- Return type:
torch.Tensor
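A sketch of how the joint objective can be composed from the methods documented on this page, assuming forward returns the user and item embedding tables (the helper name and argument order are hypothetical):

```python
def sgl_joint_loss(model, graph, sub_graph1, sub_graph2,
                   user_list, pos_item_list, neg_item_list):
    """Illustrative composition of the SGL objective; `model` is an SGL
    instance and `forward` is assumed to return (user_emd, item_emd)."""
    user_emd, item_emd = model.forward(graph)
    user_sub1, item_sub1 = model.forward(sub_graph1)
    user_sub2, item_sub2 = model.forward(sub_graph2)

    bpr = model.calc_bpr_loss(user_emd, item_emd,
                              user_list, pos_item_list, neg_item_list)
    ssl = model.calc_ssl_loss(user_list, pos_item_list,
                              user_sub1, user_sub2, item_sub1, item_sub2)
    return bpr + ssl   # scalar, shape []
```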
- create_adjust_matrix(is_sub: bool)[source]¶
Get the normalized interaction matrix of users and items.
Construct the square adjacency matrix over users and items from the training data and apply symmetric Laplacian normalization. If a subgraph is requested (is_sub=True), it may first be perturbed by node dropout or edge dropout; a normalization sketch follows this entry.
- Returns:
csr_matrix of the normalized interaction matrix.
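A sketch of the symmetric normalization this method performs, built directly with SciPy (the helper name is hypothetical; the real method also handles the node/edge dropout case):

```python
import numpy as np
import scipy.sparse as sp

def normalize_bipartite(inter_matrix: sp.csr_matrix) -> sp.csr_matrix:
    """Symmetric normalization D^{-1/2} A D^{-1/2} of the user-item graph
    (illustrative; create_adjust_matrix performs this internally)."""
    # Square adjacency over users + items, with the interaction matrix
    # placed in the two off-diagonal blocks.
    A = sp.bmat([[None, inter_matrix], [inter_matrix.T, None]], format='csr')

    deg = np.asarray(A.sum(axis=1)).flatten()
    with np.errstate(divide='ignore'):
        d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0
    D = sp.diags(d_inv_sqrt)
    return (D @ A @ D).tocsr()
```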
- csr2tensor(matrix: csr_matrix)[source]¶
Convert csr_matrix to tensor.
- Parameters:
matrix (scipy.csr_matrix) – Sparse matrix to be converted.
- Returns:
Transformed sparse matrix.
- Return type:
torch.sparse.FloatTensor
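A minimal sketch of such a conversion via the COO format (the helper name is hypothetical):

```python
import numpy as np
import scipy.sparse as sp
import torch

def csr_to_sparse_tensor(matrix: sp.csr_matrix) -> torch.Tensor:
    """Illustrative csr_matrix -> torch sparse COO tensor conversion."""
    coo = matrix.tocoo()
    indices = torch.from_numpy(np.vstack([coo.row, coo.col])).long()
    values = torch.from_numpy(coo.data).float()
    return torch.sparse_coo_tensor(indices, values, size=coo.shape)
```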
- forward(graph)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
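For orientation, SGL propagates the user and item embeddings over the normalized graph in a LightGCN-style fashion and averages the per-layer outputs; a hedged sketch of that propagation (not the exact implementation):

```python
import torch

def propagate(graph, ego_embeddings, n_layers):
    """LightGCN-style propagation over a normalized sparse adjacency tensor
    covering users and items (illustrative only)."""
    all_embeddings = [ego_embeddings]
    for _ in range(n_layers):
        ego_embeddings = torch.sparse.mm(graph, ego_embeddings)
        all_embeddings.append(ego_embeddings)
    # Average the representations of every layer, including layer 0.
    return torch.stack(all_embeddings, dim=1).mean(dim=1)
```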
- full_sort_predict(interaction)[source]¶
Full-sort prediction function. Given users, calculate the scores between those users and all candidate items.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Predicted scores for given users and all candidate items, shape: [n_batch_users * n_candidate_items]
- Return type:
torch.Tensor
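A sketch of the scoring this amounts to, assuming user_emd and item_emd are the forwarded embedding tables (the helper name is hypothetical):

```python
import torch

def full_sort_scores(user_emd, item_emd, user_ids):
    """Illustrative full-sort scoring: one score per (batch user, item) pair,
    flattened to the shape [n_batch_users * n_candidate_items] noted above."""
    scores = torch.matmul(user_emd[user_ids], item_emd.t())  # [n_batch_users, n_items]
    return scores.view(-1)
```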
- graph_construction()[source]¶
Generate the views of each node with the three operators: node dropout, edge dropout, and random walk.
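An illustrative outline using only the methods documented on this page (the helper name is hypothetical; the real method stores the resulting graphs on the model instance, and the random-walk operator samples a separate subgraph per layer in the original paper):

```python
def build_two_views(model):
    """Two independently perturbed graphs give every node two correlated views."""
    sub_graph1 = model.csr2tensor(model.create_adjust_matrix(is_sub=True))
    sub_graph2 = model.csr2tensor(model.create_adjust_matrix(is_sub=True))
    return sub_graph1, sub_graph2
```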
- input_type = 2¶
- predict(interaction)[source]¶
Predict the scores between users and items.
- Parameters:
interaction (Interaction) – Interaction class of the batch.
- Returns:
Predicted scores for given users and items, shape: [batch_size]
- Return type:
torch.Tensor
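A sketch of the pointwise scoring, assuming forwarded embedding tables (the helper name is hypothetical):

```python
import torch

def predict_scores(user_emd, item_emd, user_ids, item_ids):
    """Illustrative pointwise scoring for the (user, item) pairs of one batch."""
    return torch.mul(user_emd[user_ids], item_emd[item_ids]).sum(dim=1)  # [batch_size]
```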
- rand_sample(high, size=None, replace=True)[source]¶
Randomly sample indices, used when discarding some nodes or edges.
- Parameters:
high (int) – Upper limit of the index value.
size (int) – Array size after sampling.
replace (bool) – Whether sampling is done with replacement.
- Returns:
Array of sampled indices, shape: [size]
- Return type:
numpy.ndarray
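In effect this is a thin wrapper around NumPy sampling; a sketch matching the documented signature, though not necessarily the exact implementation:

```python
import numpy as np

def rand_sample(high, size=None, replace=True):
    """Draw `size` indices below `high`, e.g. to choose which nodes or edges
    survive a dropout pass (illustrative)."""
    return np.random.choice(high, size=size, replace=replace)
```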
- train(mode: bool = True)[source]¶
Override the train method of the base class. The subgraphs are reconstructed each time it is called.
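A toy, self-contained illustration of the pattern: an nn.Module that re-samples its augmentation every time train() switches it into training mode (names and internals are hypothetical, not RecBole code):

```python
import torch
import torch.nn as nn

class ResampleOnTrain(nn.Module):
    """Toy module mirroring the behaviour: a random 'subgraph' (here, an edge
    subset) is re-sampled whenever .train() enters training mode."""

    def __init__(self, n_edges=10, keep_ratio=0.8):
        super().__init__()
        self.n_edges, self.keep_ratio = n_edges, keep_ratio
        self.kept_edges = None

    def train(self, mode: bool = True):
        if mode:
            mask = torch.rand(self.n_edges) < self.keep_ratio
            self.kept_edges = mask.nonzero(as_tuple=True)[0]
        return super().train(mode)

model = ResampleOnTrain()
model.train()   # samples one view
model.train()   # a different view is drawn, as at the start of a new epoch
```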
- training: bool¶