DiffRec¶
Introduction¶
Title: Diffusion Recommender Model
Authors: Wenjie Wang, Yiyan Xu, Fuli Feng, Xinyu Lin, Xiangnan He, Tat-Seng Chua
Abstract: Generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are widely utilized to model the generative process of user interactions. However, they suffer from intrinsic limitations such as the instability of GANs and the restricted representation ability of VAEs. Such limitations hinder the accurate modeling of the complex user interaction generation procedure, such as noisy interactions caused by various interference factors. In light of the impressive advantages of Diffusion Models (DMs) over traditional generative models in image synthesis, we propose a novel Diffusion Recommender Model (named DiffRec) to learn the generative process in a denoising manner. To retain personalized information in user interactions, DiffRec reduces the added noises and avoids corrupting users’ interactions into pure noises like in image synthesis. In addition, we extend traditional DMs to tackle the unique challenges in recommendation: high resource costs for large-scale item prediction and temporal shifts of user preference. To this end, we propose two extensions of DiffRec: L-DiffRec clusters items for dimension compression and conducts the diffusion processes in the latent space; and T-DiffRec reweights user interactions based on the interaction timestamps to encode temporal information. We conduct extensive experiments on three datasets under multiple settings (e.g., clean training, noisy training, and temporal training). The empirical results validate the superiority of DiffRec with two extensions over competitive baselines.
Running with RecBole¶
Model Hyper-Parameters:
noise_schedule (str)
: The schedule for noise generation: ['linear', 'linear-var', 'cosine', 'binomial']. Defaults to 'linear'.
noise_scale (float)
: The scale of the generated noise. Defaults to 0.001.
noise_min (float)
: Lower bound of the generated noise. Defaults to 0.0005.
noise_max (float)
: Upper bound of the generated noise. Defaults to 0.005.
sampling_noise (bool)
: Whether to add noise when sampling during inference. Defaults to False.
sampling_steps (int)
: Steps of the forward process during inference. Defaults to 0.
reweight (bool)
: Whether to assign different weights to different timesteps. Defaults to True.
mean_type (str)
: Mean type for diffusion: ['x0', 'eps']. Defaults to 'x0'.
steps (int)
: Number of diffusion steps. Defaults to 5.
history_num_per_term (int)
: The number of history loss values kept per timestep for computing the loss weight. Defaults to 10.
beta_fixed (bool)
: Whether to fix the variance of the first step to prevent overfitting. Defaults to True.
dims_dnn (list of int)
: Hidden dimensions of the DNN. Defaults to [300].
embedding_size (int)
: Timestep embedding size. Defaults to 10.
mlp_act_func (str)
: Activation function of the MLP. Defaults to 'tanh'.
time-aware (bool)
: Whether to use T-DiffRec. Defaults to False.
w_max (int)
: Upper bound of the time-aware interaction weight. Defaults to 1.
w_min (float)
: Lower bound of the time-aware interaction weight. Defaults to 0.1.
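Under the 'linear' schedule, the per-step variances are derived from noise_scale, noise_min, and noise_max. A minimal sketch, assuming the variances are spaced evenly between noise_scale * noise_min and noise_scale * noise_max (this mirrors a common DiffRec implementation choice; verify against the actual source):

```python
def linear_betas(steps, noise_scale, noise_min, noise_max):
    """Sketch of a 'linear' noise schedule: `steps` variance values spaced
    evenly between noise_scale * noise_min and noise_scale * noise_max.
    (Assumed behavior for illustration, not the exact RecBole code.)"""
    start, end = noise_scale * noise_min, noise_scale * noise_max
    if steps == 1:
        return [start]
    step = (end - start) / (steps - 1)
    return [start + i * step for i in range(steps)]

# With the documented defaults (steps=5, noise_scale=0.001,
# noise_min=0.0005, noise_max=0.005):
betas = linear_betas(5, 0.001, 0.0005, 0.005)
```

Note that noise_scale acts as a global multiplier, which is how DiffRec keeps the added noise small enough to retain personalized information instead of corrupting interactions into pure noise.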
A Running Example:
Write the following code into a Python file, such as run.py:
from recbole.quick_start import run_recbole
run_recbole(model='DiffRec', dataset='ml-100k')
And then:
python run.py
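The hyper-parameters above can also be overridden through a config file passed via the config_file_list argument of run_recbole. A hypothetical diffrec.yaml (the file name is illustrative; the parameter names are taken from the list above) might look like:

```yaml
# diffrec.yaml -- example DiffRec overrides (values are illustrative)
noise_schedule: 'linear'
noise_scale: 0.001
noise_min: 0.0005
noise_max: 0.005
steps: 5
dims_dnn: [300]
time-aware: False
```

It could then be used as run_recbole(model='DiffRec', dataset='ml-100k', config_file_list=['diffrec.yaml']).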
Notes:
w_max and w_min are unused when time-aware is False.
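When time-aware is True, T-DiffRec reweights each user's interactions by recency, bounded by w_min and w_max. As a minimal sketch, assuming a simple linear schedule over the chronologically sorted interactions (the exact scheme in the RecBole implementation may differ):

```python
def time_aware_weights(num_interactions, w_min=0.1, w_max=1.0):
    """Assign each of a user's chronologically ordered interactions a weight
    increasing linearly from w_min (oldest) to w_max (most recent).
    Sketch of T-DiffRec-style reweighting, not the exact RecBole code."""
    if num_interactions == 1:
        return [w_max]
    span = w_max - w_min
    return [w_min + span * i / (num_interactions - 1)
            for i in range(num_interactions)]

weights = time_aware_weights(5)  # -> [0.1, 0.325, 0.55, 0.775, 1.0]
```

This matches the paper's intent of encoding temporal shifts of user preference: recent interactions contribute more to the diffusion model's reconstruction target than stale ones.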
Tuning Hyper Parameters¶
If you want to use HyperTuning to tune the hyper-parameters of this model, you can copy the following settings into a file named hyper.test.
learning_rate choice [1e-3,1e-4,1e-5]
dims_dnn choice ['[300]','[200,600]','[1000]']
steps choice [2,5,10,50]
noise_scale choice [0,1e-5,1e-4,1e-3,1e-2,1e-1]
noise_min choice [5e-4,1e-3,5e-3]
noise_max choice [5e-3,1e-2]
w_min choice [0.1,0.2,0.3]
Note that these hyper-parameter ranges are provided for reference only; we cannot guarantee that they are optimal for this model.
Then, with the source code of RecBole (you can download it from GitHub), you can run run_hyper.py for tuning:
python run_hyper.py --model=[model_name] --dataset=[dataset_name] --config_files=[config_files_path] --params_file=hyper.test
For more details about Parameter Tuning, refer to Parameter Tuning.
If you want to change parameters, dataset or evaluation settings, take a look at