RecBole Hyper-parameters | Knowledge Recommenders

Hyper-parameter search results of knowledge recommenders.


Hyper-parameter Search Results (knowledge)

Datasets: MovieLens-1m, Amazon-Books, Lastfm-track

Notes: The hyper-parameter search ranges in the table are for reference only. You can adjust a range to the actual situation of your dataset, for example narrowing it on large datasets to reduce time consumption. The orange bold text in the table marks the recommended value within each search range, and the symbol "\" indicates that the model runs out of memory on a GPU with 12 GB of memory, or that the search takes too long to complete.
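As a sketch of how a search range like those below is fed to RecBole: the repository ships a `run_hyper.py` script that reads a params file with one `<name> choice <list>` line per hyper-parameter. The snippet below generates such a file for the CFKG / MovieLens-1m ranges; the file name `cfkg.hyper` is illustrative, and the line format is the one documented for `run_hyper.py`.

```python
# Sketch: build a RecBole-style params file from a hyper-parameter search space.
# Assumptions: the "<name> choice <list>" line format read by RecBole's
# run_hyper.py; the file name "cfkg.hyper" is illustrative.
space = {
    "learning_rate": [1e-2, 5e-3, 1e-3, 5e-4, 1e-4],
    "loss_function": ["inner_product", "transe"],
    "margin": [0.5, 1.0, 2.0],
}

def fmt(value):
    # Quote strings so they survive eval-style parsing of the list.
    return repr(value) if isinstance(value, str) else str(value)

with open("cfkg.hyper", "w") as f:
    for name, values in space.items():
        # One candidate list per line, without spaces inside the brackets.
        f.write(f"{name} choice [{','.join(fmt(v) for v in values)}]\n")

print(open("cfkg.hyper").read())
```

The search itself would then be launched with something like `python run_hyper.py --model=CFKG --dataset=ml-1m --params_file=cfkg.hyper` (flag names as in the RecBole repository's `run_hyper.py`).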

Search ranges per model and dataset (MovieLens-1m, Amazon-Books, Lastfm-track):
CFKG
  MovieLens-1m:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    loss_function in ['inner_product', 'transe']
    margin in [0.5,1.0,2.0]
  Amazon-Books:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    loss_function in ['inner_product', 'transe']
    margin in [0.5,1.0,2.0]
  Lastfm-track:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    loss_function in ['inner_product', 'transe']
    margin in [0.5,1.0,2.0]
CKE
  MovieLens-1m:
    learning_rate in [5e-5,1e-4,5e-4,7e-4,1e-3]
    kg_embedding_size in [16,32,64,128]
    reg_weights in [[0.1,0.1],[0.01,0.01],[0.001,0.001]]
  Amazon-Books:
    learning_rate in [5e-5,1e-4,5e-4,7e-4,1e-3]
    kg_embedding_size in [16,32,64,128]
    reg_weights in [[0.1,0.1],[0.01,0.01],[0.001,0.001]]
  Lastfm-track:
    learning_rate in [5e-5,1e-4,5e-4,7e-4,1e-3]
    kg_embedding_size in [16,32,64,128]
    reg_weights in [[0.1,0.1],[0.01,0.01],[0.001,0.001]]
KGAT
  MovieLens-1m:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    layers in ['[64,32,16]','[64,64,64]','[128,64,32]']
    reg_weight in [1e-4,5e-5,1e-5,5e-6,1e-6]
    mess_dropout in [0.1,0.2,0.3,0.4,0.5]
  Amazon-Books:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    layers in ['[64,32,16]','[64,64,64]','[128,64,32]']
    reg_weight in [1e-4,5e-5,1e-5,5e-6,1e-6]
    mess_dropout in [0.1,0.2,0.3,0.4,0.5]
  Lastfm-track:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    layers in ['[64,32,16]','[64,64,64]','[128,64,32]']
    reg_weight in [1e-4,5e-5,1e-5,5e-6,1e-6]
    mess_dropout in [0.1,0.2,0.3,0.4,0.5]
KGCN
  MovieLens-1m:
    learning_rate in [0.002,0.001,0.0005]
    n_iter in [1,2]
    aggregator in ['sum','concat','neighbor']
    l2_weight in [1e-3,1e-5,1e-7]
    neighbor_sample_size in [4]
  Amazon-Books:
    learning_rate in [0.002,0.001,0.0005]
    n_iter in [1,2]
    aggregator in ['sum','concat','neighbor']
    l2_weight in [1e-3,1e-5,1e-7]
    neighbor_sample_size in [4]
  Lastfm-track:
    learning_rate in [0.002,0.001,0.0005]
    n_iter in [1,2]
    aggregator in ['sum','concat','neighbor']
    l2_weight in [1e-3,1e-5,1e-7]
    neighbor_sample_size in [4]
KGIN
  MovieLens-1m:
    learning_rate in [1e-4,1e-3,5e-3]
    node_dropout_rate in [0.1,0.3,0.5]
    mess_dropout_rate in [0.0,0.1]
    context_hops in [2,3]
    n_factors in [4,8]
    ind in ['cosine','distance']
  Amazon-Books:
    learning_rate in [1e-4,1e-3,5e-3]
    node_dropout_rate in [0.1,0.3,0.5]
    mess_dropout_rate in [0.0,0.1]
    context_hops in [2,3]
    n_factors in [4,8]
    ind in ['cosine','distance']
  Lastfm-track:
    learning_rate in [1e-4,1e-3]
    node_dropout_rate in [0.1,0.3,0.5]
    mess_dropout_rate in [0.1]
    context_hops in [3]
    n_factors in [8]
    ind in ['cosine','distance']
KGNNLS
  MovieLens-1m:
    learning_rate in [0.002,0.001,0.0005]
    n_iter in [1,2]
    aggregator in ['sum']
    l2_weight in [1e-3,1e-5]
    neighbor_sample_size in [4]
    ls_weight in [1,0.5,0.1,0.01,0.001]
  Amazon-Books:
    learning_rate in [0.002,0.001,0.0005]
    n_iter in [1,2]
    aggregator in ['sum']
    l2_weight in [1e-3,1e-5]
    neighbor_sample_size in [4]
    ls_weight in [1,0.5,0.1,0.01,0.001]
  Lastfm-track:
    learning_rate in [0.002,0.001,0.0005]
    n_iter in [1,2]
    aggregator in ['sum']
    l2_weight in [1e-3,1e-5]
    neighbor_sample_size in [4]
    ls_weight in [1,0.5,0.1,0.01,0.001]
KTUP
  MovieLens-1m:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    L1_flag in [True, False]
    use_st_gumbel in [True, False]
    train_rec_step in [8,10]
    train_kg_step in [0,1,2,3,4,5]
  Amazon-Books:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    L1_flag in [True, False]
    use_st_gumbel in [True, False]
    train_rec_step in [8,10]
    train_kg_step in [0,1,2,3,4,5]
  Lastfm-track:
    learning_rate in [1e-2,5e-3,1e-3,5e-4,1e-4]
    L1_flag in [True, False]
    use_st_gumbel in [True, False]
    train_rec_step in [8,10]
    train_kg_step in [0,1,2,3,4,5]
MCCLK
  MovieLens-1m:
    learning_rate in [1e-4,1e-3,5e-3]
    node_dropout_rate in [0.1,0.3,0.5]
    mess_dropout_rate in [0.0,0.1]
    build_graph_separately in [True, False]
    loss_type in ['BPR']
  Amazon-Books:
    learning_rate in [1e-4,1e-3,5e-3]
    node_dropout_rate in [0.1,0.3,0.5]
    mess_dropout_rate in [0.0,0.1]
    build_graph_separately in [True, False]
    loss_type in ['BPR']
  Lastfm-track:
    \
MKR
  MovieLens-1m:
    learning_rate in [5e-5,1e-4,1e-3,5e-3,1e-2]
    low_layers_num in [1,2,3]
    high_layers_num in [1,2]
    l2_weight in [1e-6,1e-4]
    kg_embedding_size in [16,32,64]
  Amazon-Books:
    learning_rate in [5e-5,1e-4,1e-3,5e-3,1e-2]
    low_layers_num in [1,2,3]
    high_layers_num in [1,2]
    l2_weight in [1e-6,1e-4]
    kg_embedding_size in [16,32,64]
  Lastfm-track:
    learning_rate in [5e-5,1e-4,1e-3,5e-3,1e-2]
    low_layers_num in [1,2,3]
    high_layers_num in [1,2]
    l2_weight in [1e-6,1e-4]
    kg_embedding_size in [16,32,64]
RippleNet
  MovieLens-1m:
    learning_rate in [0.001,0.005,0.01,0.05]
    n_memory in [4,8,16,32]
    training_neg_sample_num in [1,2,5,10]
  Amazon-Books:
    learning_rate in [0.001,0.005,0.01,0.05]
    n_memory in [4,8,16,32]
    training_neg_sample_num in [1,2,5,10]
  Lastfm-track:
    learning_rate in [0.001,0.005,0.01,0.05]
    n_memory in [4,8,16,32]
    training_neg_sample_num in [1,2,5,10]
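Once a value has been picked from a range, it can be fixed in an ordinary RecBole YAML config file rather than a search file. A minimal sketch for CFKG; the file name and the chosen values below are illustrative picks from the ranges above, not the recommended values marked in the rendered table.

```yaml
# cfkg.yaml -- illustrative fixed values drawn from the CFKG search ranges
learning_rate: 0.001
loss_function: transe
margin: 1.0
```

Such a file would be passed via `--config_files=cfkg.yaml` when running RecBole.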