AutoS2AE: Automate to Regularize Sparse Shallow Autoencoders for Recommendation.
Published in Proceedings of the ACM Web Conference 2023 (WWW 2023), pp. 1032–1042, 2023
Embarrassingly shallow autoencoders such as EASE and SLIM are strong recommendation methods for implicit feedback, outperforming competing methods like iALS and VAE-CF. However, EASE suffers from several major shortcomings. First, its training and inference cannot scale with a growing number of items, since it requires storing and inverting a large dense matrix. Second, although its optimization objective, the square loss, admits a closed-form solution, it is not aligned with the recommendation goal of predicting a personalized ranking over a set of items, so its performance is far from optimal with respect to ranking-oriented recommendation metrics. Finally, recommendation accuracy is sensitive to the regularization coefficients, whose optimal values vary considerably across datasets, so fine-tuning these parameters is important yet time-consuming. To improve training and inference efficiency, we propose a Similarity-Structure Aware Shallow Autoencoder built on three similarity structures, namely Co-Occurrence, KNN, and NSW. We then optimize the model with a weighted square loss, which is proven effective for ranking-based recommendation while still admitting closed-form solutions. However, the weights in this loss cannot be learned from the training set and, like the regularization coefficients, strongly affect accuracy. To tune these hyperparameters automatically, we design two validation losses for guidance and update the hyperparameters with their gradients on the validation set. We finally evaluate the proposed method on multiple real-world datasets, showing that it substantially outperforms seven competing baselines, and we verify the effectiveness of each component of the proposed method.
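For background, the sketch below is a minimal NumPy rendering of the EASE closed-form solution (Steck, WWW 2019) whose dense matrix inverse is the scalability bottleneck the abstract describes; the function name and toy data are illustrative, and AutoS2AE itself instead restricts the item-item weights to a sparse similarity structure (Co-Occurrence, KNN, or NSW) and optimizes a weighted square loss, neither of which this sketch implements.

```python
import numpy as np

def ease_closed_form(X, lam=500.0):
    """Dense EASE solution: B = argmin ||X - XB||^2 + lam * ||B||^2, s.t. diag(B) = 0.

    X   : (num_users x num_items) binary implicit-feedback matrix.
    lam : L2 regularization coefficient; as the abstract notes, accuracy is
          sensitive to this value and its best setting varies across datasets.
    """
    G = X.T @ X + lam * np.eye(X.shape[1])  # regularized item-item Gram matrix
    P = np.linalg.inv(G)                    # O(n_items^3) inverse: the scaling bottleneck
    B = -P / np.diag(P)                     # B[i, j] = -P[i, j] / P[j, j]
    np.fill_diagonal(B, 0.0)                # zero diagonal rules out self-recommendation
    return B

# Toy usage: ranking scores are simply X @ B.
X = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1]], dtype=np.float64)
B = ease_closed_form(X, lam=1.0)
scores = X @ B  # rank each user's unseen items by these scores
```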
Recommended citation: Rui Fan, Yuanhao Pu, Jin Chen, Zhihao Zhu, Defu Lian*, and Enhong Chen. AutoS2AE: Automate to Regularize Sparse Shallow Autoencoders for Recommendation. In Proceedings of the ACM Web Conference 2023 (WWW 2023), pp. 1032–1042, Apr. 2023.