Enhancing the aggregate diversity with mutual trust computations for context-aware recommendations
Permanent link:
https://www.ias.ac.in/article/fulltext/sadh/047/0029
Context-aware Recommender Systems (CARS) deal with modeling and predicting user interests and preferences according to contextual information while generating recommendations. In contextual-modeling-based CARS, the context information is used directly in the recommendation function as an explicit predictor. This approach therefore formulates a multidimensional recommendation model and is best realized through Tensor Factorization (TF) based techniques, which efficiently handle the data sparsity problem faced by most traditional RS. However, recent TF-based CARS face issues such as differentiating between relevant and irrelevant context variables, biased recommendations, and the long-tail problem. In this paper, we propose a fusion-based approach for determining the list of most relevant and optimal contexts for two datasets, namely the LDos Comoda and Travel datasets. A mutual trust model that combines user-level and item-level trust is further proposed; it utilizes the concept of trust propagation to calculate the inferred trust between users/items. Finally, a hybrid reranking technique that combines the item popularity and item absolute likeability reranking approaches with the standard ranking technique for generating recommendations is proposed to produce diversified recommendations. Comparative experiments on the LDos Comoda and Travel datasets are conducted, and the results show that the proposed work improves RMSE by 50%, 55%, and 59% compared to MF-based RS, trust-based RS, and context-aware RS, respectively. Also, the proposed reranking technique yields approximately three times more diversified recommendations than the standard ranking approach without a significant loss in precision.
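
To make the reranking idea concrete, below is a minimal Python sketch of a hybrid reranking step of this kind: it starts from the standard rating-based ranking and promotes less popular but well-liked items whose predicted rating stays above a threshold, so precision loss remains small. The function name, the popularity and likeability inputs, the sort-key combination, and the threshold value are illustrative assumptions, not the authors' exact formulation.

# Hedged sketch (not the authors' exact formulation): hybrid reranking that
# promotes long-tail items by combining item popularity and item absolute
# likeability with the standard rating-based ranking. All names, weights,
# and the rating threshold below are illustrative assumptions.

from typing import Dict, List, Tuple

def hybrid_rerank(predicted: Dict[int, float],     # item_id -> predicted rating for the target user
                  popularity: Dict[int, int],      # item_id -> number of known ratings (popularity)
                  likeability: Dict[int, float],   # item_id -> fraction of its ratings at or above a "like" level
                  top_n: int = 10,
                  rating_threshold: float = 3.5) -> List[int]:
    """Return a top-N list that trades a little accuracy for aggregate diversity."""
    # Standard ranking: sort candidate items by predicted rating (descending).
    standard = sorted(predicted, key=lambda i: predicted[i], reverse=True)

    # Only items whose predicted rating stays above the threshold are reranked,
    # which limits the loss in precision.
    safe = [i for i in standard if predicted[i] >= rating_threshold]
    rest = [i for i in standard if predicted[i] < rating_threshold]

    # Hybrid reranking key: prefer less popular (long-tail) items that are
    # nevertheless well liked by the users who did rate them.
    def rerank_key(i: int) -> Tuple[int, float]:
        return (popularity.get(i, 0), -likeability.get(i, 0.0))

    reranked = sorted(safe, key=rerank_key) + rest
    return reranked[:top_n]

In this sketch the standard ranking supplies the candidate pool, while the popularity and absolute-likeability signals reorder only the safely recommendable part of it, which is one common way such hybrid reranking schemes improve aggregate diversity.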
VANDANA PATIL and DEEPAK JAYASWAL