FedFast: Going Beyond Average for Faster Training of Federated Recommender Systems
Title: FedFast: Going Beyond Average for Faster Training of Federated Recommender Systems
Authors: Muhammad, Khalil; Wang, Qinqin; O'Reilly-Morgan, Diarmuid; Tragos, Elias; Smyth, Barry; Hurley, Neil J.; Geraci, James; Lawlor, Aonghus
Permanent link: http://hdl.handle.net/10197/12120
Date: 27-Aug-2020
Online since: 2021-04-22T15:51:32Z
Abstract: Federated learning (FL) is quickly becoming the de facto standard for the distributed training of deep recommendation models, using on-device user data and reducing server costs. In a typical FL process, a central server tasks end-users with training a shared recommendation model using their local data. The local models are trained over several rounds on the users' devices and the server combines them into a global model, which is sent to the devices for the purpose of providing recommendations. Standard FL approaches use randomly selected users for training at each round, and simply average their local models to compute the global model. The resulting federated recommendation models require significant client effort to train and many communication rounds before they converge to a satisfactory accuracy. Users are left with poor-quality recommendations until the late stages of training. We present a novel technique, FedFast, to accelerate distributed learning which achieves good accuracy for all users very early in the training process. We achieve this by sampling from a diverse set of participating clients in each training round and applying an active aggregation method that propagates the updated model to the other clients. Consequently, with FedFast the users benefit from far lower communication costs and more accurate models that can be consumed anytime during the training process, even at the very early stages.
We demonstrate the efficacy of our approach across a variety of benchmark datasets and in comparison to state-of-the-art recommendation techniques.
Funding Details: Science Foundation Ireland; Insight Research Centre
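The two ideas summarised in the abstract, sampling a diverse set of clients each round and an active aggregation step that pushes the aggregated update to non-selected clients, can be illustrated with a toy sketch. This is not the authors' implementation: the function names, the one-dimensional "models", the greedy clustering stand-in, and the learning rate are all hypothetical, chosen only to contrast the scheme with plain random-selection FedAvg.

```python
import random

def cluster_clients(profiles, k):
    """Group clients by sorting their (scalar) profiles into k contiguous
    buckets -- a hypothetical stand-in for clustering client embeddings."""
    order = sorted(range(len(profiles)), key=lambda i: profiles[i])
    size = len(order) // k
    return [order[i * size:(i + 1) * size] if i < k - 1 else order[i * size:]
            for i in range(k)]

def fedfast_round(global_model, models, optima, clusters, rng, lr=0.5):
    # Diverse sampling: pick one delegate per cluster instead of a
    # uniformly random subset of all clients.
    delegates = [rng.choice(c) for c in clusters]
    # Local training (toy): each delegate moves its copy of the global
    # model toward its own local optimum.
    for d in delegates:
        models[d] = global_model + lr * (optima[d] - global_model)
    # Server step: plain average of the delegates' models.
    new_global = sum(models[d] for d in delegates) / len(delegates)
    # Active aggregation (toy): propagate each delegate's update to the
    # other clients in its cluster, so they benefit without training.
    for c, d in zip(clusters, delegates):
        delta = models[d] - global_model
        for i in c:
            if i != d:
                models[i] = models[i] + delta
    return new_global

rng = random.Random(0)
optima = [rng.uniform(0.0, 1.0) for _ in range(12)]  # per-client targets
models = optima[:]                                   # clients start at their optima
clusters = cluster_clients(optima, k=3)
g = 0.0
for _ in range(5):
    g = fedfast_round(g, models, optima, clusters, rng)
```

In this sketch the propagation step is what lets clients who were never selected still receive a model informed by their cluster's delegate, which is the mechanism the abstract credits for good accuracy early in training.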
Type of material: Conference Publication
Publisher: ACM
Copyright (published version): 2020 the Authors
Keywords: Recommender systems; Federated learning; Active sampling; Faster training; Communication costs
DOI: 10.1145/3394486.3403176
Language: en
Status of Item: Peer reviewed
Is part of: KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
Conference Details: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '20), San Diego, California (held online due to coronavirus outbreak), 23-27 August 2020
This item is made available under a Creative Commons License: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
Appears in Collections: Insight Research Collection