A Distributed Asynchronous Deep Reinforcement Learning Framework for Recommender Systems

DC Field | Value | Language
dc.contributor.author | Shi, Bichen | -
dc.contributor.author | Tragos, Elias | -
dc.contributor.author | Ozsoy, Makbule Gulcin | -
dc.contributor.author | Dong, Ruihai | -
dc.contributor.author | Smyth, Barry | -
dc.contributor.author | Hurley, Neil J. | -
dc.contributor.author | Lawlor, Aonghus | -
dc.date.accessioned | 2021-05-27T11:57:43Z | -
dc.date.available | 2021-05-27T11:57:43Z | -
dc.date.copyright | 2018 Association for Computing Machinery | en_US
dc.date.issued | 2020-09-26 | -
dc.identifier.uri | http://hdl.handle.net/10197/12220 | -
dc.description | The 14th ACM Conference on Recommender Systems (RecSys 2020), Online, 22-26 September 2020 | en_US
dc.description.abstract | In this paper we propose DADRL, a distributed, asynchronous reinforcement learning recommender system that combines ideas from the asynchronous advantage actor-critic model (A3C) and federated learning (FL). The proposed algorithm keeps user preferences and interactions on local devices and uses a combination of on-device, local recommendation models and a complementary global model. The global model is trained only on the loss gradients of the local models, rather than directly on user preference or interaction data. We demonstrate, using well-known datasets and benchmark algorithms, how this approach can deliver performance that is comparable with the current state of the art while enhancing user privacy. | en_US
dc.description.sponsorship | Science Foundation Ireland | en_US
dc.language.iso | en | en_US
dc.publisher | ACM | en_US
dc.rights | © ACM, YYYY. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn | en_US
dc.subject | Novel personal sensing | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | Recommender systems | en_US
dc.subject | Distributed learning | en_US
dc.title | A Distributed Asynchronous Deep Reinforcement Learning Framework for Recommender Systems | en_US
dc.type | Conference Publication | en_US
dc.internal.webversions | https://recsys.acm.org/recsys20/ | -
dc.status | Peer reviewed | en_US
dc.check.date | 2021-04-27 | -
dc.identifier.doi | 10.1145/3383313 | -
dc.neeo.contributor | Shi|Bichen|aut| | -
dc.neeo.contributor | Tragos|Elias|aut| | -
dc.neeo.contributor | Ozsoy|Makbule Gulcin|aut| | -
dc.neeo.contributor | Dong|Ruihai|aut| | -
dc.neeo.contributor | Smyth|Barry|aut| | -
dc.neeo.contributor | Hurley|Neil J.|aut| | -
dc.neeo.contributor | Lawlor|Aonghus|aut| | -
dc.description.othersponsorship | Insight Research Centre | en_US
dc.description.admin | 2020-10-06 JG: PDF replaced with correct version | en_US
dc.description.admin | Check for published version during checkdate report - AC. Can't locate version of record; replacing original DOI (10.1145/1122445.1122456) with the cited conference's DOI, as the original DOI doesn't seem to correspond with this paper. The 2018 copyright date is suspect - JG | en_US
dc.date.updated | 2020-09-03T12:09:16Z | -
dc.identifier.grantid | SFI/12/RC/2289_P2 | -
dc.rights.license | https://creativecommons.org/licenses/by-nc-nd/3.0/ie/ | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
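The abstract describes a setup where private interaction data never leaves the device: each local model computes a loss gradient on its own data, and only those gradients are used to update a shared global model, echoing A3C's asynchronous gradient updates combined with federated learning. The toy sketch below illustrates that data flow only; the class names, the linear "model", and the round-robin schedule standing in for asynchronous arrival are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # toy user/item feature dimension


class GlobalModel:
    """Shared parameters, updated only from gradients sent by devices."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def apply_gradient(self, grad):
        # Gradients arrive one device at a time (asynchronous-style update).
        self.w -= self.lr * grad


class LocalDevice:
    """Holds private interaction data; never shares it directly."""

    def __init__(self, features, ratings):
        self.features = features  # private per-user features
        self.ratings = ratings    # private feedback

    def compute_gradient(self, w):
        # Mean-squared-error loss on the private data; only the gradient
        # (not features or ratings) leaves the device.
        err = self.features @ w - self.ratings
        return self.features.T @ err / len(self.ratings)


# Simulate a few devices whose private data share a ground-truth preference.
true_w = rng.normal(size=DIM)
devices = []
for _ in range(5):
    X = rng.normal(size=(20, DIM))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    devices.append(LocalDevice(X, y))

global_model = GlobalModel(DIM)
for step in range(200):
    device = devices[step % len(devices)]  # round-robin stands in for async arrival
    grad = device.compute_gradient(global_model.w)
    global_model.apply_gradient(grad)
```

After training, `global_model.w` approximates the shared preference vector even though the server only ever saw gradients, which is the privacy-preserving property the abstract claims.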
Appears in Collections: Insight Research Collection
Files in This Item:
File | Size | Format
A Distributed Asynchronous Deep Reinforcement Learning Framework for Recommender Systems.pdf | 471.81 kB | Adobe PDF