V2N Service Scaling with Deep Reinforcement Learning

Open Access
Publication date 2023
Host editors
  • K. Akkaya
  • O. Festor
  • C. Fung
  • M.A. Rahman
  • L. Zambenedetti Granville
  • C.R. Paula dos Santos
Book title Proceedings of IEEE/IFIP Network Operations and Management Symposium 2023
Book subtitle 8-12 May 2023 : NOMS
ISBN
  • 9781665477178
ISBN (electronic)
  • 9781665477161
Event 36th IEEE/IFIP Network Operations and Management Symposium, NOMS 2023
Pages (from-to) 725-729
Publisher Piscataway, NJ: IEEE
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract

The fifth generation (5G) of wireless networks sets out to meet the stringent requirements of vehicular use cases. Edge computing resources can aid in this direction by moving processing closer to end-users, reducing latency. However, given the stochastic nature of traffic loads and the availability of physical resources, appropriate auto-scaling mechanisms need to be employed to support cost-efficient and performant services. To this end, we employ Deep Reinforcement Learning (DRL) for vertical scaling in Edge computing to support Vehicle-to-Network (V2N) communications. We address the problem using Deep Deterministic Policy Gradient (DDPG). As DDPG is a model-free, off-policy algorithm for learning continuous actions, we introduce a discretization approach to support discrete scaling actions, thereby addressing the scalability problems inherent to high-dimensional discrete action spaces. Employing a real-world vehicular trace data set, we show that DDPG outperforms existing solutions, reducing the average number of active CPUs by at least 23% while increasing the long-term reward by 24%.
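The discretization step described in the abstract can be illustrated with a minimal sketch. The paper's exact mapping is not reproduced here; this assumes one common approach in which the DDPG actor outputs a continuous value in [-1, 1] that is rescaled and rounded to an integer CPU allocation (the bounds `min_cpus` and `max_cpus` are illustrative):

```python
import numpy as np

def discretize_action(a: float, min_cpus: int = 1, max_cpus: int = 16) -> int:
    """Map a continuous DDPG action a in [-1, 1] to an integer CPU count.

    Illustrative only: the actual action mapping used in the paper may differ.
    """
    a = float(np.clip(a, -1.0, 1.0))
    # Linearly rescale [-1, 1] -> [min_cpus, max_cpus], then round to an integer.
    scaled = min_cpus + (a + 1.0) / 2.0 * (max_cpus - min_cpus)
    return int(round(scaled))

# Actions near -1 keep few CPUs; actions near +1 scale toward the maximum.
print(discretize_action(-1.0))  # 1
print(discretize_action(1.0))   # 16
```

Keeping the actor's output continuous and discretizing afterwards is what lets DDPG sidestep the combinatorial growth of an explicitly discrete action space.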

Document type Conference contribution
Language English
Related dataset Dataset for Elastic Resource Scaling
Published at https://doi.org/10.1109/NOMS56928.2023.10154358
Other links https://www.proceedings.com/69518.html https://www.scopus.com/pages/publications/85162766363