Li, Donghe and Yang, Qingyu and Ma, Linyue and Wang, Yiran and Zhang, Yang and Liao, Xiao (2023) An electrical vehicle-assisted demand response management system: A reinforcement learning method. Frontiers in Energy Research, 10. ISSN 2296-598X
pubmed-zip/versions/1/package-entries/fenrg-10-1071948/fenrg-10-1071948.pdf - Published Version
Abstract
With the continuous progress of urbanization, determining the charging and discharging strategy for randomly parked electric vehicles to help the peak load shifting without affecting users’ travel is a key problem. This paper design a reinforcement learning-based method for the electric vehicle-assisted demand response management system. Specifically, we formalize the charging and discharging sequential decision problem of the parking lot into the Markov process, in which the state space is composed of the state of parking spaces, electric vehicles, and the total load. The charging and discharging decision of each parking space acts as the action space. The reward comprises the penalty term that guarantees the user’s travel and the sliding average value of the load representing peak load shifting. After that, we use a Deep Q-Network (DQN)-based reinforcement learning architecture to solve this problem. Finally, we conduct a comprehensive evaluation with real-world power usage data. The results show that our proposed method will reduce the peak load by 10% without affecting the travel plan of all electric vehicles. Compared with random charging and discharging scenarios, we have better performance in terms of state-of-charge (SoC) achievement rate and peak load shifting effect.
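The abstract describes a reward with two components: a penalty term enforcing each user's travel requirement (a target SoC at departure) and a sliding-average load term representing peak load shifting. A minimal sketch of such a reward function is shown below; the exact weights, window length, and functional form are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def reward(total_load, load_window, soc, soc_target, departing, penalty=10.0):
    """Illustrative reward for one decision step.

    total_load  -- current total load after the charging/discharging actions
    load_window -- recent total-load values used for the sliding average
    soc         -- current state of charge of a departing EV (0..1)
    soc_target  -- SoC the user requires at departure (0..1)
    departing   -- whether the EV leaves the parking lot at this step
    penalty     -- weight of the travel-guarantee penalty (assumed value)
    """
    # Peak-shifting term: penalize deviation of the current load
    # from its sliding average, so a flatter load profile scores higher.
    shifting_term = -abs(total_load - np.mean(load_window))
    # Travel-guarantee term: penalize an EV departing below its target SoC.
    travel_term = -penalty * max(0.0, soc_target - soc) if departing else 0.0
    return shifting_term + travel_term
```

For example, an EV departing with its target SoC met while the load sits exactly on its sliding average incurs no penalty, whereas a load spike or an under-charged departure both reduce the reward.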
| Item Type: | Article |
| --- | --- |
| Subjects: | Article Archives > Energy |
| Depositing User: | Unnamed user with email support@articlearchives.org |
| Date Deposited: | 01 May 2023 05:57 |
| Last Modified: | 25 Jul 2024 07:33 |
| URI: | http://archive.paparesearch.co.in/id/eprint/1186 |