Local demand management of charging stations using vehicle-to-vehicle service: A welfare maximization-based soft actor-critic model
Result identifiers
Result code in IS VaVaI
RIV/62690094:18470/23:50020746 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F62690094%3A18470%2F23%3A50020746)
Result on the web
https://www.sciencedirect.com/science/article/abs/pii/S2590116823000553
DOI - Digital Object Identifier
10.1016/j.etran.2023.100280 (http://dx.doi.org/10.1016/j.etran.2023.100280)
Alternative languages
Result language
English
Title in the original language
Local demand management of charging stations using vehicle-to-vehicle service: A welfare maximization-based soft actor-critic model
Description in the original language
Transportation electrification has the potential to reduce carbon emissions from the transport sector. However, increased penetration of electric vehicles (EVs) can overload distribution systems, which becomes prominent at locations with many EV chargers and heavily used charging stations. Therefore, this study proposes a welfare maximization-based soft actor-critic (SAC) model to mitigate transformer overload in distribution systems caused by high EV penetration. The demand of each charging station is managed locally, in two steps, to avoid network overload during peak load hours. First, a welfare maximization-based optimization model is developed to maximize the welfare of EV owners through vehicle-to-vehicle (V2V) service; in this step, the sensitivity of EV owners to energy level, battery degradation, and the incentives provided by fleet operators is considered. Then, a deep reinforcement learning method (soft actor-critic) is trained by incorporating the welfare value obtained from the welfare maximization model into the reward function; the total power demand at the transformer level and the transformer capacity are also included in the reward function. The agent (fleet operator) learns the optimal pricing strategy for local demand management of EVs by interacting with the environment, and each electric vehicle responds to the action (price) by deciding how much power it is willing to charge or discharge (V2V) during that interval. Training is performed offline, and the trained model can be used for real-time demand management of different types of charging stations. Simulation results show that the proposed method can successfully manage the demand of different charging stations via V2V without violating the transformer capacity limits.
Title in English
Local demand management of charging stations using vehicle-to-vehicle service: A welfare maximization-based soft actor-critic model
Description in English
Transportation electrification has the potential to reduce carbon emissions from the transport sector. However, increased penetration of electric vehicles (EVs) can overload distribution systems, which becomes prominent at locations with many EV chargers and heavily used charging stations. Therefore, this study proposes a welfare maximization-based soft actor-critic (SAC) model to mitigate transformer overload in distribution systems caused by high EV penetration. The demand of each charging station is managed locally, in two steps, to avoid network overload during peak load hours. First, a welfare maximization-based optimization model is developed to maximize the welfare of EV owners through vehicle-to-vehicle (V2V) service; in this step, the sensitivity of EV owners to energy level, battery degradation, and the incentives provided by fleet operators is considered. Then, a deep reinforcement learning method (soft actor-critic) is trained by incorporating the welfare value obtained from the welfare maximization model into the reward function; the total power demand at the transformer level and the transformer capacity are also included in the reward function. The agent (fleet operator) learns the optimal pricing strategy for local demand management of EVs by interacting with the environment, and each electric vehicle responds to the action (price) by deciding how much power it is willing to charge or discharge (V2V) during that interval. Training is performed offline, and the trained model can be used for real-time demand management of different types of charging stations. Simulation results show that the proposed method can successfully manage the demand of different charging stations via V2V without violating the transformer capacity limits.
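To make the reward structure described in the abstract concrete, the following Python snippet is a minimal sketch, not the authors' implementation: a hypothetical per-interval reward for the SAC agent in which the operator's posted price elicits charge/discharge decisions from the EVs, and the owners' welfare is penalized whenever the aggregate transformer-level demand exceeds the transformer capacity. All names and values (ev_response, welfare_fn, TRANSFORMER_CAPACITY_KW, PENALTY_WEIGHT, the ±11 kW charger limit) are illustrative assumptions, not parameters from the paper.

import numpy as np

TRANSFORMER_CAPACITY_KW = 250.0  # assumed rating of the local distribution transformer
PENALTY_WEIGHT = 10.0            # assumed weight on capacity violations in the reward

def ev_response(price, ev_states):
    # Placeholder for each EV owner's decision: positive power = charging,
    # negative power = discharging to another vehicle (V2V). The response is
    # shaped by the remaining energy need and an owner-specific price sensitivity.
    energy_need_kwh = ev_states[:, 0]
    price_sensitivity = ev_states[:, 1]
    return np.clip(energy_need_kwh - price_sensitivity * price, -11.0, 11.0)

def step_reward(price, ev_states, base_load_kw, welfare_fn):
    # Reward for one control interval: welfare of the EV owners (as produced by
    # the welfare-maximization step) minus a penalty on transformer overload.
    powers_kw = ev_response(price, ev_states)
    total_demand_kw = base_load_kw + powers_kw.sum()
    overload_kw = max(0.0, total_demand_kw - TRANSFORMER_CAPACITY_KW)
    welfare = welfare_fn(price, powers_kw, ev_states)
    return welfare - PENALTY_WEIGHT * overload_kw

Under these assumptions, a standard SAC agent would be trained offline against a charging-station simulator whose step reward is computed this way, and the learned pricing policy could then be applied for real-time demand management as the abstract describes.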
Classification
Type
Jimp - Article in a periodical indexed in the Web of Science database
CEP field
—
OECD FORD field
20201 - Electrical and electronic engineering
Result linkages
Project
—
Linkages
I - Institutional support for the long-term conceptual development of a research organisation
Others
Year of publication
2023
Data confidentiality code
S - Complete and accurate project data are not subject to protection under special legal regulations
Data specific to the result type
Periodical name
ETRANSPORTATION
ISSN
2590-1168
e-ISSN
2590-1168
Periodical volume
18
Issue within the volume
OCTOBER
Publisher country
NL - Netherlands
Number of pages
12
Pages from-to
Article number: 100280
UT WoS code of the article
001072475200001
Result EID in the Scopus database
2-s2.0-85171610377