The automotive sector is a cornerstone of the national economy, and advances in automotive technology have brought widespread comfort and prosperity. However, the widespread use of automobiles also produces substantial emissions of harmful gases, polluting the environment and depleting the planet's finite petroleum reserves. Addressing the energy and environmental challenges posed by the automotive industry while sustaining its growth is a pressing scientific and technological problem for the industry and for society at large. Consequently, developing new energy vehicles has become imperative for automakers. Among the three prevailing technological pathways for new energy vehicles, hybrid technology is the most mature and offers the best overall performance, and it has therefore been favored by the automotive industry. Control algorithms and real-time simulation of hybrid electric vehicles (HEVs) have long been central topics of automotive research, yet the most widely used control strategies are still based on expert experience. Because an HEV has a complex structure and its operating conditions are usually unknown in advance, it is difficult to design an adaptive, self-updating control algorithm for it. This paper therefore studies a control algorithm for a parallel hybrid electric vehicle based on deep reinforcement learning, together with real-time simulation tests. Comparative experiments were carried out under the NEDC driving cycle.
The experimental results show that, given sufficient learning and training, the control algorithm based on deep reinforcement learning achieves better energy-saving and control performance than the other two algorithms, and its adaptive, self-learning characteristics give it the potential to serve as a general-purpose algorithm.
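To make the idea concrete, the sketch below shows how a reinforcement-learning energy-management policy for a parallel HEV can be structured: the agent observes battery state of charge (SOC) and driver power demand, chooses an engine/motor torque split, and is rewarded for low fuel use while keeping SOC charge-sustaining. This is a minimal tabular Q-learning illustration under toy assumptions; the plant model, discretization, reward weights, and random demand trace (a stand-in for the NEDC cycle) are all illustrative inventions, not the paper's actual formulation, which uses deep networks rather than a table.

```python
import random

SOC_BINS = 10          # discretized battery state of charge
DEMAND_BINS = 5        # discretized driver power demand
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]  # engine's share of demanded power

def step(soc, demand, engine_share):
    """Toy plant model: the engine burns fuel; the motor drains the battery."""
    engine_power = demand * engine_share
    motor_power = demand - engine_power
    fuel = 0.2 * engine_power + 0.05 * (engine_power > 0)   # small idle penalty
    new_soc = min(max(soc - 0.01 * motor_power, 0.0), 1.0)
    # Reward: minimize fuel while holding SOC near 0.6 (charge-sustaining)
    reward = -fuel - 2.0 * abs(new_soc - 0.6)
    return new_soc, reward

def discretise(soc, demand):
    s = min(int(soc * SOC_BINS), SOC_BINS - 1)
    d = min(int(demand), DEMAND_BINS - 1)
    return s * DEMAND_BINS + d

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(SOC_BINS * DEMAND_BINS)]
    for _ in range(episodes):
        soc = 0.6
        for _ in range(50):                                 # one short drive cycle
            demand = rng.uniform(0.0, DEMAND_BINS - 1)      # stand-in for a cycle trace
            s = discretise(soc, demand)
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: q[s][i]))
            soc, r = step(soc, demand, ACTIONS[a])
            s2 = discretise(soc, demand)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q
```

A deep reinforcement-learning controller replaces the Q-table with a neural network approximator, which is what lets the policy generalize across continuous SOC and demand values and adapt online, the self-learning property the abstract highlights.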