CRII: CPS: Towards a Model-Based Reinforcement Learning Approach for Safe Operation of Distributed Energy Systems
With the increasing penetration of renewables on the electric grid and the ready availability of real-time data about electricity usage, the electric power grid is becoming a large-scale, complex cyber-physical system (CPS) that must meet consumer demand for electricity throughout the day, every day. Reinforcement learning (RL) algorithms offer an approach to seamlessly integrating distributed energy sources into the legacy electric grid more efficiently, effectively, and affordably. RL also offers significant potential savings in capital investment and labor, and greater resiliency to service disruptions.

This research project develops a framework for model-based online reinforcement learning to address several classes of problems. First, it models the control of energy CPS as finite-horizon RL problems. Second, instead of focusing on asymptotic convergence, the project focuses on optimal finite-time performance. Third, because a simplistic learning algorithm might drive an energy CPS into an unsafe operating region, risking unwanted consequences, the project develops safe RL algorithms that optimize performance while respecting safety constraints. Fourth, the project exploits the physical properties of the energy CPS to avoid the dimensionality problems often associated with RL. Lastly, the project develops sequential algorithms based on a "contextual bandits" approach for learning consumer-specific parameters and adaptively scheduling loads to account for consumer usage patterns.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
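To make the safe-RL idea concrete, one common pattern is to restrict action selection to an estimated safe set at every step. The sketch below is a hypothetical illustration, not the award's actual formulation: `q_values` and `safety_cost` are assumed per-action estimates, and `budget` is an assumed safety threshold.

```python
import numpy as np

def safe_greedy_action(q_values, safety_cost, budget):
    """Pick the highest-value action whose estimated safety cost fits the budget.

    Hypothetical sketch of constrained action selection for safe RL:
    optimize performance (q_values) subject to a safety constraint
    (safety_cost <= budget). All names here are illustrative assumptions.
    """
    # Keep only actions whose estimated safety cost respects the constraint.
    feasible = [a for a in range(len(q_values)) if safety_cost[a] <= budget]
    if not feasible:
        # No action satisfies the constraint: fall back to the least unsafe one.
        return int(np.argmin(safety_cost))
    # Among feasible actions, act greedily on estimated value.
    return max(feasible, key=lambda a: q_values[a])
```

In a real energy CPS the safety costs would come from a learned or physics-based model of grid constraints (e.g. voltage or thermal limits), rather than a fixed lookup.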
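The "contextual bandits" approach for learning consumer-specific parameters can be sketched with a standard LinUCB-style learner: each arm (e.g. a candidate schedule) keeps a ridge-regression estimate of expected reward as a linear function of the context (e.g. consumer usage features), and actions are chosen by an upper confidence bound. This is a generic illustration under assumed linear-reward structure, not the project's specific algorithm.

```python
import numpy as np

class LinUCB:
    """Per-arm linear UCB contextual bandit (illustrative sketch)."""

    def __init__(self, n_arms, dim, alpha=0.5):
        self.alpha = alpha                               # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward sums

    def select(self, context):
        """Return the arm with the largest upper-confidence-bound score."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # ridge estimate of rewards
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(context @ theta + bonus)       # estimate + exploration bonus
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        """Fold the observed (context, reward) pair into the chosen arm's model."""
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

In the scheduling setting, `context` would encode the consumer's recent usage, and the reward would reflect how well the chosen schedule matched actual demand.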