A Distributed Reinforcement Learning Framework for Wind Farm Energy Capture Maximization

Stanfel, Paul A.
Abstract
In this thesis, we present a distributed reinforcement learning framework for wind farm energy capture maximization using yaw control, also known as wake steering. Specifically, we propose a variant of the Q-Learning algorithm with a reward signal based on the aggregated power levels of nearby turbines to achieve non-greedy turbine agent behavior. This algorithm establishes a framework for a closed-loop wind farm control approach that uses a simple control-oriented model to develop an approximation of the optimal control actions, and then adapts to the environment, combining model-based and model-free, data-driven concepts to optimize wind farm energy production. We compare several implementations of the Q-Learning algorithm to identify the most computationally efficient and consistent way to train the agents for optimal operation in the field, and we adapt the algorithm to operate under turbulent wind inputs. Using these concepts, we develop a complete RL framework for energy maximization. Additionally, we describe our modifications to a widely used steady-state wind farm simulation package to approximate dynamic wake propagation effects, and test the RL framework using this dynamic simulation package.
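The core idea of the aggregated-reward Q-Learning variant described above can be illustrated with a minimal sketch. The two-turbine power model, the yaw-loss and wake-deflection coefficients, and all function names below are hypothetical simplifications for illustration, not the thesis's actual models or parameters: an upstream turbine agent learns a yaw offset, and its reward is the combined power of itself and its downstream neighbor, so a self-sacrificing (non-greedy) yaw offset can win.

```python
import random

ACTIONS = [-10, 0, 10]             # candidate yaw offsets in degrees (illustrative)
ALPHA, EPS = 0.1, 0.2              # learning rate and exploration rate (assumed values)

def farm_power(yaw_upstream):
    """Toy two-turbine model (hypothetical): yawing the upstream turbine
    reduces its own power slightly but deflects its wake, increasing the
    downstream turbine's power."""
    p_up = 1.0 - 0.001 * yaw_upstream**2        # quadratic yaw loss on upstream turbine
    p_down = 0.6 + 0.02 * abs(yaw_upstream)     # wake-deflection gain for downstream turbine
    return p_up, p_down

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state Q-table for the upstream agent
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.choice(ACTIONS) if rng.random() < EPS else max(q, key=q.get)
        p_up, p_down = farm_power(a)
        reward = p_up + p_down     # aggregated reward: own power plus neighbor's power
        q[a] += ALPHA * (reward - q[a])  # bandit-style Q update (no successor state)
    return q

q = train()
best = max(q, key=q.get)           # learned yaw offset; nonzero here, since the
                                   # aggregated reward favors wake steering
```

With a purely greedy reward (the turbine's own power alone), the agent would learn a zero yaw offset; the aggregated reward is what drives the wake-steering behavior.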
Rights
Copyright of the original work is retained by the author.