

    A distributed reinforcement learning framework for wind farm energy capture maximization

    Name: Stanfel_mines_0052N_12100.pdf
    Size: 6.040 MB
    Format: PDF

    Name: supplemental.zip
    Size: 136.4 KB
    Format: Unknown
    Author
    Stanfel, Paul A.
    Advisor
    Johnson, Kathryn E.
    Date issued
    2020
    Keywords
    reinforcement learning
    distributed optimization
    wind farms
    
    URI
    https://hdl.handle.net/11124/176319
    Abstract
    In this thesis, we present a distributed reinforcement learning framework for wind farm energy capture maximization using yaw control, also known as wake steering. Specifically, we propose a variant of the Q-Learning algorithm with a reward signal based on the aggregated power levels of nearby turbines to achieve non-greedy turbine agent behavior. This algorithm establishes a framework for a closed-loop wind farm control approach that uses a simple control-oriented model to develop an approximation of the optimal control actions and then adapts to the environment, combining model-based and model-free, data-driven concepts to optimize wind farm energy production. We compare various implementations of the Q-Learning algorithm to determine the most computationally efficient and consistent method of training the agents to operate optimally in the field, and we adapt the algorithm to operate in a turbulent wind input environment. Using these concepts, we develop a complete RL framework for energy maximization. Additionally, we describe our modifications to a widely used steady-state wind farm simulation package to approximate dynamic wake propagation effects, and we test the RL framework using this dynamic simulation package.
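
    To make the idea concrete, the sketch below shows a minimal tabular Q-learning turbine agent whose reward aggregates its own power with that of its downstream neighbors, which is the mechanism the abstract credits with producing non-greedy behavior. It is an illustrative sketch only: the names (TurbineAgent, aggregated_reward, toy_powers, YAW_ACTIONS), the yaw increments, state discretization, hyperparameters, and toy power model are assumptions made for this example and are not taken from the thesis.

    # Illustrative sketch of a distributed Q-learning turbine agent.
    # All constants and the toy power model below are assumptions for
    # demonstration; they do not come from the thesis.
    import random
    from collections import defaultdict

    YAW_ACTIONS = [-2.0, 0.0, 2.0]  # assumed yaw offset increments (degrees)

    class TurbineAgent:
        def __init__(self, turbine_id, neighbors, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.id = turbine_id
            self.neighbors = neighbors      # downstream turbines included in the reward
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.q = defaultdict(lambda: [0.0] * len(YAW_ACTIONS))
            self.yaw = 0.0                  # current yaw offset (degrees)

        def state(self, wind_dir):
            # Discretize the local state: own yaw offset and a wind-direction bin.
            return (round(self.yaw), int(wind_dir) // 5)

        def choose_action(self, s):
            # Epsilon-greedy selection over the discrete yaw increments.
            if random.random() < self.epsilon:
                return random.randrange(len(YAW_ACTIONS))
            return max(range(len(YAW_ACTIONS)), key=lambda a: self.q[s][a])

        def update(self, s, a, reward, s_next):
            # Standard tabular Q-learning update.
            target = reward + self.gamma * max(self.q[s_next])
            self.q[s][a] += self.alpha * (target - self.q[s][a])

    def aggregated_reward(agent, powers):
        # Non-greedy reward: the agent's own power plus its neighbors' power,
        # so yaw offsets that sacrifice local power but raise group power
        # are still reinforced.
        return powers[agent.id] + sum(powers[n] for n in agent.neighbors)

    # Toy two-turbine example with a crude stand-in for a wake model: yawing
    # the upstream turbine costs it a little power but deflects its wake,
    # boosting the downstream turbine.
    def toy_powers(yaws):
        upstream = 1.0 - 0.005 * abs(yaws[0])
        downstream = 0.6 + 0.02 * abs(yaws[0])
        return {0: upstream, 1: downstream}

    agents = [TurbineAgent(0, neighbors=[1]), TurbineAgent(1, neighbors=[])]
    for step in range(2000):
        wind_dir, yaws, moves = 270.0, {}, []
        for ag in agents:
            s = ag.state(wind_dir)
            a = ag.choose_action(s)
            ag.yaw = max(-25.0, min(25.0, ag.yaw + YAW_ACTIONS[a]))
            yaws[ag.id] = ag.yaw
            moves.append((ag, s, a))
        powers = toy_powers(yaws)
        for ag, s, a in moves:
            ag.update(s, a, aggregated_reward(ag, powers), ag.state(wind_dir))

    In this toy setup the upstream agent learns to hold a nonzero yaw offset because the aggregated reward outweighs its local power loss; in the thesis the power signals would instead come from the modified wind farm simulation, with each turbine agent running in a distributed fashion.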
    Rights
    Copyright of the original work is retained by the author.
    Collections
    2020 - Mines Theses & Dissertations
