
dc.contributor.author: Holm, Marius
dc.date.accessioned: 2020-09-07T23:46:53Z
dc.date.available: 2020-09-07T23:46:53Z
dc.date.issued: 2020
dc.identifier.citation: Holm, Marius. Using Deep Reinforcement Learning for Active Flow Control. Master thesis, University of Oslo, 2020.
dc.identifier.uri: http://hdl.handle.net/10852/79212
dc.description.abstract (eng): We apply deep reinforcement learning (DRL) to reduce and increase the drag of a two-dimensional wake flow around a cluster of three equidistantly spaced cylinders, known as the fluidic pinball. The flow is controlled by rotating the cylinders according to input provided by a DRL agent or a pre-determined control function. Simulations are carried out for two Reynolds numbers (Re), Re = 100 and 150, corresponding to a periodic asymmetric regime and a chaotic symmetric regime, respectively. At Re = 100 DRL agents are able to reduce drag by up to ≈ 28% and increase drag by ≈ 45% compared to the baseline flow with no applied control. For the chaotic flow at Re = 150 DRL agents are able to reduce the drag by up to ≈ 32% and increase drag by up to ≈ 65% compared to the baseline flow. Deep reinforcement learning combines artificial neural networks (ANNs) with reinforcement learning (RL), a family of goal-oriented algorithms that learn to achieve a complex goal by interacting with an environment, i.e. by trial and error. Artificial neural networks serve as function approximators for the RL policy and/or value function, and are trained to approximate the target function as closely as possible using a gradient descent (GD) optimization algorithm. This is especially effective for complex systems where the set of possible states and actions is too large to be enumerated completely. In this thesis we implement a DRL agent based on the proximal policy optimization (PPO) algorithm, together with a fully connected neural network (FCNN), to control the rotations of the cylinders. We also compare the DRL strategies with simpler strategies such as constant rotations and pre-determined sinusoidal control functions.
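The PPO algorithm named in the abstract trains the policy network by maximizing a clipped surrogate objective. A minimal NumPy sketch of that objective follows; the function name, the clipping parameter eps = 0.2, and the sample values are illustrative assumptions, not taken from the thesis itself:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO (to be maximized).

    ratio     -- pi_new(a|s) / pi_old(a|s) per sampled action
    advantage -- estimated advantage A(s, a) per sampled action
    eps       -- clipping range; 0.2 is a common default, assumed here
    """
    unclipped = ratio * advantage
    # Clipping the ratio keeps each policy update close to the old policy.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the elementwise minimum gives a pessimistic (lower) bound.
    return np.minimum(unclipped, clipped).mean()

# Example: a ratio of 1.5 with positive advantage is clipped to 1.2.
value = ppo_clip_objective(np.array([1.5]), np.array([1.0]))
```

The minimum of the clipped and unclipped terms removes the incentive to push the probability ratio far outside [1 - eps, 1 + eps], which is what makes PPO updates stable enough for control tasks like the cylinder rotations described above.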
dc.language.iso: nob
dc.subject: deep learning
dc.subject: optimization
dc.subject: reinforcement learning
dc.subject: active flow control
dc.subject: neural networks
dc.subject: deep reinforcement learning
dc.subject: machine learning
dc.subject: fluid mechanics
dc.subject: flow control
dc.subject: artificial intelligence
dc.title: Using Deep Reinforcement Learning for Active Flow Control (nob)
dc.title.alternative: Using Deep Reinforcement Learning for Active Flow Control (eng)
dc.type: Master thesis
dc.date.updated: 2020-09-08T23:46:54Z
dc.creator.author: Holm, Marius
dc.identifier.urn: URN:NBN:no-82314
dc.type.document: Masteroppgave
dc.identifier.fulltext: Fulltext https://www.duo.uio.no/bitstream/handle/10852/79212/1/main.pdf

