Reinforcement learning for optimal control of low exergy buildings. Yang, L., Nagy, Z., Goffin, P., & Schlueter, A. Applied Energy, 156:577-586, Elsevier Ltd, October 2015.
Highlights: Implementation of reinforcement learning control for low exergy (LowEx) building systems. Learning allows adaptation to the local environment without prior knowledge. Presentation of reinforcement learning control for real-life applications. Discussion of applicability to real-life situations.

Abstract: Over a third of anthropogenic greenhouse gas (GHG) emissions stem from cooling and heating buildings, due to their fossil-fuel-based operation. Low exergy building systems are a promising approach to reduce energy consumption as well as GHG emissions. They consist of renewable energy technologies such as PV, PV/T, and heat pumps. Since careful tuning of parameters is required, a manual setup may result in sub-optimal operation. A model predictive control approach is unnecessarily complex due to the required model identification. Therefore, in this work we present a reinforcement learning control (RLC) approach. The studied building comprises a PV/T array for solar heat and electricity generation, as well as geothermal heat pumps. We present RLC for the PV/T array and for the full building model. Two methods, tabular Q-learning and batch Q-learning with memory replay, are implemented with real building settings and actual weather conditions in a Matlab/Simulink framework. The performance is evaluated against standard rule-based control (RBC). We investigated different neural network structures and found that some outperformed RBC already during the learning phase. Overall, every RLC strategy for PV/T outperformed RBC by over 10% after the third year. Likewise, for the full building, RLC outperforms RBC in terms of meeting the heating demand, maintaining the optimal operating temperature, and compensating more effectively for ground heat. This reduces the engineering costs associated with the setup of these systems and shortens the return-on-investment period, both of which are necessary to create a sustainable, zero-emission building stock.
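The tabular Q-learning method named in the abstract can be illustrated with a minimal, generic sketch. This is not the paper's Matlab/Simulink implementation; the states ("cold"/"warm"), actions ("on"/"off"), and parameter values below are hypothetical placeholders chosen only to show the update rule Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

```python
def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update.

    Q is a nested dict: Q[state][action] -> value.
    alpha is the learning rate, gamma the discount factor.
    """
    # Bootstrap from the best action available in the next state.
    best_next = max(Q[next_state].values())
    # Move Q(s, a) toward the temporal-difference target.
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])


# Toy table: two hypothetical thermal states and two pump actions.
Q = {s: {a: 0.0 for a in ("on", "off")} for s in ("cold", "warm")}

# One illustrative transition: turning the pump on in "cold" yields reward 1.0
# and moves the system to "warm".
q_learning_step(Q, "cold", "on", reward=1.0, next_state="warm")
print(Q["cold"]["on"])  # 0.1 * (1.0 + 0.9 * 0.0 - 0.0) = 0.1
```

The batch variant the authors also evaluate differs mainly in that transitions are stored in a replay memory and the same update is applied repeatedly over sampled mini-batches rather than once per observed step.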
@article{Yang2015,
 title = {Reinforcement learning for optimal control of low exergy buildings},
 type = {article},
 year = {2015},
 keywords = {Energy efficient buildings,Low exergy building systems,Reinforcement learning control,Sustainable building systems,Zero net energy buildings},
 pages = {577-586},
 volume = {156},
 websites = {http://linkinghub.elsevier.com/retrieve/pii/S030626191500879X},
 month = oct,
 publisher = {Elsevier Ltd},
 id = {5c819548-ddff-3c10-8756-1b67c50895fe},
 created = {2015-10-14T13:06:24.000Z},
 file_attached = {false},
 profile_id = {930c6fc3-4f7d-3268-8cb7-e7bbdd7b5ce3},
 last_modified = {2015-10-14T13:06:24.000Z},
 read = {false},
 starred = {false},
 authored = {true},
 confirmed = {true},
 hidden = {false},
 citation_key = {Yang2015},
 source_type = {article},
 abstract = {Highlights: Implementation of reinforcement learning control for low exergy (LowEx) building systems. Learning allows adaptation to the local environment without prior knowledge. Presentation of reinforcement learning control for real-life applications. Discussion of applicability to real-life situations. Abstract: Over a third of anthropogenic greenhouse gas (GHG) emissions stem from cooling and heating buildings, due to their fossil-fuel-based operation. Low exergy building systems are a promising approach to reduce energy consumption as well as GHG emissions. They consist of renewable energy technologies such as PV, PV/T, and heat pumps. Since careful tuning of parameters is required, a manual setup may result in sub-optimal operation. A model predictive control approach is unnecessarily complex due to the required model identification. Therefore, in this work we present a reinforcement learning control (RLC) approach. The studied building comprises a PV/T array for solar heat and electricity generation, as well as geothermal heat pumps. We present RLC for the PV/T array and for the full building model. Two methods, tabular Q-learning and batch Q-learning with memory replay, are implemented with real building settings and actual weather conditions in a Matlab/Simulink framework. The performance is evaluated against standard rule-based control (RBC). We investigated different neural network structures and found that some outperformed RBC already during the learning phase. Overall, every RLC strategy for PV/T outperformed RBC by over 10% after the third year. Likewise, for the full building, RLC outperforms RBC in terms of meeting the heating demand, maintaining the optimal operating temperature, and compensating more effectively for ground heat. This reduces the engineering costs associated with the setup of these systems and shortens the return-on-investment period, both of which are necessary to create a sustainable, zero-emission building stock.},
 bibtype = {article},
 author = {Yang, Lei and Nagy, Zoltan and Goffin, Philippe and Schlueter, Arno},
 journal = {Applied Energy}
}
