Reinforcement learning for optimal control of low exergy buildings. Yang, L., Nagy, Z., Goffin, P., & Schlueter, A. Applied Energy, 156:577–586, 2015.
Highlights: Implementation of reinforcement learning control for LowEx building systems. Learning allows adaptation to the local environment without prior knowledge. Presentation of reinforcement learning control for real-life applications. Discussion of applicability to real-life situations.
Abstract: Over a third of anthropogenic greenhouse gas (GHG) emissions stem from cooling and heating buildings, due to their fossil-fuel-based operation. Low exergy building systems are a promising approach to reduce energy consumption as well as GHG emissions. They consist of renewable energy technologies such as PV, PV/T and heat pumps. Since careful tuning of parameters is required, a manual setup may result in sub-optimal operation. A model predictive control approach is unnecessarily complex due to the required model identification. Therefore, in this work we present a reinforcement learning control (RLC) approach. The studied building comprises a PV/T array for solar heat and electricity generation, as well as geothermal heat pumps. We present RLC for the PV/T array and for the full building model. Two methods, Tabular Q-learning and Batch Q-learning with Memory Replay, are implemented with real building settings and actual weather conditions in a Matlab/Simulink framework. The performance is evaluated against standard rule-based control (RBC). We investigated different neural network structures and found that some outperformed RBC already during the learning phase. Overall, every RLC strategy for PV/T outperformed RBC by over 10% after the third year. Likewise, for the full building, RLC outperforms RBC in terms of meeting the heating demand, maintaining the optimal operation temperature and compensating more effectively for ground heat. This reduces the engineering costs associated with the setup of these systems and shortens the return-on-investment period, both of which are necessary to create a sustainable, zero-emission building stock.
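The paper's controllers were implemented in Matlab/Simulink and are not reproduced here; as a point of reference, the Tabular Q-learning update the abstract names can be sketched generically. Everything below (the function names, the toy "cold"/"warm" states and "heat_on"/"heat_off" actions, and the parameter values) is an illustrative assumption, not the paper's implementation:

```python
import random

def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q

def epsilon_greedy(Q, state, epsilon=0.1):
    """Explore with probability epsilon, otherwise act greedily on the Q-table."""
    if random.random() < epsilon:
        return random.choice(list(Q[state]))
    return max(Q[state], key=Q[state].get)

# Toy example: two thermal states, two heat-pump actions, Q-values start at zero.
Q = {s: {a: 0.0 for a in ("heat_on", "heat_off")} for s in ("cold", "warm")}
Q = q_learning_step(Q, "cold", "heat_on", reward=1.0, next_state="warm")
```

The Batch Q-learning with Memory Replay variant studied in the paper additionally stores observed (state, action, reward, next state) transitions and replays them in batches, which reuses scarce building data more efficiently than single-step updates.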
@article{yang15,
title = {Reinforcement learning for optimal control of low exergy buildings},
type = {article},
year = {2015},
keywords = {Energy efficient buildings,Low exergy building systems,Reinforcement learning control,Sustainable building systems,Zero net energy buildings},
pages = {577--586},
volume = {156},
id = {5c819548-ddff-3c10-8756-1b67c50895fe},
created = {2015-10-14T13:06:24.000Z},
file_attached = {false},
profile_id = {930c6fc3-4f7d-3268-8cb7-e7bbdd7b5ce3},
last_modified = {2017-04-11T12:49:41.482Z},
read = {true},
starred = {false},
authored = {true},
confirmed = {true},
hidden = {false},
citation_key = {yang15},
private_publication = {false},
abstract = {Highlights: Implementation of reinforcement learning control for LowEx building systems. Learning allows adaptation to the local environment without prior knowledge. Presentation of reinforcement learning control for real-life applications. Discussion of applicability to real-life situations. Abstract: Over a third of anthropogenic greenhouse gas (GHG) emissions stem from cooling and heating buildings, due to their fossil-fuel-based operation. Low exergy building systems are a promising approach to reduce energy consumption as well as GHG emissions. They consist of renewable energy technologies such as PV, PV/T and heat pumps. Since careful tuning of parameters is required, a manual setup may result in sub-optimal operation. A model predictive control approach is unnecessarily complex due to the required model identification. Therefore, in this work we present a reinforcement learning control (RLC) approach. The studied building comprises a PV/T array for solar heat and electricity generation, as well as geothermal heat pumps. We present RLC for the PV/T array and for the full building model. Two methods, Tabular Q-learning and Batch Q-learning with Memory Replay, are implemented with real building settings and actual weather conditions in a Matlab/Simulink framework. The performance is evaluated against standard rule-based control (RBC). We investigated different neural network structures and found that some outperformed RBC already during the learning phase. Overall, every RLC strategy for PV/T outperformed RBC by over 10% after the third year. Likewise, for the full building, RLC outperforms RBC in terms of meeting the heating demand, maintaining the optimal operation temperature and compensating more effectively for ground heat. This reduces the engineering costs associated with the setup of these systems and shortens the return-on-investment period, both of which are necessary to create a sustainable, zero-emission building stock.},
bibtype = {article},
author = {Yang, Lei and Nagy, Zoltan and Goffin, Philippe and Schlueter, Arno},
journal = {Applied Energy}
}