Calibration and Validation of Microscopic Traffic Flow Models. Brockfeld, E. & Wagner, P. In Bovy, P. H. L., Hoogendoorn, S. P., Schreckenberg, M., & Wolf, D. E., editors, Traffic and Granular Flow '03, 2003. Springer.
Microscopic simulation models are becoming increasingly important tools in modeling transport systems. There are a large number of available models used in many countries. The most difficult stage in the development and use of such models is the calibration and validation of the microscopic sub-models describing the traffic flow, such as the car-following, lane-changing and gap-acceptance models. This difficulty is due to the lack of suitable methods for adapting models to empirical data. The aim of this paper is to present recent progress in calibrating a number of microscopic traffic flow models. By calibrating and validating various models on the same data sets, the models become directly comparable to each other. This sets the basis for a transparent benchmarking of these models. Furthermore, the advantages and disadvantages of each model can be analyzed more thoroughly in order to develop more realistic behavior of the simulated vehicles.

In this work, various microscopic traffic flow models have been tested from a very microscopic point of view, concerning car-following behavior and gap acceptance. The data used for calibration and validation come from car-following experiments performed in Japan in October 2001. The data were collected by letting nine DGPS-equipped cars follow a lead car along a 3 km test track for about 15-30 minutes, yielding the positions and speeds of each car at time intervals of 0.1 seconds. The experiment was repeated eight times, with the leading driver performing various driving patterns: constant speeds of 20, 40, 60 and 80 km/h held for some time, driving in waves, and emulating the frequent accelerations and decelerations typical of intersections. To minimize driver-dependent correlations between the data sets, the drivers were rotated between the cars after each experiment.

In this paper we present analyses of four of the experiments, namely those dominated by intervals of constant speed and by wave driving. Each of the four experiments yields the ten trajectories of the cars in the form of DGPS positions and speeds. From these, the accelerations and the distances/gaps between the cars have been calculated, which are then used for the simulation runs.
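As a concrete illustration of this preprocessing step, the following minimal Python sketch derives net gaps and follower accelerations from the sampled positions and speeds. The 0.1 s sampling interval comes from the experiment description above; the vehicle length and the assumption that positions are given as distance along the track are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

DT = 0.1          # DGPS sampling interval [s], as described above
CAR_LENGTH = 4.5  # assumed vehicle length [m]; not specified in the abstract

def gaps_and_accelerations(pos_leader, pos_follower, speed_follower):
    """Derive net gaps and follower accelerations from trajectory samples.

    pos_* are 1-D arrays of positions along the track [m], sampled every DT
    seconds; speed_follower holds the follower's speeds [m/s] at the same times.
    """
    gap = pos_leader - pos_follower - CAR_LENGTH  # bumper-to-bumper gap
    accel = np.gradient(speed_follower, DT)       # finite-difference acceleration
    return gap, accel
```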
The study analyzes the time development of the gaps between the cars. In the simulation setup, only two cars are considered at a time: the leading car is updated according to the speeds in the recorded data, while the following car is updated according to the equations and rules of the model under test. The absolute error a model produces is calculated as the quadratic distance between the recorded and the simulated gaps; to obtain a percentage error, it is additionally related to the mean gap in each data set. Altogether, 36 vehicle pairs (4 experiments * 9 vehicle pairs) were used as data sets for the analyses.

Each model has been calibrated on each of the 36 constellations separately, yielding an optimal parameter set for every "model - data set" combination. To find the optimal parameter constellations, a gradient-free optimization method was used and started several times with different initialization values for each "model - data set" pair. The initialization is varied to avoid getting stuck in a local minimum, which can occur because this type of optimization algorithm cannot guarantee finding the global minimum. Subsequently, the validation was performed by determining the error of a given model on all the data sets that had not been used to calibrate it.

So far, ten microscopic models of very different kinds, using 3 to 14 parameters, have been tested. The most basic parameters used by the models are the car length, a maximum velocity, an acceleration rate and, in most cases, a deceleration rate. In some models the acceleration and deceleration rates are specified in more detail, depending on the current speed or the traffic state (indicated, for example, by density). Furthermore, some models use a parameter for random braking or another kind of stochastic parameter describing individual driver behavior. Finally, a few models use many more parameters to describe driver behavior; these will be briefly described in the final paper. As the time step for the models is 0.1 seconds, matching the recorded data, some models with a traditional time step of 1 second (as used, for example, in simple cellular automata) have been modified to work with an arbitrarily small time step. The models tested so far are as follows (more will be added):
- CA (cellular automaton model by K. Nagel, M. Schreckenberg)
- SK-model (model by S. Krauss)
- OVM ("Optimal Velocity Model", Bando, Hasebe)
- IDM ("Intelligent Driver Model", Helbing)
- IDMM ("Intelligent Driver Model with Memory", Helbing, Treiber)
- CATauT (CA model with more variable acceleration and deceleration, own development)
- GIPPSLIKE (basic model by P.G. Gipps)
- Aerde (model used in the simulation package INTEGRATION)
- FRITZSCHE (model used in the British software PARAMICS; it is similar to what is used in the German software VISSIM by PTV)
- MitSim (model by Yang, Koutsopulus, used in the software MitSim)

During calibration, the error rates of the models with respect to the data sets range from 9 to 24%. No model appears to be significantly the best one, since every model has the same problems with certain data sets, while other data sets can be simulated quite well by every model. Interestingly, models with more parameters do not necessarily reproduce the real data better. The results of the validation process paint a similar picture.
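The evaluation loop described above (drive the leader from the recorded speeds, let the model under test drive the follower, score the simulated gap series, and calibrate with a multi-start, gradient-free optimizer) can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the `model` callback, the data-set dictionary keys, and the reading of the percentage error as an RMS gap deviation related to the mean observed gap are assumptions, and Nelder-Mead merely stands in for whichever gradient-free method the paper actually used.

```python
import numpy as np
from scipy.optimize import minimize

DT = 0.1  # simulation/data time step [s], matching the recordings

def simulate_follower(params, leader_pos, leader_speed, gap0, v0, model):
    """Leader follows the recorded speeds; the follower is driven by `model`.

    `model(params, gap, v, v_leader)` returns the follower's acceleration and
    stands in for any of the car-following models compared in the paper.
    Gaps are treated as front-to-front distances for simplicity.
    """
    n = len(leader_pos)
    gap = np.empty(n)
    gap[0] = gap0
    v, x = v0, leader_pos[0] - gap0
    for t in range(1, n):
        a = model(params, gap[t - 1], v, leader_speed[t - 1])
        v = max(0.0, v + a * DT)          # follower speed update
        x += v * DT                       # follower position update
        gap[t] = leader_pos[t] - x        # simulated gap to the recorded leader
    return gap

def percentage_gap_error(sim_gap, obs_gap):
    """Quadratic (RMS) gap deviation related to the mean observed gap,
    expressed in percent (one plausible reading of the error measure)."""
    return 100.0 * np.sqrt(np.mean((sim_gap - obs_gap) ** 2)) / np.mean(obs_gap)

def calibrate(model, data, bounds, n_restarts=10, seed=0):
    """Multi-start, gradient-free calibration of one model on one
    leader-follower data set; restarts reduce the risk of a local minimum."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T  # bounds: (low, high) per parameter

    def objective(p):
        sim = simulate_follower(p, data["leader_pos"], data["leader_speed"],
                                data["gap"][0], data["follower_speed"][0], model)
        return percentage_gap_error(sim, data["gap"])

    best = None
    for _ in range(n_restarts):
        res = minimize(objective, rng.uniform(lo, hi), method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best.x, best.fun
```

Calibrating each of the ten models against each of the 36 vehicle-pair data sets in this way would yield the per-combination calibration errors summarized above.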
The errors produced in these cases are about 12 to 30%, sometimes up to 40 or 60%, which is of course much larger than in the pure calibration cases. All in all, the calibration results agree with results that have been obtained before. The validation results, however, are in parts very poor, which probably calls for the development of much better models. The other way to interpret the results is that, from this microscopic point of view, errors of about 12-30% can probably not be suppressed no matter which model is used; this would be due to the differing behavior of the individual drivers.
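The calibration/validation comparison behind these numbers can be reproduced, under the same assumptions as the sketch above, by cross-validating each model over all data sets: calibrate on data set i, then evaluate the calibrated parameters on every other data set j. The function below continues the previous sketch (it reuses `calibrate`, `simulate_follower` and `percentage_gap_error`); the error-matrix layout is an illustrative choice, not taken from the paper.

```python
def benchmark(model, datasets, bounds):
    """Cross-validate one model over all leader-follower data sets.

    The diagonal of the returned matrix holds the calibration errors; the
    off-diagonal entries hold the validation errors on data sets not used
    for calibration, mirroring the procedure described in the abstract.
    """
    n = len(datasets)
    errors = np.full((n, n), np.nan)
    for i, cal_data in enumerate(datasets):
        params, errors[i, i] = calibrate(model, cal_data, bounds)
        for j, val_data in enumerate(datasets):
            if j == i:
                continue
            sim = simulate_follower(params, val_data["leader_pos"],
                                    val_data["leader_speed"], val_data["gap"][0],
                                    val_data["follower_speed"][0], model)
            errors[i, j] = percentage_gap_error(sim, val_data["gap"])
    return errors
```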
@inproceedings{Brockfeld2003,
author = {Elmar Brockfeld and Peter Wagner},
booktitle = {Traffic and Granular Flow '03},
title = {Calibration and Validation of Microscopic Traffic Flow Models},
year = {2003},
editor = {P. H. L. Bovy and S. P. Hoogendoorn and M. Schreckenberg and D. E. Wolf},
publisher = {Springer},
abstract = {Microscopic simulation models are becoming increasingly important tools in modeling transport systems. There are a large number of available models used in many countries. The most difficult stage in the development and use of such models is the calibration and validation of the microscopic sub-models describing the traffic flow, such as the car-following, lane-changing and gap-acceptance models. This difficulty is due to the lack of suitable methods for adapting models to empirical data. The aim of this paper is to present recent progress in calibrating a number of microscopic traffic flow models. By calibrating and validating various models on the same data sets, the models become directly comparable to each other. This sets the basis for a transparent benchmarking of these models. Furthermore, the advantages and disadvantages of each model can be analyzed more thoroughly in order to develop more realistic behavior of the simulated vehicles.

In this work, various microscopic traffic flow models have been tested from a very microscopic point of view, concerning car-following behavior and gap acceptance. The data used for calibration and validation come from car-following experiments performed in Japan in October 2001. The data were collected by letting nine DGPS-equipped cars follow a lead car along a 3 km test track for about 15-30 minutes, yielding the positions and speeds of each car at time intervals of 0.1 seconds. The experiment was repeated eight times, with the leading driver performing various driving patterns: constant speeds of 20, 40, 60 and 80 km/h held for some time, driving in waves, and emulating the frequent accelerations and decelerations typical of intersections. To minimize driver-dependent correlations between the data sets, the drivers were rotated between the cars after each experiment.

In this paper we present analyses of four of the experiments, namely those dominated by intervals of constant speed and by wave driving. Each of the four experiments yields the ten trajectories of the cars in the form of DGPS positions and speeds. From these, the accelerations and the distances/gaps between the cars have been calculated, which are then used for the simulation runs.

The study analyzes the time development of the gaps between the cars. In the simulation setup, only two cars are considered at a time: the leading car is updated according to the speeds in the recorded data, while the following car is updated according to the equations and rules of the model under test. The absolute error a model produces is calculated as the quadratic distance between the recorded and the simulated gaps; to obtain a percentage error, it is additionally related to the mean gap in each data set. Altogether, 36 vehicle pairs (4 experiments * 9 vehicle pairs) were used as data sets for the analyses.

Each model has been calibrated on each of the 36 constellations separately, yielding an optimal parameter set for every "model - data set" combination. To find the optimal parameter constellations, a gradient-free optimization method was used and started several times with different initialization values for each "model - data set" pair. The initialization is varied to avoid getting stuck in a local minimum, which can occur because this type of optimization algorithm cannot guarantee finding the global minimum. Subsequently, the validation was performed by determining the error of a given model on all the data sets that had not been used to calibrate it.

So far, ten microscopic models of very different kinds, using 3 to 14 parameters, have been tested. The most basic parameters used by the models are the car length, a maximum velocity, an acceleration rate and, in most cases, a deceleration rate. In some models the acceleration and deceleration rates are specified in more detail, depending on the current speed or the traffic state (indicated, for example, by density). Furthermore, some models use a parameter for random braking or another kind of stochastic parameter describing individual driver behavior. Finally, a few models use many more parameters to describe driver behavior; these will be briefly described in the final paper. As the time step for the models is 0.1 seconds, matching the recorded data, some models with a traditional time step of 1 second (as used, for example, in simple cellular automata) have been modified to work with an arbitrarily small time step. The models tested so far are as follows (more will be added):
- CA (cellular automaton model by K. Nagel, M. Schreckenberg)
- SK-model (model by S. Krauss)
- OVM ("Optimal Velocity Model", Bando, Hasebe)
- IDM ("Intelligent Driver Model", Helbing)
- IDMM ("Intelligent Driver Model with Memory", Helbing, Treiber)
- CATauT (CA model with more variable acceleration and deceleration, own development)
- GIPPSLIKE (basic model by P.G. Gipps)
- Aerde (model used in the simulation package INTEGRATION)
- FRITZSCHE (model used in the British software PARAMICS; it is similar to what is used in the German software VISSIM by PTV)
- MitSim (model by Yang, Koutsopulus, used in the software MitSim)

During calibration, the error rates of the models with respect to the data sets range from 9 to 24%. No model appears to be significantly the best one, since every model has the same problems with certain data sets, while other data sets can be simulated quite well by every model. Interestingly, models with more parameters do not necessarily reproduce the real data better. The results of the validation process paint a similar picture. The errors produced in these cases are about 12 to 30%, sometimes up to 40 or 60%, which is of course much larger than in the pure calibration cases. All in all, the calibration results agree with results that have been obtained before. The validation results, however, are in parts very poor, which probably calls for the development of much better models. The other way to interpret the results is that, from this microscopic point of view, errors of about 12-30% can probably not be suppressed no matter which model is used; this would be due to the differing behavior of the individual drivers.},
groups = {calibration&validation, TS, assigned2groups},
keywords = {calibration, validation, models, traffic flow models, microscopic, DLR/TS/VM, model calibration},
owner = {dkrajzew},
timestamp = {2011.09.30},
url = {http://elib.dlr.de/6653/}
}
{"_id":{"_str":"536d286d4c6bdacb28000336"},"__v":0,"authorIDs":[],"author_short":["Brockfeld, E.","Wagner, P."],"bibbaseid":"brockfeld-wagner-calibrationandvalidationofmicroscopictrafficflowmodels-2003","bibdata":{"bibtype":"inproceedings","type":"inproceedings","author":[{"firstnames":["Elmar"],"propositions":[],"lastnames":["Brockfeld"],"suffixes":[]},{"firstnames":["Peter"],"propositions":[],"lastnames":["Wagner"],"suffixes":[]}],"booktitle":"Traffic and Granular Flow '03","title":"Calibration and Validation of Microscopic Traffic Flow Models","year":"2003","editor":[{"firstnames":["P.","H.","L."],"propositions":[],"lastnames":["Bovy"],"suffixes":[]},{"firstnames":["S.","P."],"propositions":[],"lastnames":["Hoogendoorn"],"suffixes":[]},{"firstnames":["M."],"propositions":[],"lastnames":["Schreckenberg"],"suffixes":[]},{"firstnames":["D.","E."],"propositions":[],"lastnames":["Wolf"],"suffixes":[]}],"publisher":"Springer","abstract":"Microscopic simulation models are becoming increasingly important tools in modeling transport systems. There are a large number of available models used in many countries. The most difficult stage in the development and use of such models is the calibration and validation of the microscopic sub-models describing the traffic flow, such as the car following, lane changing and gap acceptance models. This difficulty is due to the lack of suitable methods for adapting models to empirical data. The aim of this paper is to present recent progress in calibrating a number of microscopic traffic flow models. By calibrating and validating various models using the same data sets, the models are directly comparable to each other. This sets the basis for a transparent benchmarking of those models. Furthermore, the advantages and disadvantages of each model can be analyzed better to develop a more realistic behavior of the simulated vehicles. In this work various microscopic traffic flow models have been tested from a very microscopic point of view concerning the car-following behavior and gap-acceptance. The data used for calibration and validation is from car-following experiments performed in Japan in October 2001. The data have been collected by letting nine DGPS-equipped cars follow a lead car driving along a 3 km test track for about 15-30 minutes. So one gets the positions and speeds of each car in time intervals of 0.1 seconds. The experiment was repeated eight times letting the leading driver perform various driving patterns as there are constant speeds of 20, 40, 60 and 80 km/h for some time, driving in waves and emulating many accelerations/decelerations as they are typical at intersections. To minimize driver-dependent correlations between the data sets, the drivers were exchanged between the cars regularly after each experiment. In this paper we present analyses concerning four of the experiments, namely the patterns mostly with intervals of constant speeds and wave-performing. For each of the four experiments one gets the ten trajectories of the cars in form of the DGPS-positions and speeds. From these the accelerations and distances/gaps between the cars have been calculated, which are used then for the simulation runs.<br/> The study was done analyzing the time-development of the gaps between the cars. For the simulation setup only two cars are considered at a time. The leading car is updated as the speeds in the recorded data sets tell and the following car is updated as defined by the equations and rules of the used model, respectively. 
The absolute error a model produces is calculated via the simple quadratic distance between the recorded gaps and the simulated gaps. To get a percentage error it is additionally related to the mean average gap in each data set. Altogether 36 vehicle pairs (4 experiments * 9 vehicle pairs) were used as data sets for the analyses. Each model has been calibrated with each of the 36 different constellations separately gaining optimal parameter sets for each ?model - data set? combination. To find the optimal parameter constellations a gradient-free optimization method was used and started several times with different initialization values for each ?model - data set? pair. The variation in initialization is done to avoid sticking with a local minimum, which of course can occur because getting a global minimum can not be guaranteed by those type of optimization algorithms. Subsequently, the validation was performed by determining the error of a given model on all the data sets which have not been used to calibrate the model. By now, ten microscopic models of a very different kind using 3 to 14 parameters have been tested. The most basic parameters used by the models are the car length, a maximum velocity, an acceleration and mostly a deceleration rate. The acceleration and deceleration rates are specified in more detail in some models depending on the recent speed or traffic states (indicated by density for example). Furthermore, some models use a parameter for random braking or another kind of stochastic parameter describing individual driver behavior. Finally, few models use much more parameters to describe the driver?s behavior, which will be briefly described in the final paper. As the time step for the models is 0.1 seconds according to the recorded data, some models with a traditional time step of 1 second ? as for example used for simple cellular automatons - have been modified to adopt for an arbitrarily small time-step. So far the models tested are as follows (more will be added): - CA (cellular automaton model by K. Nagel, M. Schreckenberg), - SK-model (model by S. Krauss), - OVM (?Optimal Velocity Model?, Bando, Hasebe), - IDM (?Intelligent Driver Model?, Helbing), - IDMM (?Intelligent Driver Model with Memory?, Helbing, Treiber), - CATauT (CA model with more variable acceleration and deceleration, own development), - GIPPSLIKE (basic model by P.G. Gipps), - Aerde (model used in the simulation package INTEGRATION), - FRITZSCHE (model used in the british software PARAMICS; it is similar to what is used in the german software VISSIM by PTV), - MitSim (model by Yang, Koutsopulus, used in the software MitSim). The error rates of the models in comparison to the data sets during the calibration for each model reach from 9 to 24 %. But no model appears to be significantly the best one since every model has the same problems with distinct data sets and other data sets can be simulated quite good with each model. Interestingly, it can be stated that models with more parameters than others do not necessarily reproduce the real data better. The results of the validation process draw a similar picture. The produced errors in these cases are about 12 to 30 %, sometimes up to 40 or 60%, which is of course much bigger than in the simple calibration cases. All in all the results after the calibration agree with some results that have been obtained before. But the results of the validation are in parts very bad which probably calls for the development of much better models. 
The other way to interpret the results is that ? from this microscopic point of view ? errors of about 12-30 % can probably not be suppressed no matter what a model is used. This would be due to the different behavior of each driver.","groups":"calibration&validation, TS, assigned2groups","journal":"Traffic and Granular Flow '03","keywords":"calibration, validation, models, traffic flow models, microscopic, DLR/TS/VM, model calibration","owner":"dkrajzew","timestamp":"2011.09.30","url":"http://elib.dlr.de/6653/","bibtex":"@inproceedings{Brockfeld2003,\n\tauthor = {Elmar Brockfeld and Peter Wagner},\n\tbooktitle = {Traffic and Granular Flow '03},\n\ttitle = {Calibration and Validation of Microscopic Traffic Flow Models},\n\tyear = {2003},\n\teditor = {P. H. L. Bovy and S. P. Hoogendoorn and M. Schreckenberg and D. E. Wolf},\n\tpublisher = {Springer},\n\tabstract = {Microscopic simulation models are becoming increasingly important\n\ttools in modeling transport systems. There are a large number of\n\tavailable models used in many countries. The most difficult stage\n\tin the development and use of such models is the calibration and\n\tvalidation of the microscopic sub-models describing the traffic flow,\n\tsuch as the car following, lane changing and gap acceptance models.\n\tThis difficulty is due to the lack of suitable methods for adapting\n\tmodels to empirical data. The aim of this paper is to present recent\n\tprogress in calibrating a number of microscopic traffic flow models.\n\tBy calibrating and validating various models using the same data\n\tsets, the models are directly comparable to each other. This sets\n\tthe basis for a transparent benchmarking of those models. Furthermore,\n\tthe advantages and disadvantages of each model can be analyzed better\n\tto develop a more realistic behavior of the simulated vehicles.\n\n\n\tIn this work various microscopic traffic flow models have been tested\n\tfrom a very microscopic point of view concerning the car-following\n\tbehavior and gap-acceptance. The data used for calibration and validation\n\tis from car-following experiments performed in Japan in October 2001.\n\tThe data have been collected by letting nine DGPS-equipped cars follow\n\ta lead car driving along a 3 km test track for about 15-30 minutes.\n\tSo one gets the positions and speeds of each car in time intervals\n\tof 0.1 seconds. The experiment was repeated eight times letting the\n\tleading driver perform various driving patterns as there are constant\n\tspeeds of 20, 40, 60 and 80 km/h for some time, driving in waves\n\tand emulating many accelerations/decelerations as they are typical\n\tat intersections. To minimize driver-dependent correlations between\n\tthe data sets, the drivers were exchanged between the cars regularly\n\tafter each experiment.\n\n\n\tIn this paper we present analyses concerning four of the experiments,\n\tnamely the patterns mostly with intervals of constant speeds and\n\twave-performing. For each of the four experiments one gets the ten\n\ttrajectories of the cars in form of the DGPS-positions and speeds.\n\tFrom these the accelerations and distances/gaps between the cars\n\thave been calculated, which are used then for the simulation runs.<br/>\n\n\tThe study was done analyzing the time-development of the gaps between\n\tthe cars. For the simulation setup only two cars are considered at\n\ta time. 
The leading car is updated as the speeds in the recorded\n\tdata sets tell and the following car is updated as defined by the\n\tequations and rules of the used model, respectively. The absolute\n\terror a model produces is calculated via the simple quadratic distance\n\tbetween the recorded gaps and the simulated gaps. To get a percentage\n\terror it is additionally related to the mean average gap in each\n\tdata set. Altogether 36 vehicle pairs (4 experiments * 9 vehicle\n\tpairs) were used as data sets for the analyses.\n\n\n\tEach model has been calibrated with each of the 36 different constellations\n\tseparately gaining optimal parameter sets for each ?model - data\n\tset? combination. To find the optimal parameter constellations a\n\tgradient-free optimization method was used and started several times\n\twith different initialization values for each ?model - data set?\n\tpair. The variation in initialization is done to avoid sticking with\n\ta local minimum, which of course can occur because getting a global\n\tminimum can not be guaranteed by those type of optimization algorithms.\n\tSubsequently, the validation was performed by determining the error\n\tof a given model on all the data sets which have not been used to\n\tcalibrate the model.\n\n\n\tBy now, ten microscopic models of a very different kind using 3 to\n\t14 parameters have been tested. The most basic parameters used by\n\tthe models are the car length, a maximum velocity, an acceleration\n\tand mostly a deceleration rate. The acceleration and deceleration\n\trates are specified in more detail in some models depending on the\n\trecent speed or traffic states (indicated by density for example).\n\tFurthermore, some models use a parameter for random braking or another\n\tkind of stochastic parameter describing individual driver behavior.\n\tFinally, few models use much more parameters to describe the driver?s\n\tbehavior, which will be briefly described in the final paper. As\n\tthe time step for the models is 0.1 seconds according to the recorded\n\tdata, some models with a traditional time step of 1 second ? as for\n\texample used for simple cellular automatons - have been modified\n\tto adopt for an arbitrarily small time-step. So far the models tested\n\tare as follows (more will be added): - CA (cellular automaton model\n\tby K. Nagel, M. Schreckenberg), - SK-model (model by S. Krauss),\n\t- OVM (?Optimal Velocity Model?, Bando, Hasebe), - IDM (?Intelligent\n\tDriver Model?, Helbing), - IDMM (?Intelligent Driver Model with Memory?,\n\tHelbing, Treiber), - CATauT (CA model with more variable acceleration\n\tand deceleration, own development), - GIPPSLIKE (basic model by P.G.\n\tGipps), - Aerde (model used in the simulation package INTEGRATION),\n\t- FRITZSCHE (model used in the british software PARAMICS; it is similar\n\tto what is used in the german software VISSIM by PTV), - MitSim (model\n\tby Yang, Koutsopulus, used in the software MitSim).\n\n\n\tThe error rates of the models in comparison to the data sets during\n\tthe calibration for each model reach from 9 to 24 %. But no model\n\tappears to be significantly the best one since every model has the\n\tsame problems with distinct data sets and other data sets can be\n\tsimulated quite good with each model. Interestingly, it can be stated\n\tthat models with more parameters than others do not necessarily reproduce\n\tthe real data better. The results of the validation process draw\n\ta similar picture. 
The produced errors in these cases are about 12\n\tto 30 %, sometimes up to 40 or 60%, which is of course much bigger\n\tthan in the simple calibration cases. All in all the results after\n\tthe calibration agree with some results that have been obtained before.\n\tBut the results of the validation are in parts very bad which probably\n\tcalls for the development of much better models. The other way to\n\tinterpret the results is that ? from this microscopic point of view\n\t? errors of about 12-30 % can probably not be suppressed no matter\n\twhat a model is used. This would be due to the different behavior\n\tof each driver.},\n\tgroups = {calibration&validation, TS, assigned2groups},\n\tjournal = {Traffic and Granular Flow '03},\n\tkeywords = {calibration, validation, models, traffic flow models, microscopic, DLR/TS/VM, model calibration},\n\towner = {dkrajzew},\n\ttimestamp = {2011.09.30},\n\turl = {http://elib.dlr.de/6653/}\n}\n\n\n","author_short":["Brockfeld, E.","Wagner, P."],"editor_short":["Bovy, P. H. L.","Hoogendoorn, S. P.","Schreckenberg, M.","Wolf, D. E."],"key":"Brockfeld2003","id":"Brockfeld2003","bibbaseid":"brockfeld-wagner-calibrationandvalidationofmicroscopictrafficflowmodels-2003","role":"author","urls":{"Paper":"http://elib.dlr.de/6653/"},"keyword":["calibration","validation","models","traffic flow models","microscopic","DLR/TS/VM","model calibration"],"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"inproceedings","biburl":"https://raw.githubusercontent.com/eclipse/sumo/main/docs/sumo.bib","downloads":0,"keywords":["calibration","validation","models","traffic flow models","microscopic","dlr/ts/vm","model calibration"],"search_terms":["calibration","validation","microscopic","traffic","flow","models","brockfeld","wagner"],"title":"Calibration and Validation of Microscopic Traffic Flow Models","year":2003,"dataSources":["66NTXxZwo8WZwAFCa","n7xZzbjha4s6NhjNG"]}