Transfer learning for time series classification. Ismail Fawaz, H., Forestier, G., Weber, J., Idoumghar, L., & Muller, P.-A. In 2018 IEEE International Conference on Big Data (Big Data), pages 1367–1376, December 2018. doi: 10.1109/BigData.2018.8621990

Abstract: Transfer learning for deep neural networks is the process of first training a base network on a source dataset and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural networks’ generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been thoroughly investigated for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model were fine-tuned from a pre-trained neural network instead of being trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, the largest publicly available TSC benchmark, containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade a model’s predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarities. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets.
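The abstract's core recipe is: pre-train a CNN on a source time series dataset, then copy its weights into a second network and fine-tune on the target. The following minimal PyTorch sketch illustrates that recipe only; the two-layer architecture, class counts, and elided training loops are illustrative assumptions, not the authors' exact network or hyperparameters.

import torch
import torch.nn as nn

class SimpleTSCNet(nn.Module):
    """Small 1D CNN for univariate time series classification (illustrative)."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=8, padding="same"), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding="same"), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling -> length-invariant features
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 1, length)
        z = self.features(x).squeeze(-1)  # (batch, 64)
        return self.classifier(z)

# 1) Pre-train on the source dataset (training loop elided).
source_model = SimpleTSCNet(n_classes=5)
# ... train source_model on the source UCR dataset ...

# 2) Transfer: copy the convolutional weights, re-initialize the final
#    linear layer to match the target dataset's class count, then fine-tune.
target_model = SimpleTSCNet(n_classes=3)
target_model.features.load_state_dict(source_model.features.state_dict())
optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-4)
# ... fine-tune target_model on the target dataset ...

A common choice, and the one sketched here, is to keep all transferred layers trainable during fine-tuning and replace only the final classification layer, since the source and target datasets generally have different numbers of classes.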
@inproceedings{ismail_fawaz_transfer_2018,
title = {Transfer learning for time series classification},
url = {https://ieeexplore.ieee.org/document/8621990},
doi = {10.1109/BigData.2018.8621990},
	abstract = {Transfer learning for deep neural networks is the process of first training a base network on a source dataset and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural networks’ generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been thoroughly investigated for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model were fine-tuned from a pre-trained neural network instead of being trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, the largest publicly available TSC benchmark, containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade a model’s predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarities. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets.},
urldate = {2023-10-17},
booktitle = {2018 {IEEE} {International} {Conference} on {Big} {Data} ({Big} {Data})},
author = {Ismail Fawaz, Hassan and Forestier, Germain and Weber, Jonathan and Idoumghar, Lhassane and Muller, Pierre-Alain},
month = dec,
year = {2018},
pages = {1367--1376},
}
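The second idea in the abstract is source selection: rank candidate source datasets by Dynamic Time Warping similarity to the target and transfer from the closest one. Below is a simplified, self-contained Python sketch of that idea. The plain arithmetic-mean prototype and the min-over-pairs dataset distance are simplifying assumptions; the paper builds its per-class prototypes with DTW Barycenter Averaging (DBA) rather than a plain mean.

import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(np.sqrt(D[n, m]))

def prototypes(X: np.ndarray, y: np.ndarray) -> list:
    """One average series per class (the paper uses DBA; plain mean here)."""
    return [X[y == c].mean(axis=0) for c in np.unique(y)]

def dataset_distance(protos_a, protos_b) -> float:
    """Minimum DTW distance over all pairs of class prototypes."""
    return min(dtw(p, q) for p in protos_a for q in protos_b)

def pick_source(target, candidates):
    """Pick the candidate dataset most similar to the target.
    `target` is (X, y); `candidates` maps name -> (X, y)."""
    tp = prototypes(*target)
    return min(candidates,
               key=lambda name: dataset_distance(tp, prototypes(*candidates[name])))

# Usage sketch: best = pick_source((X_tgt, y_tgt), {"Ham": (X1, y1), "Meat": (X2, y2)})
# then pre-train on `best` and fine-tune on the target, as in the first sketch.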