QRNN: Q-generalized random neural network. Stosic, D., Stosic, D., Zanchettin, C., Ludermir, T., & Stosic, B. IEEE Transactions on Neural Networks and Learning Systems, 28(2), 2017. doi: 10.1109/TNNLS.2015.2513365

Abstract: Artificial neural networks (ANNs) are widely used in applications with complex decision boundaries. A large number of activation functions have been proposed in the literature to achieve better representations of the observed data, but only a few works employ Tsallis statistics, which has been applied successfully in many other fields. This paper presents a random neural network (RNN) with q-Gaussian activation functions, the q-generalized RNN (QRNN), based on Tsallis statistics. The proposed method employs an additional parameter q (the entropic index), which reflects the degree of nonextensivity. Varying the entropic index gives the model the flexibility to fit complex decision boundaries of different shapes. We conduct numerical experiments to analyze the efficiency of QRNN compared with RNNs and several other classical methods. Statistical tests (Wilcoxon and Friedman) validate our results and show that QRNN performs significantly better than RNNs with different activation functions. In addition, QRNN outperforms many of the compared classical methods, with the exception of support vector machines, against which it still offers a substantial advantage in implementation simplicity and speed.
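For readers unfamiliar with the q-Gaussian, the activation referenced in the abstract can be sketched as below. This is a minimal illustration built from the standard Tsallis q-exponential, not the authors' implementation; the width parameter beta and the function names are assumptions introduced here for clarity.

import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: [1 + (1 - q) x]_+^(1 / (1 - q)); recovers exp(x) as q -> 1."""
    x = np.asarray(x, dtype=float)
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    mask = base > 0           # the q-exponential is defined to be 0 where the base is non-positive
    out[mask] = base[mask] ** (1.0 / (1.0 - q))
    return out

def q_gaussian_activation(x, q=1.5, beta=1.0):
    """q-Gaussian activation phi(x) = e_q(-beta * x^2).

    beta is an illustrative width parameter (assumption, not taken from the paper).
    q is the entropic index: q -> 1 recovers the ordinary Gaussian, while larger q
    gives heavier tails, so the shape of the decision boundary changes with q.
    """
    return q_exponential(-beta * np.asarray(x, dtype=float) ** 2, q)

# Usage: compare the activation for different entropic indices.
x = np.linspace(-3.0, 3.0, 7)
print(q_gaussian_activation(x, q=1.0))   # standard Gaussian bell
print(q_gaussian_activation(x, q=2.0))   # heavier-tailed q-Gaussian, (1 + x^2)^(-1)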
@article{stosic2017qrnn,
  title = {QRNN: Q-generalized random neural network},
  author = {Stosic, D. and Stosic, D. and Zanchettin, C. and Ludermir, T. and Stosic, B.},
  journal = {IEEE Transactions on Neural Networks and Learning Systems},
  year = {2017},
  volume = {28},
  number = {2},
  doi = {10.1109/TNNLS.2015.2513365},
  keywords = {Activation functions, Random neural networks (RNNs), Tsallis statistics, q-Gaussian},
  abstract = {Artificial neural networks (ANNs) are widely used in applications with complex decision boundaries. A large number of activation functions have been proposed in the literature to achieve better representations of the observed data. However, only a few works employ Tsallis statistics, which has successfully been applied to various other fields. This paper presents a random neural network (RNN) with q-Gaussian activation functions [q-generalized RNN (QRNN)] based on Tsallis statistics. The proposed method employs an additional parameter q (called the entropic index) which reflects the degree of nonextensivity. This approach has the flexibility to model complex decision boundaries of different shapes by varying the entropic index. We conduct numerical experiments to analyze the efficiency of QRNN compared with RNNs and several other classical methods. Statistical tests (Wilcoxon and Friedman) are used to validate our results and show that the QRNN performs significantly better than RNNs with different activation functions. In addition, we find that QRNN outperforms many of the compared classical methods, with the exception of support vector machines, in which case it still exhibits a substantial advantage in terms of implementation simplicity and speed.}
}
{"_id":"PZz3m5NE9sFczE8oD","bibbaseid":"stosic-stosic-zanchettin-ludermir-stosic-qrnnqgeneralizedrandomneuralnetwork-2017","authorIDs":["95PhW7tkuv95vtHAq","PtDsdiZ3iPSFZKH6J"],"author_short":["Stosic, D.","Stosic, D.","Zanchettin, C.","Ludermir, T.","Stosic, B."],"bibdata":{"title":"QRNN: Q-generalized random neural network","type":"article","year":"2017","keywords":"Activation functions,Random neural networks (RNNs),Tsallis statistics,q-Gaussian","volume":"28","id":"f8b4221c-acf1-305e-bfdf-e383a134975e","created":"2019-02-14T18:02:01.633Z","file_attached":false,"profile_id":"74e7d4ea-3dac-3118-aab9-511a5b337e8f","last_modified":"2019-02-14T18:02:01.633Z","read":false,"starred":false,"authored":"true","confirmed":false,"hidden":false,"private_publication":false,"abstract":"© 2012 IEEE. Artificial neural networks (ANNs) are widely used in applications with complex decision boundaries. A large number of activation functions have been proposed in the literature to achieve better representations of the observed data. However, only a few works employ Tsallis statistics, which has successfully been applied to various other fields. This paper presents a random neural network (RNN) with q-Gaussian activation functions [ q -generalized RNN (QRNN)] based on Tsallis statistics. The proposed method employs an additional parameter q (called the entropic index) which reflects the degree of nonextensivity. This approach has the flexibility to model complex decision boundaries of different shapes by varying the entropic index. We conduct numerical experiments to analyze the efficiency of QRNN compared with RNNs and several other classical methods. Statistical tests (Wilcoxon and Friedman) are used to validate our results and show that the QRNN performs significantly better than RNNs with different activation functions. In addition, we find that QRNN outperforms many of the compared classical methods, with the exception of support vector machines, in which case it still exhibits a substantial advantage in terms of implementation simplicity and speed.","bibtype":"article","author":"Stosic, D. and Stosic, D. and Zanchettin, C. and Ludermir, T. and Stosic, B.","doi":"10.1109/TNNLS.2015.2513365","journal":"IEEE Transactions on Neural Networks and Learning Systems","number":"2","bibtex":"@article{\n title = {QRNN: Q-generalized random neural network},\n type = {article},\n year = {2017},\n keywords = {Activation functions,Random neural networks (RNNs),Tsallis statistics,q-Gaussian},\n volume = {28},\n id = {f8b4221c-acf1-305e-bfdf-e383a134975e},\n created = {2019-02-14T18:02:01.633Z},\n file_attached = {false},\n profile_id = {74e7d4ea-3dac-3118-aab9-511a5b337e8f},\n last_modified = {2019-02-14T18:02:01.633Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {false},\n hidden = {false},\n private_publication = {false},\n abstract = {© 2012 IEEE. Artificial neural networks (ANNs) are widely used in applications with complex decision boundaries. A large number of activation functions have been proposed in the literature to achieve better representations of the observed data. However, only a few works employ Tsallis statistics, which has successfully been applied to various other fields. This paper presents a random neural network (RNN) with q-Gaussian activation functions [ q -generalized RNN (QRNN)] based on Tsallis statistics. The proposed method employs an additional parameter q (called the entropic index) which reflects the degree of nonextensivity. 
This approach has the flexibility to model complex decision boundaries of different shapes by varying the entropic index. We conduct numerical experiments to analyze the efficiency of QRNN compared with RNNs and several other classical methods. Statistical tests (Wilcoxon and Friedman) are used to validate our results and show that the QRNN performs significantly better than RNNs with different activation functions. In addition, we find that QRNN outperforms many of the compared classical methods, with the exception of support vector machines, in which case it still exhibits a substantial advantage in terms of implementation simplicity and speed.},\n bibtype = {article},\n author = {Stosic, D. and Stosic, D. and Zanchettin, C. and Ludermir, T. and Stosic, B.},\n doi = {10.1109/TNNLS.2015.2513365},\n journal = {IEEE Transactions on Neural Networks and Learning Systems},\n number = {2}\n}","author_short":["Stosic, D.","Stosic, D.","Zanchettin, C.","Ludermir, T.","Stosic, B."],"biburl":"https://bibbase.org/service/mendeley/74e7d4ea-3dac-3118-aab9-511a5b337e8f","bibbaseid":"stosic-stosic-zanchettin-ludermir-stosic-qrnnqgeneralizedrandomneuralnetwork-2017","role":"author","urls":{},"keyword":["Activation functions","Random neural networks (RNNs)","Tsallis statistics","q-Gaussian"],"metadata":{"authorlinks":{"zanchettin, c":"https://bibbase.org/service/mendeley/74e7d4ea-3dac-3118-aab9-511a5b337e8f","zanchettin, c":"https://zanche.github.io/cv/"}},"downloads":0},"bibtype":"article","creationDate":"2020-09-17T14:33:46.207Z","downloads":0,"keywords":["activation functions","random neural networks (rnns)","tsallis statistics","q-gaussian"],"search_terms":["qrnn","generalized","random","neural","network","stosic","stosic","zanchettin","ludermir","stosic"],"title":"QRNN: Q-generalized random neural network","year":2017,"biburl":"https://bibbase.org/service/mendeley/74e7d4ea-3dac-3118-aab9-511a5b337e8f","dataSources":["fvRdkx56Jpp5ebtSw","XkGKCoQgZDKqXZqdh","ya2CyA73rpZseyrZ8","2252seNhipfTmjEBQ"]}