generated by bibbase.org
Excellent! Next you can create a new website with this list, or embed it in an existing web page by copying & pasting any of the following snippets.

JavaScript (easiest):

<script src="https://bibbase.org/service/mendeley/aba9653c-d139-3f95-aad8-969c487ed2f3/group/baf47cb8-5222-3492-962e-1467155db3dc?jsonp=1"></script>

PHP:

<?php
$contents = file_get_contents("https://bibbase.org/service/mendeley/aba9653c-d139-3f95-aad8-969c487ed2f3/group/baf47cb8-5222-3492-962e-1467155db3dc?jsonp=1");
print_r($contents);
?>

iFrame (not recommended):

<iframe src="https://bibbase.org/service/mendeley/aba9653c-d139-3f95-aad8-969c487ed2f3/group/baf47cb8-5222-3492-962e-1467155db3dc?jsonp=1"></iframe>

For more details see the documentation.
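For scripted setups, the same service URL can be assembled programmatically. The helper below simply mirrors the URL layout visible in the snippets above; the layout is an observation from this page, not a documented BibBase API guarantee.

```python
# Build the BibBase Mendeley service URL used by the snippets above.
# The path layout is taken from the JavaScript/PHP examples on this
# page; treat it as an assumption, not a stable API contract.

def bibbase_mendeley_url(profile_id: str, group_id: str, jsonp: bool = True) -> str:
    """Return the BibBase Mendeley service URL for a profile/group pair."""
    url = f"https://bibbase.org/service/mendeley/{profile_id}/group/{group_id}"
    return url + "?jsonp=1" if jsonp else url

url = bibbase_mendeley_url(
    "aba9653c-d139-3f95-aad8-969c487ed2f3",
    "baf47cb8-5222-3492-962e-1467155db3dc",
)
print(url)
```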

To the site owner:

Action required! Mendeley is changing its API. In order to keep using Mendeley with BibBase past April 14th, you need to:

  1. renew the authorization for BibBase on Mendeley, and
  2. update the BibBase URL in your page the same way you did when you initially set up this page.

2024 (1)
Aalaila, Y.; Bachchar, I.; Raki, H.; Bamansour, S.; Elhamdi, M.; Benghzial, K.; Ortega-Bustamante, M.; Guachi-Guachi, L.; and Peluffo-Ordóñez, D. H. Joint Exploration of Kernel Functions Potential for Data Representation and Classification: A First Step Toward Interactive Interpretable Dimensionality Reduction. SN Computer Science, 5(1): 75. 2024.

@article{aalaila2024joint,
  title   = {Joint Exploration of Kernel Functions Potential for Data Representation and Classification: A First Step Toward Interactive Interpretable Dimensionality Reduction},
  author  = {Aalaila, Yahya and Bachchar, Ismail and Raki, Hind and Bamansour, Sami and Elhamdi, Mouad and Benghzial, Kaoutar and Ortega-Bustamante, MacArthur and Guachi-Guachi, Lorena and Peluffo-Ordóñez, Diego H.},
  journal = {SN Computer Science},
  year    = {2024},
  volume  = {5},
  number  = {1},
  pages   = {75},
  doi     = {10.1007/s42979-023-02405-9},
  url     = {https://doi.org/10.1007/s42979-023-02405-9}
}

Abstract: Dimensionality reduction (DR) approaches are often a crucial step in data analysis tasks, particularly for data visualization purposes. DR-based techniques are essentially designed to retain the inherent structure of high-dimensional data in a lower-dimensional space, leading to reduced computational complexity and improved pattern recognition accuracy. Specifically, Kernel Principal Component Analysis (KPCA) is a widely utilized dimensionality reduction technique due to its capability to effectively handle nonlinear data sets. It offers an easily interpretable formulation from both geometric and functional analysis perspectives. However, Kernel PCA relies on free hyperparameters, which are usually tuned in advance. The relationship between these hyperparameters and the structure of the embedded space remains undisclosed. This work presents preliminary steps to explore said relationship by jointly evaluating the data classification and representation abilities. To do so, an interactive visualization framework is introduced. This study highlights the importance of creating interactive interfaces that enable interpretable dimensionality reduction approaches for data visualization and analysis.
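The KPCA formulation this abstract refers to can be sketched in a few lines of NumPy. This is a generic textbook version (RBF kernel, feature-space centering, eigendecomposition), not the authors' interactive framework; the `gamma` parameter stands in for the free hyperparameters whose effect on the embedding the paper studies.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto the leading principal directions in RBF feature space."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # reorder to descending
    # Embedding: eigenvectors scaled by the square roots of eigenvalues.
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)   # try varying gamma
```

Varying `gamma` and re-plotting `Z` is the simplest way to see the hyperparameter/embedding relationship the paper investigates.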
2023 (5)
Benghzial, K.; Raki, H.; Bamansour, S.; Elhamdi, M.; Aalaila, Y.; and Peluffo-Ordóñez, D. H. GHG Global Emission Prediction of Synthetic N Fertilizers Using Expectile Regression Techniques. Atmosphere, 14(2). 2023.

@article{atmos14020283,
  title   = {GHG Global Emission Prediction of Synthetic N Fertilizers Using Expectile Regression Techniques},
  author  = {Benghzial, Kaoutar and Raki, Hind and Bamansour, Sami and Elhamdi, Mouad and Aalaila, Yahya and Peluffo-Ordóñez, Diego H.},
  journal = {Atmosphere},
  year    = {2023},
  volume  = {14},
  number  = {2},
  doi     = {10.3390/atmos14020283},
  url     = {https://www.mdpi.com/2073-4433/14/2/283}
}

Abstract: Agriculture accounts for a large percentage of nitrous oxide (N2O) emissions, mainly due to the misapplication of nitrogen-based fertilizers, leading to an increase in the greenhouse gas (GHG) footprint. These emissions are of a direct nature, released straight into the atmosphere through nitrification and denitrification, or of an indirect nature, mainly through nitrate leaching, runoff, and N2O volatilization processes. N2O emissions are largely ascribed to the agricultural sector, which represents a threat to sustainability and food production, subsequent to the radical contribution to climate change. In this connection, it is crucial to unveil the relationship between synthetic N fertilizer global use and N2O emissions. To this end, we worked on a dataset drawn from a recent study, which estimates direct and indirect N2O emissions according to each country, by the Intergovernmental Panel on Climate Change (IPCC) guidelines. Machine learning tools are considered great explainable techniques when dealing with air quality problems. Hence, our work focuses on expectile regression (ER) based-approaches to predict N2O emissions based on N fertilizer use. In contrast to classical linear regression (LR), this method allows for heteroscedasticity and omits a parametric specification of the underlying distribution. ER provides a complete picture of the target variable’s distribution, especially when the tails are of interest, or in dealing with heavy-tailed distributions. In this work, we applied expectile regression and the kernel expectile regression estimator (KERE) to predict direct and indirect N2O emissions. The results outline both the flexibility and competitiveness of ER-based techniques in regard to the state-of-the-art regression approaches.
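As a rough illustration of the expectile idea (not the authors' code): the tau-expectile minimizes an asymmetrically weighted squared loss and can be computed by an iteratively reweighted mean. The sketch below follows the standard textbook definition.

```python
import numpy as np

def expectile(y, tau=0.5, tol=1e-10, max_iter=1000):
    """tau-expectile of a sample: the minimizer m of the asymmetric squared
    loss sum_i w_i * (y_i - m)^2 with w_i = tau if y_i > m else (1 - tau).
    Computed by fixed-point iteration on the weighted mean."""
    y = np.asarray(y, dtype=float)
    m = y.mean()
    for _ in range(max_iter):
        w = np.where(y > m, tau, 1.0 - tau)
        m_new = np.average(y, weights=w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

data = np.array([1.0, 2.0, 3.0, 10.0])
mid = expectile(data, tau=0.5)   # the 0.5-expectile equals the mean
hi = expectile(data, tau=0.9)    # upper-tail expectile, pulled toward 10
```

At tau = 0.5 all weights coincide and the expectile reduces to the ordinary mean, which is why expectile regression generalizes least squares the way quantile regression generalizes the median.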
Ortega-Bustamante, M.; Hasperué, W.; Peluffo-Ordóñez, D. H.; Imbaquingo, D.; Raki, H.; Aalaila, Y.; Elhamdi, M.; and Guachi-Guachi, L. Interactive Information Visualization Models: A Systematic Literature Review. In Computational Science and Its Applications -- ICCSA 2023, pages 661-676, 2023. Springer Nature Switzerland.

@inproceedings{10.1007/978-3-031-36805-9_43,
  title     = {Interactive Information Visualization Models: A Systematic Literature Review},
  author    = {Ortega-Bustamante, MacArthur and Hasperué, Waldo and Peluffo-Ordóñez, Diego H. and Imbaquingo, Daisy and Raki, Hind and Aalaila, Yahya and Elhamdi, Mouad and Guachi-Guachi, Lorena},
  editor    = {Gervasi, Osvaldo and Murgante, Beniamino and Taniar, David and Apduhan, Bernady O. and Braga, Ana Cristina and Garau, Chiara and Stratigea, Anastasia},
  booktitle = {Computational Science and Its Applications -- ICCSA 2023},
  year      = {2023},
  pages     = {661-676},
  publisher = {Springer Nature Switzerland},
  address   = {Cham},
  url       = {https://link.springer.com/chapter/10.1007/978-3-031-36805-9_43}
}

Abstract: Interactive information visualization models aim to make dimensionality reduction (DR) accessible to non-expert users through interactive visualization frameworks. This systematic literature review explores the role of DR and information visualization (IV) techniques in interactive models (IM). We search relevant bibliographic databases, including IEEE Xplore, Springer Link, and Web of Science, for publications from the last five years. We identify 1448 scientific articles, which we then narrow down to 52 after screening and selection. This study addresses three research questions, revealing that the number of articles focused on interactive DR-oriented models has been in the minority in the last five years. However, related topics such as IV techniques or DR methods have increased. Trends are identified in the development of interactive models, as well as in IV techniques and DR methods. For example, researchers are increasingly proposing new DR methods or modifying existing ones rather than relying solely on established techniques. Furthermore, scatter plots have emerged as the predominant option for IV in interactive models, with limited options for customizing the display of raw data and details in application windows. Overall, this review provides insights into the current state of interactive IV models for DR and highlights areas for further research.
Argotty-Erazo, M.; Blázquez-Zaballos, A.; Argoty-Eraso, C. A.; Lorente-Leyva, L. L.; Sánchez-Pozo, N. N.; and Peluffo-Ordóñez, D. H. A Novel Linear-Model-Based Methodology for Predicting the Directional Movement of the Euro-Dollar Exchange Rate. IEEE Access, 11: 67249-67284. 2023.

@article{10147811,
  title   = {A Novel Linear-Model-Based Methodology for Predicting the Directional Movement of the Euro-Dollar Exchange Rate},
  author  = {Argotty-Erazo, Mauricio and Blázquez-Zaballos, Antonio and Argoty-Eraso, Carlos A. and Lorente-Leyva, Leandro L. and Sánchez-Pozo, Nadia N. and Peluffo-Ordóñez, Diego H.},
  journal = {IEEE Access},
  year    = {2023},
  volume  = {11},
  pages   = {67249-67284},
  doi     = {10.1109/ACCESS.2023.3285082},
  url     = {https://ieeexplore.ieee.org/document/10147811}
}
Izurieta, R. C.; Sánchez-Pozo, N. N.; Mejía-Ordóñez, J. S.; González-Vergara, J.; Sierra, L. M.; and Peluffo-Ordóñez, D. H. Analysis of Oversampling Techniques and Machine Learning Models on Unbalanced Spirometry Data. In Information Technology and Systems, pages 497-506, 2023. Springer International Publishing.

@inproceedings{10.1007/978-3-031-33261-6_42,
  title     = {Analysis of Oversampling Techniques and Machine Learning Models on Unbalanced Spirometry Data},
  author    = {Izurieta, Roberto Castro and Sánchez-Pozo, Nadia N. and Mejía-Ordóñez, Juan S. and González-Vergara, Juan and Sierra, Luz Marina and Peluffo-Ordóñez, Diego H.},
  editor    = {Rocha, Álvaro and Ferrás, Carlos and Ibarra, Waldo},
  booktitle = {Information Technology and Systems},
  year      = {2023},
  pages     = {497-506},
  publisher = {Springer International Publishing},
  address   = {Cham},
  url       = {https://rd.springer.com/chapter/10.1007/978-3-031-33261-6_42}
}

Abstract: The use of artificial intelligence in the quest to contribute to human longevity is becoming increasingly common in medical settings, one of these being spirometry. Given the different factors that can deteriorate the pulmonary status, several works aim to establish ways to alert future patients of their potential pulmonary complications. Thus, we carry out a lung age prediction task from spirometry data extracted using a previously developed mobile application. Regarding the imbalanced classes, SMOTE, ADASYN, and Random Oversampling algorithms were compared with different classifier models. The SMOTE and Quadratic Discriminant Analysis combination achieves a 99.12% accuracy, 99.09% specificity, and 99.91% sensitivity. Additionally, we performed an exploratory analysis of deep learning models, demonstrating that multilayer perceptron models, along with feature fusion techniques, achieve higher performances than classical models such as K-Nearest Neighbors or Decision Trees.
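The SMOTE step the abstract mentions can be illustrated with a minimal NumPy sketch of the core idea: interpolating between minority samples and their nearest minority neighbors. Real experiments would use a maintained implementation (e.g. imbalanced-learn); this toy version only shows the mechanism.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating each
    chosen sample toward one of its k nearest minority neighbors."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)               # exclude self-distances
    nn = np.argsort(d2, axis=1)[:, :k]         # k nearest neighbors per sample
    base = rng.integers(0, n, size=n_new)      # pick base samples
    pick = nn[base, rng.integers(0, min(k, n - 1), size=n_new)]
    lam = rng.random((n_new, 1))               # interpolation factors in [0, 1)
    return X_min[base] + lam * (X_min[pick] - X_min[base])

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_synth = smote(X_minority, n_new=10, k=2, rng=0)
```

Because every synthetic point lies on a segment between two minority samples, the new points stay inside the minority region rather than duplicating existing rows as plain random oversampling does.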
Nyasulu, C.; Diattara, A.; Traore, A.; Ba, C.; Diedhiou, P. M.; Sy, Y.; Raki, H.; and Peluffo-Ordóñez, D. H. A comparative study of Machine Learning-based classification of Tomato fungal diseases: Application of GLCM texture features. Heliyon. 2023.

@article{nyasulu2023comparative,
  title     = {A comparative study of Machine Learning-based classification of Tomato fungal diseases: Application of GLCM texture features},
  author    = {Nyasulu, Chimango and Diattara, Awa and Traore, Assitan and Ba, Cheikh and Diedhiou, Papa Madiallacké and Sy, Yakhya and Raki, Hind and Peluffo-Ordóñez, Diego Hernán},
  journal   = {Heliyon},
  year      = {2023},
  publisher = {Elsevier},
  url       = {https://www.sciencedirect.com/science/article/pii/S2405844023089053}
}

Abstract: Globally, agriculture remains an important source of food and economic development. Due to various plant diseases, farmers continue to suffer huge yield losses in both quality and quantity. In this study, we explored the potential of using Artificial Neural Networks, K-Nearest Neighbors, Random Forest, and Support Vector Machine to classify tomato fungal leaf diseases: Alternaria, Curvularia, Helminthosporium, and Lasiodiplodi based on Gray Level Co-occurrence Matrix texture features. Small differences between symptoms of these diseases make it difficult to use the naked eye to obtain better results in detecting and distinguishing these diseases. The Artificial Neural Network outperformed other classifiers with an overall accuracy of 94% and average scores of 93.6% for Precision, 93.8% for Recall, and 93.8% for F1-score. Generally, the models confused samples originally belonging to Helminthosporium with Curvularia. The extracted texture features show great potential to classify the different tomato leaf fungal diseases. The results of this study show that texture characteristics of the Gray Level Co-occurrence Matrix play a critical role in the establishment of tomato leaf disease classification systems and can facilitate the implementation of preventive measures by farmers, resulting in enhanced yield quality and quantity.
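The Gray Level Co-occurrence Matrix features the paper relies on can be shown with a small, self-contained NumPy sketch. The example image and the definition of the contrast feature follow the standard textbook formulation; this is not the authors' pipeline.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix for offset (dy, dx): counts of
    gray-level pairs (i, j) between each pixel and its offset neighbor."""
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=int)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

def contrast(M):
    """GLCM contrast feature: sum over (i, j) of (i - j)^2 * p(i, j)."""
    P = M / M.sum()
    i, j = np.indices(M.shape)
    return ((i - j) ** 2 * P).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
M = glcm(img, dx=1, dy=0, levels=4)   # horizontal neighbor pairs
```

Scalar features such as contrast, energy, and homogeneity computed from `M` form the texture descriptors fed to the classifiers the paper compares.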
2022 (1)
Landívar, J.; Ormaza, C.; Asanza, V.; Ojeda, V.; Avilés, J. C.; and Peluffo-Ordóñez, D. H. Trilateration-based Indoor Location using Supervised Learning Algorithms. In 2022 International Conference on Applied Electronics (AE), pages 1-6, 2022.

@inproceedings{9920073,
  title     = {Trilateration-based Indoor Location using Supervised Learning Algorithms},
  author    = {Landívar, Jerry and Ormaza, Carolina and Asanza, Víctor and Ojeda, Verónica and Avilés, Juan Carlos and Peluffo-Ordóñez, Diego H.},
  booktitle = {2022 International Conference on Applied Electronics (AE)},
  year      = {2022},
  pages     = {1-6},
  doi       = {10.1109/AE54730.2022.9920073},
  url       = {https://ieeexplore.ieee.org/document/9920073}
}

Abstract: The indoor positioning system (IPS) has a wide range of applications, due to the advantages it has over Global Positioning Systems (GPS) in indoor environments. Interest is reinforced by the biosecurity measures established by the World Health Organization (WHO), which prescribe social distancing and are stricter in indoor environments. This work proposes the design of a positioning system based on trilateration. The main objective is to predict the position on both the 'x' and 'y' axes in an area of 8 square meters. For this purpose, 3 Access Points (AP) and a Mobile Device (DM), which works as a raster, have been used. The Received Signal Strength Indication (RSSI) values measured at each AP are the variables used in regression algorithms that predict the x and y position. In this work, 24 regression algorithms have been evaluated, of which the lowest errors obtained are 70.322 cm and 30.1508 cm for the x and y axes, respectively.
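The trilateration idea behind the system can be sketched generically as a linearized least-squares solve of the circle equations; this is standard geometry, not the authors' RSSI-based regression models.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2-D position from anchor coordinates and measured distances.

    Subtracting the first circle equation |p - a_1|^2 = d_1^2 from the others
    cancels the quadratic term |p|^2, leaving the linear system
    2 (a_i - a_1) . p = d_1^2 - d_i^2 + |a_i|^2 - |a_1|^2, solved by least squares."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    a1, d1 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - a1)
    b = d1**2 - d[1:]**2 + (anchors[1:] ** 2).sum(1) - (a1**2).sum()
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three access points at known positions; exact distances to a known point.
aps = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true = np.array([1.0, 2.0])
d = [np.hypot(*(true - np.array(a))) for a in aps]
est = trilaterate(aps, d)
```

With noisy RSSI-derived distances the least-squares form degrades gracefully, which is why the paper instead learns the position directly from RSSI by regression.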
2021 (8)
Fernández, Y.; Marrufo, I.; Paez, M. A.; Umaquinga-Criollo, A. C.; Rosero, P. D.; and Peluffo-Ordóñez, H. D. Overview on kernels for least-squares support-vector-machine-based clustering: explaining kernel spectral clustering. REVISTA INVESTIGACION OPERACIONAL. 2021.

@article{Fernandez2021,
  title    = {Overview on kernels for least-squares support-vector-machine-based clustering: explaining kernel spectral clustering},
  author   = {Fernández, Y. and Marrufo, I. and Paez, M. A. and Umaquinga-Criollo, A. C. and Rosero, P. D. and Peluffo-Ordóñez, H. D.},
  journal  = {REVISTA INVESTIGACION OPERACIONAL},
  year     = {2021},
  keywords = {clustering, kernel principal component, kernel spectral clustering ksc, support vector machine, svm},
  url      = {https://rev-inv-ope.pantheonsorbonne.fr/sites/default/files/inline-files/42121-10.pdf}
}

Abstract: This letter presents an overview on some remarkable basics on kernels as well as the formulation of a clustering approach based on least-squares support vector machines. Specifically, the method known as kernel spectral clustering (KSC) is of interest. We explore the links between KSC and a weighted version of kernel principal component analysis (WKPCA). Also, we study the solution of the KSC problem by means of a primal-dual scheme. All mathematical developments are carried out following an entirely matrix formulation. As a result, in addition to the elegant KSC formulation, important insights and hints about the use and design of kernel-based approaches for clustering are provided.
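A generic two-cluster spectral sketch (RBF affinity, normalized Laplacian, sign of the second eigenvector) illustrates the kind of kernel-based clustering the letter formalizes. KSC proper uses a weighted-KPCA primal-dual formulation, which this sketch does not reproduce; it only shows the shared kernel-plus-eigendecomposition mechanics.

```python
import numpy as np

def spectral_bipartition(X, gamma=1.0):
    """Two-way spectral clustering: build an RBF affinity matrix, form the
    symmetrically normalized Laplacian, and split on the sign of the
    eigenvector for the second-smallest eigenvalue (the Fiedler vector)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                 # RBF affinities
    d = K.sum(1)                            # node degrees
    L = np.diag(d) - K                      # unnormalized graph Laplacian
    Dh = np.diag(1.0 / np.sqrt(d))
    vals, vecs = np.linalg.eigh(Dh @ L @ Dh)
    fiedler = vecs[:, 1]                    # second-smallest eigenvector
    return (fiedler > 0).astype(int)

rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.1, size=(15, 2))      # cluster around the origin
B = rng.normal(3.0, 0.1, size=(15, 2))      # cluster around (3, 3)
labels = spectral_bipartition(np.vstack([A, B]), gamma=0.5)
```

The kernel choice and its parameters fully determine the affinity graph, which is exactly the design question the letter's overview of kernels addresses.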
Fernández-Fernández, Y.; Peluffo-Ordóñez, D. H.; Umaquinga-Criollo, A. C.; Lorente-Leyva, L. L.; and Cabrera-Alvarez, E. N. A Brief Review on Instance Selection Based on Condensed Nearest Neighbors for Data Classification Tasks. In International Conference on Communication, Computing and Electronics Systems, pages 313-324, 2021. Springer Singapore.

@inproceedings{10.1007/978-981-33-4909-4_23,
  title     = {A Brief Review on Instance Selection Based on Condensed Nearest Neighbors for Data Classification Tasks},
  author    = {Fernández-Fernández, Yasmany and Peluffo-Ordóñez, Diego H. and Umaquinga-Criollo, Ana C. and Lorente-Leyva, Leandro L. and Cabrera-Alvarez, Elia N.},
  editor    = {Bindhu, V. and Tavares, João Manuel R. S. and Boulogeorgos, Alexandros-Apostolos A. and Vuppalapati, Chandrasekar},
  booktitle = {International Conference on Communication, Computing and Electronics Systems},
  year      = {2021},
  pages     = {313-324},
  publisher = {Springer Singapore},
  address   = {Singapore},
  url       = {https://link.springer.com/chapter/10.1007/978-981-33-4909-4_23}
}

Abstract: The condensed nearest neighbor (CNN) classifier is one of the techniques used and known to perform recognition tasks. It has also proven to be one of the most interesting algorithms in the field of data mining despite its simplicity. However, CNN suffers from several drawbacks, such as high storage requirements and low noise tolerance. One of the characteristics of CNN is that it focuses on prototype selection, which consists of reducing the set of training data. CNN seeks to reduce the information in such a way that the reduced set can still represent large amounts of data and support decision-making on them. This paper mentions some of the most recent contributions to CNN-based unsupervised algorithms in a review that builds on the mathematical principles of condensed methods.
\n
\n\n\n
\n The condensed nearest neighbor (CNN) classifier is a well-known technique for recognition tasks. Despite its simplicity, it has also proven to be one of the most interesting algorithms in the field of data mining. However, CNN suffers from several drawbacks, such as high storage requirements and low noise tolerance. One of the characteristics of CNN is that it focuses on the selection of prototypes, which consists of reducing the set of training data. One of the goals of CNN is to reduce the information in such a way that the reduced set can represent large amounts of data and support decision-making on them. This paper mentions some of the most recent contributions to CNN-based unsupervised algorithms in a review that builds on the mathematical principles of condensed methods.\n
\n\n\n
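As a rough illustration of the condensation idea summarized in the abstract above, here is a minimal NumPy sketch of Hart's classic condensed nearest neighbor rule (our own illustrative code under stated assumptions, not the reviewed paper's implementation; all function and variable names are ours):

```python
import numpy as np

def condensed_nearest_neighbors(X, y, rng_seed=0):
    """Hart's CNN rule: keep only the points that the current prototype
    store misclassifies under 1-NN, shrinking the training set."""
    rng = np.random.default_rng(rng_seed)
    order = rng.permutation(len(X))
    store = [order[0]]                      # seed the store with one point
    changed = True
    while changed:                          # repeat until a full clean pass
        changed = False
        for i in order:
            if i in store:
                continue
            d = np.linalg.norm(X[store] - X[i], axis=1)   # 1-NN vs. store
            if y[store[int(np.argmin(d))]] != y[i]:
                store.append(i)             # keep each misclassified point
                changed = True
    return np.array(store)

# Two well-separated Gaussian classes: CNN should retain few prototypes
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(5, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
idx = condensed_nearest_neighbors(X, y)
print(len(idx))  # typically only a handful of prototypes out of 100
```

By construction, the returned store classifies every training point correctly under 1-NN, which is the guarantee the condensation family builds on.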
\n\n\n
\n \n\n \n \n \n \n \n \n Integrating Information Visualization and Dimensionality Reduction: A pathway to Bridge the Gap between Natural and Artificial Intelligence.\n \n \n \n \n\n\n \n Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n TecnoLógicas, 24. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"IntegratingPaper\n  \n \n \n \"IntegratingWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Integrating Information Visualization and Dimensionality Reduction: A pathway to Bridge the Gap between Natural and Artificial Intelligence},\n type = {article},\n year = {2021},\n keywords = {Dimensionality Reduction,Information Visualization},\n volume = {24},\n websites = {https://revistas.itm.edu.co/index.php/tecnologicas/article/view/2108},\n id = {dead0a6b-740d-31f4-b80b-6cd7d5f05041},\n created = {2022-02-02T02:35:13.545Z},\n file_attached = {true},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:21.559Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n private_publication = {false},\n abstract = {By importing some natural abilities from human thinking into the design of computerized decision support systems, a cross-cutting trend of intelligent systems has emerged, namely, the synergetic integration between natural and artificial intelligence [1]. While natural intelligence provides creative, parallel, and holistic thinking, its artificial counterpart is logical, accurate, able to perform complex and extensive calculations, and tireless. In the light of such integration, two concepts are important: controllability and interpretability. The former is defined as the ability of computerized systems to receive feedback and follow users’ instructions, while the latter refers to human-machine communication. A suitable alternative to simultaneously involve these two concepts—and then bridging the gap between natural and artificial intelligence—is bringing together the fields of dimensionality reduction (DimRed) and information visualization (InfoVis).},\n bibtype = {article},\n author = {Peluffo-ordóñez, Diego H},\n journal = {TecnoLógicas}\n}
\n
\n\n\n
\n By importing some natural abilities from human thinking into the design of computerized decision support systems, a cross-cutting trend of intelligent systems has emerged, namely, the synergetic integration between natural and artificial intelligence [1]. While natural intelligence provides creative, parallel, and holistic thinking, its artificial counterpart is logical, accurate, able to perform complex and extensive calculations, and tireless. In the light of such integration, two concepts are important: controllability and interpretability. The former is defined as the ability of computerized systems to receive feedback and follow users’ instructions, while the latter refers to human-machine communication. A suitable alternative to simultaneously involve these two concepts—and then bridging the gap between natural and artificial intelligence—is bringing together the fields of dimensionality reduction (DimRed) and information visualization (InfoVis).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Information Quality Assessment in Fusion Systems.\n \n \n \n \n\n\n \n Becerra, M., A.; Tobón, C.; Castro-Ospina, A., E.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n DATA, 1-30. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"InformationPaper\n  \n \n \n \"InformationWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Information Quality Assessment in Fusion Systems},\n type = {article},\n year = {2021},\n keywords = {completeness,information quality,multi-source fusion,relevance,reliability,uncertainty},\n pages = {1-30},\n websites = {https://www.mdpi.com/2306-5729/6/6/60},\n id = {29dab874-e5ca-397a-8765-f5ddfa42c2a1},\n created = {2022-02-02T02:35:13.827Z},\n file_attached = {true},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:21.856Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n private_publication = {false},\n abstract = {This paper provides a comprehensive description of the current literature on data fusion, with an emphasis on Information Quality (IQ) and performance evaluation. This literature review highlights recent studies that reveal existing gaps, the need to find a synergy between data fusion and IQ, several research issues, and the challenges and pitfalls in this field. First, the main models, frameworks, architectures, algorithms, solutions, problems, and requirements are analyzed. Second, a general data fusion engineering process is presented to show how complex it is to design a framework for a specific application. Third, an IQ approach, as well as the different methodologies and frameworks used to assess IQ in information systems are addressed; in addition, data fusion systems are presented along with their related criteria. Furthermore, information on the context in data fusion systems and its IQ assessment are discussed. Subsequently, the issue of data fusion systems’ performance is reviewed. 
Finally, some key aspects and concluding remarks are outlined, and some future lines of work are gathered.},\n bibtype = {article},\n author = {Becerra, M. A. and Tobón, C. and Castro-Ospina, A. E. and Peluffo-Ordóñez, D. H.},\n journal = {DATA}\n}
\n
\n\n\n
\n This paper provides a comprehensive description of the current literature on data fusion, with an emphasis on Information Quality (IQ) and performance evaluation. This literature review highlights recent studies that reveal existing gaps, the need to find a synergy between data fusion and IQ, several research issues, and the challenges and pitfalls in this field. First, the main models, frameworks, architectures, algorithms, solutions, problems, and requirements are analyzed. Second, a general data fusion engineering process is presented to show how complex it is to design a framework for a specific application. Third, an IQ approach, as well as the different methodologies and frameworks used to assess IQ in information systems are addressed; in addition, data fusion systems are presented along with their related criteria. Furthermore, information on the context in data fusion systems and its IQ assessment are discussed. Subsequently, the issue of data fusion systems’ performance is reviewed. Finally, some key aspects and concluding remarks are outlined, and some future lines of work are gathered.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Information fusion and information quality assessment for environmental forecasting.\n \n \n \n \n\n\n \n Becerra, M., A.; Uribe, Y.; Peluffo-Ordóñez, D., H.; Álvarez-Uribe, K.; and Tobón, C.\n\n\n \n\n\n\n Urban Climate, 39(August). 2021.\n \n\n\n\n
\n\n\n\n \n \n \"InformationWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{\n title = {Information fusion and information quality assessment for environmental forecasting},\n type = {article},\n year = {2021},\n volume = {39},\n websites = {https://www.sciencedirect.com/science/article/pii/S2212095521001905},\n id = {65d3a99a-fbc0-3327-98b4-620d5a23fa8a},\n created = {2022-02-02T02:35:14.129Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:14.129Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n private_publication = {false},\n abstract = {Air pollution is a major environmental threat to human health. Therefore, multiple systems have been developed for early prediction of air pollution levels in large cities. However, deterministic models produce uncertainties due to the complexity of the physical and chemical processes of individual systems and transport. In turn, statistical and machine learning techniques require a large amount of historical data to predict the behavior of a variable. In this paper, we propose a data fusion model to spatially and temporally predict air quality and assess its situation and risk for public health. Our model is based on the Joint Directors of Laboratories (JDL) model and focused on Information Quality (IQ), which allows us to fine tune hyper-parameters in different processes and trace information from raw data to knowledge. Expert systems use the information assessment to select and process data, information, and knowledge. The functionality of our model is tested using an environmental database of the Air Quality Monitoring Network of Área Metropolitana del Valle de Aburrá (AMVA in Spanish) in Colombia. Different levels of noise are added to the data to analyze the effects of information quality on the systems' performance throughout the process. 
Finally, our system is compared with two conventional machine learning-based models: Deep Learning and Support Vector Regression (SVR). The results show that our proposed model exhibits better performance, in terms of air quality forecasting, than conventional models. Furthermore, its capability as a mechanism to support decision making is clearly demonstrated.},\n bibtype = {article},\n author = {Becerra, M. A. and Uribe, Y. and Peluffo-Ordóñez, D. H. and Álvarez-Uribe, Karla and Tobón, C.},\n doi = {10.1016/j.uclim.2021.100960},\n journal = {Urban Climate},\n number = {August}\n}
\n
\n\n\n
\n Air pollution is a major environmental threat to human health. Therefore, multiple systems have been developed for early prediction of air pollution levels in large cities. However, deterministic models produce uncertainties due to the complexity of the physical and chemical processes of individual systems and transport. In turn, statistical and machine learning techniques require a large amount of historical data to predict the behavior of a variable. In this paper, we propose a data fusion model to spatially and temporally predict air quality and assess its situation and risk for public health. Our model is based on the Joint Directors of Laboratories (JDL) model and focused on Information Quality (IQ), which allows us to fine tune hyper-parameters in different processes and trace information from raw data to knowledge. Expert systems use the information assessment to select and process data, information, and knowledge. The functionality of our model is tested using an environmental database of the Air Quality Monitoring Network of Área Metropolitana del Valle de Aburrá (AMVA in Spanish) in Colombia. Different levels of noise are added to the data to analyze the effects of information quality on the systems' performance throughout the process. Finally, our system is compared with two conventional machine learning-based models: Deep Learning and Support Vector Regression (SVR). The results show that our proposed model exhibits better performance, in terms of air quality forecasting, than conventional models. Furthermore, its capability as a mechanism to support decision making is clearly demonstrated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalized Spectral Dimensionality Reduction Based on Kernel Representations and Principal Component Analysis.\n \n \n \n \n\n\n \n Ortega-Bustamante, M., C.; Hasperué, W.; Peluffo-Ordóñez, D., H.; González-Vergara, J.; Marín-Gaviño, J.; and Velez-Falconi, M.\n\n\n \n\n\n\n In Gervasi, O.; Murgante, B.; Misra, S.; Garau, C.; Blečić, I.; Taniar, D.; Apduhan, B., O.; Rocha, A., M., A., C.; Tarantino, E.; and Torre, C., M., editor(s), Computational Science and Its Applications -- ICCSA 2021, pages 512-523, 2021. Springer International Publishing\n \n\n\n\n
\n\n\n\n \n \n \"GeneralizedWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{\n title = {Generalized Spectral Dimensionality Reduction Based on Kernel Representations and Principal Component Analysis},\n type = {inproceedings},\n year = {2021},\n pages = {512-523},\n websites = {https://link.springer.com/chapter/10.1007%2F978-3-030-86973-1_36},\n publisher = {Springer International Publishing},\n city = {Cham},\n id = {41ad69e6-dc76-30b6-a88e-b53fe71f95f7},\n created = {2022-02-02T02:35:14.451Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:14.451Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {10.1007/978-3-030-86973-1_36},\n source_type = {inproceedings},\n private_publication = {false},\n abstract = {Very often, multivariate data analysis problems require dimensionality reduction (DR) stages to either improve analysis performance or represent the data in an intelligible fashion. Traditionally, DR techniques are developed under different frameworks and settings, which makes their comparison a non-trivial task. In this sense, generalized DR approaches are of great interest as they enable both to power and compare the DR techniques in a proper and fair manner. This work introduces a generalized spectral dimensionality reduction (GSDR) approach able to represent DR spectral techniques and enhance their representation ability. To do so, GSDR exploits the use of kernel-based representations as an initial nonlinear transformation to obtain a new space. Then, such a new space is used as an input for a feature extraction process based on principal component analysis. As remarkable experimental results, GSDR is shown to outperform the conventional implementation of well-known spectral DR techniques (namely, classical multidimensional scaling and Laplacian eigenmaps) in terms of the scaled version of the average agreement rate. 
Additionally, relevant insights and theoretical developments to understand the effect of data structure preservation at local and global levels are provided.},\n bibtype = {inproceedings},\n author = {Ortega-Bustamante, MacArthur C and Hasperué, Waldo and Peluffo-Ordóñez, Diego H and González-Vergara, Juan and Marín-Gaviño, Josué and Velez-Falconi, Martín},\n editor = {Gervasi, Osvaldo and Murgante, Beniamino and Misra, Sanjay and Garau, Chiara and Blečić, Ivan and Taniar, David and Apduhan, Bernady O and Rocha, Ana Maria A C and Tarantino, Eufemia and Torre, Carmelo Maria},\n booktitle = {Computational Science and Its Applications -- ICCSA 2021}\n}
\n
\n\n\n
\n Very often, multivariate data analysis problems require dimensionality reduction (DR) stages to either improve analysis performance or represent the data in an intelligible fashion. Traditionally, DR techniques are developed under different frameworks and settings, which makes their comparison a non-trivial task. In this sense, generalized DR approaches are of great interest as they enable both to power and compare the DR techniques in a proper and fair manner. This work introduces a generalized spectral dimensionality reduction (GSDR) approach able to represent DR spectral techniques and enhance their representation ability. To do so, GSDR exploits the use of kernel-based representations as an initial nonlinear transformation to obtain a new space. Then, such a new space is used as an input for a feature extraction process based on principal component analysis. As remarkable experimental results, GSDR is shown to outperform the conventional implementation of well-known spectral DR techniques (namely, classical multidimensional scaling and Laplacian eigenmaps) in terms of the scaled version of the average agreement rate. Additionally, relevant insights and theoretical developments to understand the effect of data structure preservation at local and global levels are provided.\n
\n\n\n
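The two-stage pipeline described in the abstract above (a kernel representation used as the input to a PCA-based feature extraction) can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions (RBF kernel, function names, and scaling are ours), not the paper's GSDR implementation:

```python
import numpy as np

def kernel_pca_embedding(X, n_components=2, gamma=1.0):
    """Kernel-then-PCA sketch: build an RBF kernel matrix, double-center
    it, and keep the leading eigenvectors as low-dimensional coordinates."""
    sq = np.sum(X**2, axis=1)
    # RBF (Gaussian) kernel via the squared-distance expansion
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    Kc = H @ K @ H                          # kernel centered in feature space
    w, V = np.linalg.eigh(Kc)               # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:n_components]
    # scale eigenvectors by sqrt(eigenvalue) to get PCA-style coordinates
    return V[:, top] * np.sqrt(np.abs(w[top]))

X = np.random.default_rng(0).normal(size=(30, 5))
Z = kernel_pca_embedding(X, n_components=2)
print(Z.shape)  # (30, 2)
```

Swapping the RBF kernel for other kernel functions is what lets a single scheme like this subsume several spectral DR techniques.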
\n\n\n
\n \n\n \n \n \n \n \n \n Algorithms Air Quality Estimation: A Comparative Study of Stochastic and Heuristic Predictive Models.\n \n \n \n \n\n\n \n Sánchez-Pozo, N., N.; Trilles-Oliver, S.; Solé-Ribalta, A.; Lorente-Leyva, L., L.; Mayorca-Torres, D.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n In Sanjurjo González, H.; Pastor López, I.; García Bringas, P.; Quintián, H.; and Corchado, E., editor(s), Hybrid Artificial Intelligent Systems, pages 293-304, 2021. Springer International Publishing\n \n\n\n\n
\n\n\n\n \n \n \"AlgorithmsWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{\n title = {Algorithms Air Quality Estimation: A Comparative Study of Stochastic and Heuristic Predictive Models},\n type = {inproceedings},\n year = {2021},\n pages = {293-304},\n websites = {https://link.springer.com/chapter/10.1007/978-3-030-86271-8_25},\n publisher = {Springer International Publishing},\n city = {Cham},\n id = {6509f597-6f6e-378a-b3c3-3fbda9e519ff},\n created = {2022-02-02T02:35:14.742Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:14.742Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {10.1007/978-3-030-86271-8_25},\n source_type = {inproceedings},\n private_publication = {false},\n abstract = {This paper presents a comparative analysis of predictive models applied to air quality estimation. Currently, among other global issues, there is a high concern about air pollution, for this reason, there are several air quality indicators, with carbon monoxide (CO), sulfur dioxide (SO2), nitrogen dioxide (NO2) and ozone (O3) being the main ones. When the concentration level of an indicator exceeds an established air quality safety threshold, it is considered harmful to human health, therefore, in cities like London, there are monitoring systems for air pollutants. This study aims to compare the efficiency of stochastic and heuristic predictive models for forecasting ozone (O3) concentration to estimate London's air quality by analyzing an open dataset retrieved from the London Datastore portal. Models based on data analysis have been widely used in air quality forecasting. This paper develops four predictive models (autoregressive integrated moving average - ARIMA, support vector regression - SVR, neural networks (specifically, long-short term memory - LSTM) and Facebook Prophet). 
Experimentally, ARIMA models and LSTM are proved to reach the highest accuracy in predicting the concentration of air pollutants among the considered models. As a result, the comparative analysis of the loss function (root-mean-square error) revealed that ARIMA and LSTM are the most suitable, accomplishing a low error rate of 0.18 and 0.20, respectively.},\n bibtype = {inproceedings},\n author = {Sánchez-Pozo, Nadia N and Trilles-Oliver, Sergi and Solé-Ribalta, Albert and Lorente-Leyva, Leandro L and Mayorca-Torres, Dagoberto and Peluffo-Ordóñez, Diego H},\n editor = {Sanjurjo González, Hugo and Pastor López, Iker and García Bringas, Pablo and Quintián, Héctor and Corchado, Emilio},\n booktitle = {Hybrid Artificial Intelligent Systems}\n}
\n
\n\n\n
\n This paper presents a comparative analysis of predictive models applied to air quality estimation. Currently, among other global issues, there is high concern about air pollution; for this reason, there are several air quality indicators, with carbon monoxide (CO), sulfur dioxide (SO2), nitrogen dioxide (NO2) and ozone (O3) being the main ones. When the concentration level of an indicator exceeds an established air quality safety threshold, it is considered harmful to human health; therefore, cities like London have monitoring systems for air pollutants. This study aims to compare the efficiency of stochastic and heuristic predictive models for forecasting ozone (O3) concentration to estimate London's air quality by analyzing an open dataset retrieved from the London Datastore portal. Models based on data analysis have been widely used in air quality forecasting. This paper develops four predictive models (autoregressive integrated moving average - ARIMA, support vector regression - SVR, neural networks (specifically, long short-term memory - LSTM) and Facebook Prophet). Experimentally, ARIMA models and LSTM proved to reach the highest accuracy in predicting the concentration of air pollutants among the considered models. As a result, the comparative analysis of the loss function (root-mean-square error) revealed that ARIMA and LSTM are the most suitable, accomplishing a low error rate of 0.18 and 0.20, respectively.\n
\n\n\n
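As a toy illustration of the autoregressive family the abstract above compares, here is a plain least-squares AR(1) fit and recursive forecast in NumPy. This is our own minimal sketch (function names and the synthetic series are assumptions), not the paper's ARIMA or LSTM setup:

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of x_t = c + phi * x_{t-1}: a minimal stand-in
    for the autoregressive part of an ARIMA(1,0,0) model."""
    x_prev, x_next = series[:-1], series[1:]
    A = np.column_stack([np.ones_like(x_prev), x_prev])
    (c, phi), *_ = np.linalg.lstsq(A, x_next, rcond=None)
    return c, phi

def forecast(series, c, phi, steps):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return np.array(out)

# Synthetic AR(1) series with true phi = 0.8 and small noise
rng = np.random.default_rng(3)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 1.0 + 0.8 * x[t - 1] + rng.normal(0, 0.1)

c, phi = fit_ar1(x)
print(round(phi, 2))  # estimate close to the true 0.8
```

Full ARIMA adds differencing and a moving-average term on top of this recurrence, but the fitting idea is the same.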
\n\n\n
\n \n\n \n \n \n \n \n \n Predicting High School Students' Academic Performance: A Comparative Study of Supervised Machine Learning Techniques.\n \n \n \n \n\n\n \n Sánchez-Pozo, N., N.; Mejía-Ordóñez, J., S.; Chamorro, D., C.; Mayorca-Torres, D.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n In 2021 Machine Learning-Driven Digital Technologies for Educational Innovation Workshop, pages 1-6, 2021. \n \n\n\n\n
\n\n\n\n \n \n \"PredictingWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{\n title = {Predicting High School Students' Academic Performance: A Comparative Study of Supervised Machine Learning Techniques},\n type = {inproceedings},\n year = {2021},\n pages = {1-6},\n websites = {https://ieeexplore.ieee.org/document/9733756},\n id = {85abd003-18c2-3c0e-9106-24938f031cec},\n created = {2022-03-20T02:14:46.894Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-03-20T02:14:46.894Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {9733756},\n source_type = {inproceedings},\n private_publication = {false},\n abstract = {The proliferation of mobile devices and the rapid development of information and communication technologies have revolutionized education. Educational data has evolved to be voluminously massive, broadly various, and produced at high velocity. Therefore, computerized techniques for integrating, processing, and transforming data into valuable knowledge have become necessary to improve internal academic processes. Specifically, educational data mining is an emerging discipline concerned with analyzing the massive amounts of academic data generated and stored by educational institutions. In this sense, machine learning algorithms aid decision-makers who are establishing strategies to improve students' learning experience and institutional effectiveness by revealing hidden patterns in academic performance. Thus, this paper describes our comparative study of machine learning techniques to predict academic performance. We selected the features that best fit the discovery of patterns in the academic performance of high school students, resulting in a balance between accuracy and interpretability. 
We implemented six supervised learning algorithms for pattern recognition: Light Gradient Boosting Machine, Gradient Boosting, AdaBoost, Logistic Regression, Random Forest, and K-nearest Neighbors. The experimental results showed that the Gradient Boosting (Gbc) algorithm achieved the highest accuracy (96.77%), superior to other classification techniques considered.},\n bibtype = {inproceedings},\n author = {Sánchez-Pozo, Nadia N and Mejía-Ordóñez, Juan S and Chamorro, Diana C and Mayorca-Torres, Dagoberto and Peluffo-Ordóñez, Diego H},\n doi = {10.1109/IEEECONF53024.2021.9733756},\n booktitle = {2021 Machine Learning-Driven Digital Technologies for Educational Innovation Workshop}\n}
\n
\n\n\n
\n The proliferation of mobile devices and the rapid development of information and communication technologies have revolutionized education. Educational data has evolved to be voluminously massive, broadly various, and produced at high velocity. Therefore, computerized techniques for integrating, processing, and transforming data into valuable knowledge have become necessary to improve internal academic processes. Specifically, educational data mining is an emerging discipline concerned with analyzing the massive amounts of academic data generated and stored by educational institutions. In this sense, machine learning algorithms aid decision-makers who are establishing strategies to improve students' learning experience and institutional effectiveness by revealing hidden patterns in academic performance. Thus, this paper describes our comparative study of machine learning techniques to predict academic performance. We selected the features that best fit the discovery of patterns in the academic performance of high school students, resulting in a balance between accuracy and interpretability. We implemented six supervised learning algorithms for pattern recognition: Light Gradient Boosting Machine, Gradient Boosting, AdaBoost, Logistic Regression, Random Forest, and K-nearest Neighbors. The experimental results showed that the Gradient Boosting (Gbc) algorithm achieved the highest accuracy (96.77%), superior to other classification techniques considered.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2020\n \n \n (14)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Analytic study on the performance of multi-classification approaches in case-based reasoning systems: Medical data exploration.\n \n \n \n \n\n\n \n Bastidas, D.; Piñeros, C.; Peluffo-Ordóñez, D., H.; Sierra, L., M.; Becerra, M., A.; and Umaquinga-Criollo, A., C.\n\n\n \n\n\n\n RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"AnalyticWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Analytic study on the performance of multi-classification approaches in case-based reasoning systems: Medical data exploration},\n type = {article},\n year = {2020},\n keywords = {Case based reasoning,Classifiers fusion,Dermatology disease,Heart disease},\n websites = {https://search.proquest.com/docview/2451420129/fulltextPDF/F4AF5E590BD14D5EPQ/9},\n id = {70ba1b40-85e5-34e3-8fb6-d47f9f074ed8},\n created = {2022-02-02T02:35:15.057Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:15.057Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Bastidas2020},\n private_publication = {false},\n abstract = {This paper compares the main combinations of classifiers (Sequential, Parallel and Stacking) over two remarkable medical data collections: Cleveland and Dermatology. The principal rationale underlying the use of multiple classifiers is that, together, the methods may be more powerful than they are individually. Such a premise is validated through the identification of the best combination, i.e., the one reaching the lowest error rate within a case-based reasoning (CBR) system. The different combinations are essentially formed by five classifiers greatly differing in their nature and inception: SVM (Support Vector Machines), Parzen, Random Forest, K-NN (k-nearest neighbors) and Naive Bayes. From experimental results, it can be inferred that the combination of techniques is greatly useful. Also, in this work, some key aspects and hints are discussed about the relationship between the nature of the input data and the building of the classification stage (either individual or a mixture of classifiers) within a CBR framework.},\n bibtype = {article},\n author = {Bastidas, David and Piñeros, Camilo and Peluffo-Ordóñez, Diego H. 
and Sierra, Luz Marina and Becerra, Miguel A. and Umaquinga-Criollo, Ana C.},\n journal = {RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao}\n}
\n
\n\n\n
\n This paper compares the main combinations of classifiers (Sequential, Parallel and Stacking) over two remarkable medical data collections: Cleveland and Dermatology. The principal rationale underlying the use of multiple classifiers is that, together, the methods may be more powerful than they are individually. Such a premise is validated through the identification of the best combination, i.e., the one reaching the lowest error rate within a case-based reasoning (CBR) system. The different combinations are essentially formed by five classifiers greatly differing in their nature and inception: SVM (Support Vector Machines), Parzen, Random Forest, K-NN (k-nearest neighbors) and Naive Bayes. From experimental results, it can be inferred that the combination of techniques is greatly useful. Also, in this work, some key aspects and hints are discussed about the relationship between the nature of the input data and the building of the classification stage (either individual or a mixture of classifiers) within a CBR framework.\n
\n\n\n
\n\n\n
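As a reading aid (not taken from the paper): the "parallel" combination scheme discussed above amounts to plurality voting over the base classifiers' outputs. A minimal sketch with illustrative names, assuming each base classifier has already produced its label predictions:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse the label predictions of several classifiers for one sample
    by simple plurality voting (a basic 'parallel' combination scheme)."""
    # most_common(1) returns [(label, count)] for the top label
    return Counter(predictions).most_common(1)[0][0]

def parallel_combine(per_classifier_preds):
    """per_classifier_preds: one prediction list per classifier.
    Returns the fused prediction for every sample."""
    return [majority_vote(sample) for sample in zip(*per_classifier_preds)]
```

For example, fusing three classifiers' predictions over four samples, `parallel_combine([[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]])` yields `[0, 1, 1, 0]`. Sequential and stacking combinations would instead chain classifiers or train a meta-classifier on these outputs.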
\n \n\n \n \n \n \n \n \n Comparative study of data mining techniques to reveal patterns of academic performance in secondary education.\n \n \n \n \n\n\n \n Chamorro-Sangoquiza, D., C.; Vargas-Muñoz, A., M.; Umaquinga-Criollo, A., C.; Becerra, M., A.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"ComparativeWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Comparative study of data mining techniques to reveal patterns of academic performance in secondary education},\n type = {article},\n year = {2020},\n keywords = {Academic performance patterns,Classifiers,Feature selection,Matlab,Multiple classifier},\n websites = {https://search.proquest.com/docview/2452331372/fulltextPDF/64A2741CD0B646EAPQ/1},\n id = {968697a5-cfc9-37df-b2e1-4e03bb14b9af},\n created = {2022-02-02T02:35:15.367Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:15.367Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Chamorro-Sangoquiza2020},\n private_publication = {false},\n abstract = {The data mining techniques allow for unveiling knowledge from large volumes of information, which have recently been explored in information analysis by educational institutions but already with an increasing demand for this sector to support decision-making. In this research, a methodology for comparing data mining techniques is proposed, which is to be applied to the analysis of academic patrons in students of media education. Multiple methods of selecting attributes are applied to reduce the dimensionality and compare three classifiers and multi-classifiers. The experiments are carried out in a dataset of 285 instances and 36 attributes obtained from an educational survey applied to the students of the School of Education of the University of Barcelona 2017-2018. The best results of classification achieved by the multi-splitter Boosted Tree and Bagged Tree with 93.24% accuracy using the data selected using the BestFirst algorithm.},\n bibtype = {article},\n author = {Chamorro-Sangoquiza, Diana C. and Vargas-Muñoz, Andrés M. and Umaquinga-Criollo, Ana C. and Becerra, Miguel A. 
and Peluffo-Ordóñez, Diego H.},\n journal = {RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao}\n}
\n
\n\n\n
\n Data mining techniques allow for unveiling knowledge from large volumes of information. They have recently been explored by educational institutions for information analysis, with increasing demand from this sector to support decision-making. This research proposes a methodology for comparing data mining techniques, applied to the analysis of academic performance patterns in secondary-education students. Multiple attribute selection methods are applied to reduce dimensionality, and three classifiers and multi-classifiers are compared. The experiments are carried out on a dataset of 285 instances and 36 attributes obtained from an educational survey applied to the students of the School of Education of the University of Barcelona in 2017-2018. The best classification results were achieved by the multi-classifiers Boosted Tree and Bagged Tree, with 93.24% accuracy on the data selected using the BestFirst algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Forecasting the Consumer Price Index (CPI) of Ecuador: A comparative study of predictive models.\n \n \n \n \n\n\n \n Riofrío, J.; Chang, O.; Revelo-Fuelagán, E., J.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n International Journal on Advanced Science, Engineering and Information Technology. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"ForecastingWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 9 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Forecasting the Consumer Price Index (CPI) of Ecuador: A comparative study of predictive models},\n type = {article},\n year = {2020},\n keywords = {Consumer Price Index (CPI),Ecuador,Forecasting,Predictive models},\n websites = {http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=10813},\n id = {6d62ead3-4548-37df-8825-e9b6ff7353fc},\n created = {2022-02-02T02:35:15.649Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:15.649Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Riofrio2020},\n private_publication = {false},\n abstract = {The Consumer Price Index (CPI) is one of the most important economic indicators for countries' characterization and is typically considered an official measure of inflation. The CPI considers the monthly price variation of a determined set of goods and services in a specific region, and it is key in the economic and social planning of a given country, hence the great importance of CPI forecasting. In this paper, we outline a comparative study of state-of-the-art predictive models over an Ecuadorian CPI dataset with 174 monthly registers, from 2005 to 2019. This small available dataset makes forecasting a challenging time-series-prediction task. Another difficulty is last years trend variation, which since mid-2016, has changed from an upward average of 3.5 points to a stable trend of ±0.8 points. This paper explores the performance of relevant predictive models when tackling the Ecuadorian CPI forecasting problem accurately for the next 12 months. 
For this, a comparative study considering a variety of predictive models is carried out, including the Neural networks approach using a Sequential Model with Long Short-Term Memory layers machine learning using Support Vector Regression, as well as classical approaches like SARIMA and Exponential Smoothing. We also consider big corporations tools like Facebook Prophet. As a result, the paper presents the best predictive models, and parameters found, along with Ecuadors CPI forecasting for the next 12 months (part of 2020). This information could be used for decisionmaking in several important topics related to social and economic activities.},\n bibtype = {article},\n author = {Riofrío, Juan and Chang, Oscar and Revelo-Fuelagán, E. J. and Peluffo-Ordóñez, Diego H.},\n doi = {10.18517/ijaseit.10.3.10813},\n journal = {International Journal on Advanced Science, Engineering and Information Technology}\n}
\n
\n\n\n
\n The Consumer Price Index (CPI) is one of the most important economic indicators for characterizing countries and is typically considered an official measure of inflation. The CPI tracks the monthly price variation of a determined set of goods and services in a specific region, and it is key to the economic and social planning of a given country; hence the great importance of CPI forecasting. In this paper, we outline a comparative study of state-of-the-art predictive models on an Ecuadorian CPI dataset with 174 monthly registers, from 2005 to 2019. This small available dataset makes forecasting a challenging time-series-prediction task. Another difficulty is the trend variation of recent years: since mid-2016, the trend has changed from an upward average of 3.5 points to a stable trend of ±0.8 points. This paper explores the performance of relevant predictive models in accurately tackling the Ecuadorian CPI forecasting problem for the next 12 months. For this, a comparative study considering a variety of predictive models is carried out, including a neural-network approach using a sequential model with Long Short-Term Memory (LSTM) layers, machine learning using Support Vector Regression, and classical approaches like SARIMA and Exponential Smoothing. We also consider industry tools like Facebook Prophet. As a result, the paper presents the best predictive models and parameters found, along with Ecuador's CPI forecast for the next 12 months (part of 2020). This information could be used for decision-making on several important topics related to social and economic activities.\n
\n\n\n
\n\n\n
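Of the classical baselines this paper compares, exponential smoothing is the simplest to sketch. Below is a minimal non-seasonal version in plain Python; the paper's actual models (SARIMA, LSTM, SVR, Prophet) are far richer, and this only illustrates the core idea of a smoothed level carried forward as a flat forecast:

```python
def ses_forecast(series, alpha=0.3, horizon=12):
    """Simple exponential smoothing.
    The level is updated as level_t = alpha*y_t + (1-alpha)*level_{t-1};
    the h-step-ahead SES forecast is flat at the last smoothed level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon
```

For instance, `ses_forecast([0.0, 10.0], alpha=0.5, horizon=2)` returns `[5.0, 5.0]`: after two observations the smoothed level sits halfway between them. Seasonal variants (Holt-Winters, SARIMA) add trend and seasonal components on top of this recursion.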
\n \n\n \n \n \n \n \n \n Multi-expert Methods Evaluation on Financial and Economic Data: Introducing Bag of Experts.\n \n \n \n \n\n\n \n Umaquinga-Criollo, A., C.; Tamayo-Quintero, J., D.; Moreno-García, M., N.; Riascos, J., A.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n Lecture Notes in Computer Science. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n websites = {https://link.springer.com/chapter/10.1007/978-3-030-61705-9_36},\n id = {24d80230-957b-3171-bf8b-8cb5e321d171},\n created = {2022-02-02T02:35:15.943Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:15.943Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Umaquinga-Criollo2020},\n private_publication = {false},\n abstract = {The use of machine learning into economics scenarios results appealing since it allows for automatically testing economic models and predict consumer/client behavior to support decision-making processes. The finance market typically uses a set of expert labelers or Bureau credit scores given by governmental or private agencies such as Experian, Equifax, and Creditinfo, among others. This work focuses on introducing a so-named Bag of Expert (BoE): a novel approach for creating multi-expert Learning (MEL) frameworks aimed to emulate real experts labeling (human-given labels) using neural networks. The MEL systems “learn” to perform decision-making tasks by considering a uniform number of labels per sample or individuals along with respective descriptive variables. The BoE is created similarly to Generative Adversarial Network (GANs), but rather than using noise or perturbation by a generator, we trained a feed-forward neural network to randomize sampling data, and either add or decrease hidden neurons. Additionally, this paper aims to investigate the performance on economics-related datasets of several state-of-the-art MEL methods, such as GPC, GPC-PLAT, KAAR, MA-LFC, MA-DGRL, and MA-MAE. 
To do so, we develop an experimental framework composed of four tests: the first one using novice experts; the second with proficient experts; the third is a mix of novices, intermediate and proficient experts, and the last one uses crowd-sourcing. Our BoE method presents promising results and can be suitable as an alternative to properly assess the reliability of both MEL methods and conventional labeler generators (i.e., virtual expert labelers).},\n bibtype = {inbook},\n author = {Umaquinga-Criollo, A. C. and Tamayo-Quintero, J. D. and Moreno-García, M. N. and Riascos, J. A. and Peluffo-Ordóñez, D. H.},\n doi = {10.1007/978-3-030-61705-9_36},\n chapter = {Multi-expert Methods Evaluation on Financial and Economic Data: Introducing Bag of Experts},\n title = {Lecture Notes in Computer Science}\n}
\n
\n\n\n
\n The use of machine learning in economics scenarios is appealing, since it allows for automatically testing economic models and predicting consumer/client behavior to support decision-making processes. The finance market typically uses a set of expert labelers or bureau credit scores given by governmental or private agencies such as Experian, Equifax, and Creditinfo, among others. This work focuses on introducing the so-named Bag of Experts (BoE): a novel approach for creating multi-expert learning (MEL) frameworks aimed at emulating real experts' labeling (human-given labels) using neural networks. MEL systems “learn” to perform decision-making tasks by considering a uniform number of labels per sample or individual, along with the respective descriptive variables. The BoE is created similarly to Generative Adversarial Networks (GANs), but rather than using noise or perturbation from a generator, we trained a feed-forward neural network to randomize sampling data and either add or remove hidden neurons. Additionally, this paper investigates the performance on economics-related datasets of several state-of-the-art MEL methods, such as GPC, GPC-PLAT, KAAR, MA-LFC, MA-DGRL, and MA-MAE. To do so, we develop an experimental framework composed of four tests: the first using novice experts; the second, proficient experts; the third, a mix of novice, intermediate and proficient experts; and the last, crowd-sourcing. Our BoE method presents promising results and can be suitable as an alternative for properly assessing the reliability of both MEL methods and conventional labeler generators (i.e., virtual expert labelers).\n
\n\n\n
\n\n\n
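The multi-expert setting above — several labelers, one label each per sample — can be made concrete with a small consensus-and-agreement sketch (illustrative only; the MEL methods named in the paper are considerably more sophisticated than majority voting):

```python
from collections import Counter

def expert_consensus(label_matrix):
    """label_matrix[i][j] = label given by expert j to sample i.
    Returns (consensus_labels, per_expert_agreement); the agreement is the
    fraction of samples on which each expert matches the consensus -- a crude
    proxy for expert reliability in a multi-expert learning setting."""
    consensus = [Counter(row).most_common(1)[0][0] for row in label_matrix]
    n_experts = len(label_matrix[0])
    agreement = []
    for j in range(n_experts):
        hits = sum(1 for i, row in enumerate(label_matrix) if row[j] == consensus[i])
        agreement.append(hits / len(label_matrix))
    return consensus, agreement
```

On `[[1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 0]]` (four samples, three experts), the consensus is `[1, 0, 1, 0]` and the agreements are `[1.0, 0.75, 0.75]`, singling out the first labeler as the most consistent with the group.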
\n \n\n \n \n \n \n \n \n Data fusion and information quality for biometric identification from multimodal signals.\n \n \n \n \n\n\n \n Becerra, M., A.; Lasso-Arciniegas, L.; Viveros, A.; Serna-Guarín, L.; Peluffo-Ordóñez, D.; and Tobón, C.\n\n\n \n\n\n\n RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"DataWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Data fusion and information quality for biometric identification from multimodal signals},\n type = {article},\n year = {2020},\n keywords = {Biometry,Data fusion,Information quality,Signal processing},\n websites = {https://search.proquest.com/docview/2385757504?pq-origsite=gscholar&fromopenview=true},\n id = {998c7d41-4de9-3365-9996-913564302c43},\n created = {2022-02-02T02:35:16.222Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:16.222Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Becerra2020a},\n private_publication = {false},\n abstract = {Biometric identification is carried out by processing physiological traits and signals. Biometrics systems are an open field of research and development, since they are permanently susceptible to attacks demanding permanent development to maintain their confidence. The main objective of this study is to analyze the effects of the quality of information on biometric identification and consider it in access control systems. This paper proposes a data fusion model for the development of biometrics systems considering the assessment of information quality. This proposal is based on the JDL (Joint Directors of Laboratories) data fusion model, which includes raw data processing, pattern detection, situation assessment and risk or impact. The results demonstrated the functionality of the proposed model and its potential compared to other traditional identification models.},\n bibtype = {article},\n author = {Becerra, Miguel A. and Lasso-Arciniegas, Laura and Viveros, Andrés and Serna-Guarín, Leonardo and Peluffo-Ordóñez, Diego and Tobón, Catalina},\n journal = {RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao}\n}
\n
\n\n\n
\n Biometric identification is carried out by processing physiological traits and signals. Biometric systems are an open field of research and development, since they are permanently susceptible to attacks, demanding continuous development to maintain confidence in them. The main objective of this study is to analyze the effects of information quality on biometric identification and to consider it in access control systems. This paper proposes a data fusion model for the development of biometric systems that considers the assessment of information quality. The proposal is based on the JDL (Joint Directors of Laboratories) data fusion model, which includes raw data processing, pattern detection, situation assessment, and risk or impact assessment. The results demonstrated the functionality of the proposed model and its potential compared to other traditional identification models.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Clustering of Reading Ability Performance Variables in the English Language Based on TBL Methodology and Behavior in the Left Hemisphere of the Brain.\n \n \n \n \n\n\n \n Patiño-Alarcón, D., R.; Patiño-Alarcón, F., A.; Lorente-Leyva, L., L.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n Communications in Computer and Information Science. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"CommunicationsWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n websites = {https://link.springer.com/chapter/10.1007/978-3-030-62833-8_7},\n id = {6cafe76b-d0ef-3f3a-a866-59e16b7e0f21},\n created = {2022-02-02T02:35:16.487Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:16.487Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Patino-Alarcon2020},\n private_publication = {false},\n abstract = {This research presents an application of the clustering based on Thinking Based - Learning methodology (TBL), which offers guidelines to promote students’ reflective thinking. Within this methodology, the Intelligence Execution Theory (IET) tool will be used to encourage this kind of thinking in the classroom. Having in mind that, in any educational process, methodologies and pedagogical tools have a pivotal role as they are one of the bases for optimizing cognitive intelligence. In this case, it was given a priority to the potential development of a specific linguistic skill. This study presented a mixed methodology with an exploratory and descriptive scope. The main objective of this research was the clustering of the variables of functioning of the reading ability in the English language based on the TBL methodology and its behavior in the left hemisphere of the brain, specifically to analyze the improvement of the reading ability in the English language of the participants of this case study. With the expectation of generating sustainability of adequate levels of performance, instruction and learning of the English language of students at all levels.},\n bibtype = {inbook},\n author = {Patiño-Alarcón, Delio R. and Patiño-Alarcón, Fernando A. and Lorente-Leyva, Leandro L. 
and Peluffo-Ordóñez, Diego H.},\n doi = {10.1007/978-3-030-62833-8_7},\n chapter = {Clustering of Reading Ability Performance Variables in the English Language Based on TBL Methodology and Behavior in the Left Hemisphere of the Brain},\n title = {Communications in Computer and Information Science}\n}
\n
\n\n\n
\n This research presents an application of clustering based on the Thinking-Based Learning (TBL) methodology, which offers guidelines to promote students’ reflective thinking. Within this methodology, the Intelligence Execution Theory (IET) tool is used to encourage this kind of thinking in the classroom, bearing in mind that, in any educational process, methodologies and pedagogical tools play a pivotal role as one of the bases for optimizing cognitive intelligence. In this case, priority was given to the potential development of a specific linguistic skill. This study followed a mixed methodology with an exploratory and descriptive scope. Its main objective was the clustering of the variables governing reading-ability performance in the English language based on the TBL methodology and its behavior in the left hemisphere of the brain, specifically to analyze the improvement of the English reading ability of the participants of this case study, with the expectation of sustaining adequate levels of performance, instruction and learning of the English language for students at all levels.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Structural capital model for universities based on JDL data fusion model and information quality.\n \n \n \n \n\n\n \n Becerra, M., A.; Londoño-Montoya, E.; Serna-Guarín, L.; Peluffo-Ordóñez, D.; Tobón, C.; and Giraldo, L.\n\n\n \n\n\n\n RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"StructuralWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Structural capital model for universities based on JDL data fusion model and information quality},\n type = {article},\n year = {2020},\n keywords = {Data fusion,Information quality,Intelectual capital,JDL model,Structural capital},\n websites = {https://search.proquest.com/docview/2394535766},\n id = {f9be4d18-f1ef-3861-a373-67cc37b740e8},\n created = {2022-02-02T02:35:16.749Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:16.749Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Becerra2020b},\n private_publication = {false},\n abstract = {Intellectual capital is one of the most critical intangible active assets for universities, and there are multiple models to value it through the human, structural, and relational components. However, this is an open field of research that still demands new solutions to assess it effectively from each of its components. For the assessment of the structural component in higher education institutions, this study proposes a model that combines the assessment of the quality of information and the JDL data fusion model (joint directors of laboratories), which has been used in applications military. The proposed model is original in the methods used and their association, distributed in six levels that execute the pre-processing of the information, valuation of objects, valuation of the situation and the risk, and the refinement of the process. Besides, it evaluates the quality of the information, its traceability, and context to refine the process and obtain a more objective assessment taking into account the imperfection of the information for decision-making in the management of impact and risk. 
The model not only allows the assessment of structural capital, but also supports decision-making based on the quality of information and its impact. The functionality of the model is described by levels.},\n bibtype = {article},\n author = {Becerra, Miguel A. and Londoño-Montoya, Erika and Serna-Guarín, Leonardo and Peluffo-Ordóñez, Diego and Tobón, Catalina and Giraldo, Lillyana},\n journal = {RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao}\n}
\n
\n\n\n
\n Intellectual capital is one of the most critical intangible assets for universities, and there are multiple models to value it through its human, structural, and relational components. However, this is an open field of research that still demands new solutions to assess it effectively from each of its components. For the assessment of the structural component in higher education institutions, this study proposes a model that combines the assessment of information quality with the JDL (Joint Directors of Laboratories) data fusion model, which has been used in military applications. The proposed model is original in the methods used and their association, distributed across six levels that execute the pre-processing of the information, the valuation of objects, the valuation of the situation and the risk, and the refinement of the process. In addition, it evaluates the quality of the information, its traceability, and its context to refine the process and obtain a more objective assessment, taking into account the imperfection of the information for decision-making in the management of impact and risk. The model not only allows the assessment of structural capital but also supports decision-making based on the quality of information and its impact. The functionality of the model is described level by level.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Data-Driven Approach for Automatic Classification of Extreme Precipitation Events: Preliminary Results.\n \n \n \n \n\n\n \n González-Vergara, J.; Escobar-González, D.; Chaglla-Aguagallo, D.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n In Communications in Computer and Information Science, 2020. \n \n\n\n\n
\n\n\n\n \n \n \"AWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 11 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{\n title = {A Data-Driven Approach for Automatic Classification of Extreme Precipitation Events: Preliminary Results},\n type = {inproceedings},\n year = {2020},\n keywords = {Data driven,Extreme precipitation,Feature selection,Forecasting,PCA,Relief,SVM},\n websites = {https://link.springer.com/chapter/10.1007/978-3-030-61702-8_14},\n id = {dfa77441-7e43-3f80-8ab5-f922db4aac87},\n created = {2022-02-02T02:35:17.017Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:17.017Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Gonzalez-Vergara2020},\n private_publication = {false},\n abstract = {Even though there exists no universal definition, in the South America Andean Region, extreme precipitation events can be referred to the period of time in which standard thresholds of precipitation are abruptly exceeded. Therefore, their timely forecasting is of great interest for decision makers from many fields, such as: urban planning entities, water researchers and in general, climate related institutions. In this paper, a data-driven study is performed to classify and anticipate extreme precipitation events through hydroclimate features. Since the analysis of precipitation-events-related time series involves complex patterns, input data requires undergoing both pre-processing steps and feature selection methods, in order to achieve a high performance at the data classification stage itself. In this sense, in this study, both individual Principal Component Analysis (PCA) and Regresional Relief (RR) as well as a cascade approach mixing both are considered. Subsequently, the classification is performed by a Support-Vector-Machine-based classifier (SVM). 
Results reflect the suitability of an approach involving feature selection and classification for precipitation events detection purposes. A remarkable result is the fact that a reduced dataset obtained by applying RR mixed with PCA discriminates better than RR alone but does not significantly hence the SVM rate at two- and three-class problems as done by PCA itself.},\n bibtype = {inproceedings},\n author = {González-Vergara, J. and Escobar-González, D. and Chaglla-Aguagallo, D. and Peluffo-Ordóñez, D. H.},\n doi = {10.1007/978-3-030-61702-8_14},\n booktitle = {Communications in Computer and Information Science}\n}
\n
\n\n\n
\n Even though there exists no universal definition, in the South American Andean Region extreme precipitation events can be understood as periods of time in which standard precipitation thresholds are abruptly exceeded. Their timely forecasting is therefore of great interest to decision makers in many fields, such as urban planning entities, water researchers and, in general, climate-related institutions. In this paper, a data-driven study is performed to classify and anticipate extreme precipitation events through hydroclimate features. Since the analysis of precipitation-related time series involves complex patterns, the input data must undergo both pre-processing steps and feature selection methods in order to achieve high performance at the data classification stage itself. In this sense, this study considers Principal Component Analysis (PCA) and Regressional Relief (RR) individually, as well as a cascade approach mixing both. Subsequently, the classification is performed by a Support-Vector-Machine-based classifier (SVM). Results reflect the suitability of an approach involving feature selection and classification for precipitation-event detection purposes. A remarkable result is that a reduced dataset obtained by applying RR mixed with PCA discriminates better than RR alone, but does not enhance the SVM classification rate on the two- and three-class problems as significantly as PCA by itself.\n
\n\n\n
\n\n\n
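The cascade described above — dimensionality reduction followed by a classifier — can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's pipeline: PCA is computed from the covariance eigendecomposition, a nearest-centroid rule stands in for the SVM stage, and the Relief step is omitted:

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project X (samples x features) onto its top principal components."""
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the covariance matrix; eigh returns ascending order
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top

def nearest_centroid_predict(X_train, y_train, X_test):
    """Toy stand-in for an SVM stage: classify by the nearest class mean."""
    classes = sorted(set(y_train))
    centroids = {c: X_train[np.array(y_train) == c].mean(axis=0) for c in classes}
    return [min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
            for x in X_test]
```

With two well-separated clusters in the reduced space, the centroid rule recovers the class labels; in the paper, the reduced features would instead feed an SVM trained on labeled precipitation events.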
\n \n\n \n \n \n \n \n \n Inverse Data Visualization Framework (IDVF): Towards a Prior-Knowledge-Driven Data Visualization.\n \n \n \n \n\n\n \n Vélez-Falconí, M.; González-Vergara, J.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n In Communications in Computer and Information Science, 2020. \n \n\n\n\n
\n\n\n\n \n \n \"InverseWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{\n title = {Inverse Data Visualization Framework (IDVF): Towards a Prior-Knowledge-Driven Data Visualization},\n type = {inproceedings},\n year = {2020},\n keywords = {Data visualization,Dimensionality reduction,Interaction model,Kernel functions},\n websites = {https://link.springer.com/chapter/10.1007/978-3-030-61702-8_19},\n id = {027aaa3b-52e9-3e29-a60c-95ebce526a92},\n created = {2022-02-02T02:35:17.274Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:17.274Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Velez-Falconi2020},\n private_publication = {false},\n abstract = {Broadly, the area of dimensionality reduction (DR) is aimed at providing ways to harness high dimensional (HD) information through the generation of lower dimensional (LD) representations, by following a certain data-structure-preservation criterion. In literature there have been reported dozens of DR techniques, which are commonly used as a pre-processing stage withing exploratory data analyses for either machine learning or information visualization (IV) purposes. Nonetheless, the selection of a proper method is a nontrivial and -very often- toilsome task. In this sense, a readily and natural way to incorporate an expert’s criterion into the analysis process, while making this task more tractable is the use of interactive IV approaches. Regarding the incorporation of experts’ prior knowledge there still exists a range of open issues. In this work, we introduce a here-named Inverse Data Visualization Framework (IDVF), which is an initial approach to make the input prior knowledge directly interpretable. Our framework is based on 2D-scatter-plots visuals and spectral kernel-driven DR techniques. 
To capture either the user’s knowledge or requirements, users are requested to provide changes or movements of data points in such a manner that resulting points are located where best convenient according to the user’s criterion. Next, following a Kernel Principal Component Analysis approach and a mixture of kernel matrices, our framework accordingly estimates an approximate LD space. Then, the rationale behind the proposed IDVF is to adjust as accurate as possible the resulting LD space to the representation fulfilling users’ knowledge and requirements. Results are greatly promising and open the possibility to novel DR-based visualizations approaches.},\n bibtype = {inproceedings},\n author = {Vélez-Falconí, M. and González-Vergara, J. and Peluffo-Ordóñez, D. H.},\n doi = {10.1007/978-3-030-61702-8_19},\n booktitle = {Communications in Computer and Information Science}\n}
\n
\n\n\n
\n Broadly, the area of dimensionality reduction (DR) is aimed at providing ways to harness high-dimensional (HD) information through the generation of lower-dimensional (LD) representations, following a certain data-structure-preservation criterion. Dozens of DR techniques have been reported in the literature, and they are commonly used as a pre-processing stage within exploratory data analyses for either machine learning or information visualization (IV) purposes. Nonetheless, the selection of a proper method is a nontrivial and, very often, toilsome task. In this sense, a ready and natural way to incorporate an expert’s criterion into the analysis process, while making this task more tractable, is the use of interactive IV approaches. Regarding the incorporation of experts’ prior knowledge, there still exists a range of open issues. In this work, we introduce the here-named Inverse Data Visualization Framework (IDVF), which is an initial approach to make the input prior knowledge directly interpretable. Our framework is based on 2D scatter-plot visuals and spectral kernel-driven DR techniques. To capture the user’s knowledge or requirements, users are requested to move data points so that the resulting points are located where most convenient according to their criterion. Next, following a kernel principal component analysis approach and a mixture of kernel matrices, our framework estimates an approximate LD space. The rationale behind the proposed IDVF is thus to adjust the resulting LD space as accurately as possible to the representation fulfilling the users’ knowledge and requirements. Results are highly promising and open the possibility of novel DR-based visualization approaches.\n
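The core mechanism the abstract describes — a mixture of kernel matrices fed into kernel PCA to produce the low-dimensional space — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of an RBF plus a linear kernel, the mixing weights, and all parameter values are assumptions for demonstration only.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise RBF (Gaussian) kernel matrix
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mixture_kernel(X, w_rbf=0.5, w_lin=0.5):
    # Convex combination of an RBF and a linear kernel matrix; the
    # weights stand in for the user-driven mixing coefficients
    return w_rbf * rbf_kernel(X) + w_lin * (X @ X.T)

def kernel_pca(K, d=2):
    # Double-center the kernel matrix, then embed with the top-d eigenpairs
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)
    top = np.argsort(vals)[::-1][:d]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
Y = kernel_pca(mixture_kernel(X), d=2)  # 30 points embedded in 2-D
```

In the framework described above, the mixing weights would be estimated from the user's point movements rather than fixed in advance as they are here.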
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Introducing the Concept of Interaction Model for Interactive Dimensionality Reduction and Data Visualization.\n \n \n \n \n\n\n \n Ortega-Bustamante, M., C.; Hasperué, W.; Peluffo-Ordóñez, D., H.; Paéz-Jaime, M.; Marrufo-Rodríguez, I.; Rosero-Montalvo, P.; Umaquinga-Criollo, A., C.; and Vélez-Falconi, M.\n\n\n \n\n\n\n In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2020. \n \n\n\n\n
\n\n\n\n \n \n \"IntroducingWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{\n title = {Introducing the Concept of Interaction Model for Interactive Dimensionality Reduction and Data Visualization},\n type = {inproceedings},\n year = {2020},\n keywords = {Data visualization,Dimensionality reduction,Interaction model,Kernel functions},\n websites = {https://link.springer.com/chapter/10.1007/978-3-030-58802-1_14},\n id = {bb0454ba-bc00-3934-ba49-ddce63ca578d},\n created = {2022-02-02T02:35:17.535Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:17.535Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Ortega-Bustamante2020},\n private_publication = {false},\n abstract = {This letter formally introduces the concept of interaction model (IM), which has been used either directly or tangentially in previous works but never defined. Broadly speaking, an IM consists of the use of a mixture of dimensionality reduction (DR) techniques within an interactive data visualization framework. The rationale of creating an IM is the need for simultaneously harnessing the benefit of several DR approaches to reach a data representation being intelligible and/or fitted to any user’s criterion. As a remarkable advantage, an IM naturally provides a generalized framework for designing both interactive DR approaches as well as readily-to-use data visualization interfaces. In addition to a comprehensive overview on basics of data representation and dimensionality reduction, the main contribution of this manuscript is the elegant definition of the concept of IM in mathematical terms.},\n bibtype = {inproceedings},\n author = {Ortega-Bustamante, M. C. and Hasperué, W. and Peluffo-Ordóñez, D. H. and Paéz-Jaime, M. and Marrufo-Rodríguez, I. and Rosero-Montalvo, P. and Umaquinga-Criollo, A. C. 
and Vélez-Falconi, M.},\n doi = {10.1007/978-3-030-58802-1_14},\n booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n This letter formally introduces the concept of interaction model (IM), which has been used either directly or tangentially in previous works but never defined. Broadly speaking, an IM consists of the use of a mixture of dimensionality reduction (DR) techniques within an interactive data visualization framework. The rationale for creating an IM is the need to simultaneously harness the benefits of several DR approaches to reach a data representation that is intelligible and/or fitted to the user’s criterion. As a remarkable advantage, an IM naturally provides a generalized framework for designing both interactive DR approaches and ready-to-use data visualization interfaces. In addition to a comprehensive overview of the basics of data representation and dimensionality reduction, the main contribution of this manuscript is the definition of the concept of IM in mathematical terms.\n
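One simple way an interaction model can mix several DR techniques is a convex combination of their embeddings, with the weights playing the role of the user's interactive criterion. The sketch below assumes PCA and classical MDS as the two mixed methods and a fixed weight vector; both choices are illustrative, not taken from the paper.

```python
import numpy as np

def pca_embed(X, d=2):
    # Standard PCA embedding via SVD of the centered data
    Xc = X - X.mean(axis=0)
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :d] * S[:d]

def cmds_embed(X, d=2):
    # Classical MDS from the double-centered squared-distance matrix
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ D2 @ H
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:d]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

def interaction_model(embeddings, weights):
    # Convex combination of scale-normalized embeddings; the weights act
    # as the user's interactive mixing criterion
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    normed = [E / np.abs(E).max() for E in embeddings]
    return sum(wi * Ei for wi, Ei in zip(w, normed))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))
E = interaction_model([pca_embed(X), cmds_embed(X)], weights=[0.7, 0.3])
```

In an interactive interface, the weight vector would be bound to a widget (e.g. a slider or a point inside a simplex) so the user can explore intermediate representations.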
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interactive Visualization Interfaces for Big Data Analysis Using Combination of Dimensionality Reduction Methods: A Brief Review.\n \n \n \n \n\n\n \n Umaquinga-Criollo, A., C.; Peluffo-Ordóñez, D., H.; Rosero-Montalvo, P., D.; Godoy-Trujillo, P., E.; and Benítez-Pereira, H.\n\n\n \n\n\n\n In Advances in Intelligent Systems and Computing, 2020. \n \n\n\n\n
\n\n\n\n \n \n \"InteractiveWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{\n title = {Interactive Visualization Interfaces for Big Data Analysis Using Combination of Dimensionality Reduction Methods: A Brief Review},\n type = {inproceedings},\n year = {2020},\n keywords = {Big data,Business intelligence,Data mining,Dimensionality reduction,Interactive interface},\n websites = {https://link.springer.com/chapter/10.1007/978-3-030-37221-7_17},\n id = {527928c8-6728-3b44-84b6-def0222b68fc},\n created = {2022-02-02T02:35:17.811Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:17.811Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Umaquinga-Criollo2020a},\n private_publication = {false},\n abstract = {The Big Data analysis allows to generate knowledge based on mathematical models that surpass human capabilities, and therefore it is necessary to have robust computer systems. In this connection, the dimensionality reduction (DR) allows to perform approximations to make data perceptible in a simple and compact way while also the computational cost is reduced. Additionally, interactive interfaces enable the user to work with algorithms involving complex mathematical and statistical processes typically aimed at providing weighting factors to each RD algorithm to find the best way to represent data at a low dimension. In this study, a bibliographic re-view of the different models of interactive interfaces for the analysis of Big Data using RD is presented, by considering different, existing proposals and approaches on how to display the information. Particularly, those approaches based on mental processes and uses of color along with an intuitive handling are of special interest.},\n bibtype = {inproceedings},\n author = {Umaquinga-Criollo, Ana C. and Peluffo-Ordóñez, Diego H. and Rosero-Montalvo, Paúl D. and Godoy-Trujillo, Pamela E. 
and Benítez-Pereira, Henry},\n doi = {10.1007/978-3-030-37221-7_17},\n booktitle = {Advances in Intelligent Systems and Computing}\n}
\n
\n\n\n
\n Big Data analysis makes it possible to generate knowledge from mathematical models that surpass human capabilities, and therefore requires robust computer systems. In this connection, dimensionality reduction (DR) provides approximations that make data perceptible in a simple and compact way while also reducing the computational cost. Additionally, interactive interfaces enable the user to work with algorithms involving complex mathematical and statistical processes, typically aimed at providing weighting factors to each DR algorithm to find the best way to represent data in a low dimension. In this study, a bibliographic review of the different models of interactive interfaces for the analysis of Big Data using DR is presented, considering different existing proposals and approaches to displaying the information. Particularly, those approaches based on mental processes and the use of color, along with intuitive handling, are of special interest.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Comparison of Machine Learning and Classical Demand Forecasting Methods: A Case Study of Ecuadorian Textile Industry.\n \n \n \n \n\n\n \n Lorente-Leyva, L., L.; Alemany, M., M., E.; Peluffo-Ordóñez, D., H.; and Herrera-Granda, I., D.\n\n\n \n\n\n\n pages 131-142. Lecture Notes in Computer Science, 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Website\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n pages = {131-142},\n websites = {http://link.springer.com/10.1007/978-3-030-64580-9_11},\n publisher = {Lecture Notes in Computer Science},\n id = {e6641f60-6d27-3221-bb27-b29352775328},\n created = {2022-02-02T02:35:18.088Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:18.088Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n private_publication = {false},\n bibtype = {inbook},\n author = {Lorente-Leyva, Leandro L. and Alemany, M. M. E. and Peluffo-Ordóñez, Diego H. and Herrera-Granda, Israel D.},\n doi = {10.1007/978-3-030-64580-9_11},\n chapter = {A Comparison of Machine Learning and Classical Demand Forecasting Methods: A Case Study of Ecuadorian Textile Industry}\n}
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comparison of kernel functions for the prediction of the photovoltaic energy supply [Comparación de funciones kernel para la predicción de la oferta energética fotovoltaica].\n \n \n \n \n\n\n \n Mora-Paz, H.; Riascos, J., A.; Salazar-Castro, J., A.; Mora, G.; Pantoja, A.; Revelo-Fuelagán, J.; Mancera-Valetts, L.; and Peluffo-Ordoñez, D.\n\n\n \n\n\n\n RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao, 2020(E38): 310-324. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"ComparisonWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{\n title = {Comparison of kernel functions for the prediction of the photovoltaic energy supply [Comparación de funciones kernel para la predicción de la oferta energética fotovoltaica]},\n type = {article},\n year = {2020},\n pages = {310-324},\n volume = {2020},\n websites = {https://search.proquest.com/docview/2474915437/fulltextPDF/D88B81E498D44759PQ/1},\n publisher = {Associacao Iberica de Sistemas e Tecnologias de Informacao},\n id = {18dfd857-9a18-3639-890e-fee707d56bd5},\n created = {2022-02-02T02:35:18.446Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:18.446Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Mora-Paz2020310},\n source_type = {article},\n private_publication = {false},\n abstract = {Recently, at the fields of climate change and energy demand have turned their attention to the study and discovery of patterns in renewable energies, such as the photovoltaic-type. Such patterns can be obtained by extrapolating radiation based on the electromagnetic spectrum bands captured by NASA’s Landsat and MODIS satellites, where artificial neural network (ANN) and support vector machine (SVM) algorithms have produced the best models. Nonetheless, the acquisition of training data from those sources is expensive, as well as it lacks the exploration of kernel functions for this application. Therefore, in this study, adjustments were made in the above aspects, mainly through: coupling of new kernels to ANN and SVM in the scikit-learn library, contributing to the reuse and robustness of these algorithms; and implementing an experimental framework to tune hyper-parameters, thus generating results comparable to those reported in the state of the art. © 2020, Associacao Iberica de Sistemas e Tecnologias de Informacao. 
All rights reserved.},\n bibtype = {article},\n author = {Mora-Paz, H and Riascos, J A and Salazar-Castro, J A and Mora, G and Pantoja, A and Revelo-Fuelagán, J and Mancera-Valetts, L and Peluffo-Ordoñez, D},\n journal = {RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao},\n number = {E38}\n}
\n
\n\n\n
\n Recently, the fields of climate change and energy demand have turned their attention to the study and discovery of patterns in renewable energies, such as the photovoltaic type. Such patterns can be obtained by extrapolating radiation from the electromagnetic spectrum bands captured by NASA’s Landsat and MODIS satellites, where artificial neural network (ANN) and support vector machine (SVM) algorithms have produced the best models. Nonetheless, the acquisition of training data from those sources is expensive, and the exploration of kernel functions for this application remains lacking. Therefore, in this study, adjustments were made in the above aspects, mainly through coupling new kernels to ANN and SVM in the scikit-learn library, contributing to the reuse and robustness of these algorithms, and implementing an experimental framework to tune hyper-parameters, thus generating results comparable to those reported in the state of the art. © 2020, Associacao Iberica de Sistemas e Tecnologias de Informacao. All rights reserved.\n
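Coupling a non-built-in kernel to an SVM in scikit-learn, as the abstract mentions, amounts to passing a callable that returns the Gram matrix. The sketch below uses a Cauchy kernel on synthetic data; the kernel choice, the data, and all hyper-parameters are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVR

def cauchy_kernel(A, B):
    # A kernel not built into scikit-learn, supplied as a callable:
    # k(x, y) = c / (||x - y||^2 + c)
    c = 1.0
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return c / (d2 + c)

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(80, 3))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=80)

# scikit-learn calls the kernel with (X, X) at fit time and with
# (X_new, support_vectors) at predict time
model = SVR(kernel=cauchy_kernel).fit(X, y)
pred = model.predict(X)
```

The same callable-kernel mechanism works for `SVC`; scikit-learn's MLP models, by contrast, are not kernel machines, so "coupling kernels to ANN" in the paper presumably refers to kernel-based preprocessing or custom code.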
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Forecasting Model to Predict the Demand of Roses in an Ecuadorian Small Business Under Uncertain Scenarios.\n \n \n \n \n\n\n \n Herrera-Granda, I., D.; Lorente-Leyva, L., L.; Peluffo-Ordóñez, D., H.; and Alemany, M., M., E.\n\n\n \n\n\n\n In LOD 2020, pages 245-258, 2020. Lecture Notes in Computer Science\n \n\n\n\n
\n\n\n\n \n \n \"AWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{\n title = {A Forecasting Model to Predict the Demand of Roses in an Ecuadorian Small Business Under Uncertain Scenarios},\n type = {inproceedings},\n year = {2020},\n pages = {245-258},\n websites = {http://link.springer.com/10.1007/978-3-030-64580-9_21},\n publisher = {Lecture Notes in Computer Science},\n id = {6a81cd19-a7d4-3df6-a599-ff8e15727022},\n created = {2022-02-02T02:35:18.709Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:18.709Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n private_publication = {false},\n abstract = {Ecuador is worldwide considered as one of the main natural flower producers and exporters –being roses the most salient ones. Such a fact has naturally led the emergence of small and medium sized companies devoted to the production of quality roses in the Ecuadorian highlands, which intrinsically entails resource usage optimization. One of the first steps towards optimizing the use of resources is to forecast demand, since it enables a fair perspective of the future, in such a manner that the in-advance raw materials supply can be previewed against eventualities, resources usage can be properly planned, as well as the misuse can be avoided. Within this approach, the problem of forecasting the supply of roses was solved into two phases: the first phase consists of the macro-forecast of the total amount to be exported by the Ecuadorian flower sector by the year 2020, using multi-layer neural networks. In the second phase, the monthly demand for the main rose varieties offered by the study company was micro-forecasted by testing seven models. In addition, a Bayesian network model is designed, which takes into consideration macroeconomic aspects, the level of employability in Ecuador and weather-related aspects. 
This Bayesian network provided satisfactory results without the need for a large amount of historical data and at a low-computational cost.},\n bibtype = {inproceedings},\n author = {Herrera-Granda, Israel D. and Lorente-Leyva, Leandro L. and Peluffo-Ordóñez, Diego H. and Alemany, M. M. E.},\n doi = {10.1007/978-3-030-64580-9_21},\n booktitle = {LOD 2020}\n}
\n
\n\n\n
\n Ecuador is considered worldwide as one of the main natural flower producers and exporters, with roses being the most salient product. Such a fact has naturally led to the emergence of small and medium-sized companies devoted to the production of quality roses in the Ecuadorian highlands, which intrinsically entails resource-usage optimization. One of the first steps towards optimizing the use of resources is to forecast demand, since it provides a fair perspective of the future: the raw-material supply can be arranged in advance against eventualities, resource usage can be properly planned, and misuse can be avoided. Within this approach, the problem of forecasting the supply of roses was solved in two phases. The first phase consists of the macro-forecast of the total amount to be exported by the Ecuadorian flower sector by the year 2020, using multi-layer neural networks. In the second phase, the monthly demand for the main rose varieties offered by the study company was micro-forecasted by testing seven models. In addition, a Bayesian network model is designed, which takes into consideration macroeconomic aspects, the level of employability in Ecuador, and weather-related aspects. This Bayesian network provided satisfactory results without the need for a large amount of historical data and at a low computational cost.\n
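The macro-forecasting step with a multi-layer neural network can be sketched as a lagged-regression problem: the network predicts the next value from a window of previous values. Everything below — the synthetic monthly series, the 12-month lag window, and the network size — is a hypothetical setup, not the paper's data or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical monthly export series with trend and yearly seasonality
rng = np.random.default_rng(3)
t = np.arange(72)
series = 100.0 + 0.5 * t + 5.0 * np.sin(2 * np.pi * t / 12) \
         + rng.normal(0.0, 1.0, 72)

# Lagged design matrix: predict month t from the previous 12 months
lags = 12
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=3).fit(X, y)
forecast = mlp.predict(series[-lags:][None, :])  # one-step-ahead forecast
```

Iterating this one-step prediction (appending each forecast to the window) yields a multi-month horizon, at the cost of accumulating error.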
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2019\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Adaptation and Recovery Stages for Case-Based Reasoning Systems Using Bayesian Estimation and Density Estimation with Nearest Neighbors.\n \n \n \n \n\n\n \n Bastidas Torres, D.; Piñeros Rodriguez, C.; Peluffo-Ordóñez, D., H.; Blanco Valencia, X.; Revelo-Fuelagán, J.; Becerra, M., A.; Castro-Ospina, A., E.; and Lorente-Leyva, L., L.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 339-350. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2019},\n keywords = {Bayes,Case-based reasoning,Classification,Parametric,Probability},\n pages = {339-350},\n websites = {http://link.springer.com/10.1007/978-3-030-14799-0_29},\n id = {c51496e6-1f31-3395-932e-19157744c8e2},\n created = {2022-02-02T02:35:19.006Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:19.006Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {BastidasTorres2019},\n private_publication = {false},\n abstract = {When searching for better solutions that improve the medical diagnosis accuracy, Case-Based reasoning systems (CBR) arise as a good option. This article seeks to improve these systems through the use of parametric and non-parametric probability estimation methods, particularly, at their recovery and adaptation stages. To this end, a set of experiments are conducted with two essentially different, medical databases (Cardiotocography and Cleveland databases), in order to find good parametric and non-parametric estimators. The results are remarkable as a high accuracy rate is achieved when using explored approaches: Naive Bayes and Nearest Neighbors (K-NN) estimators. In addition, a decrease on the involved processing time is reached, which suggests that proposed estimators incorporated into the recovery and adaptation stage becomes suitable for CBR systems, especially when dealing with support for medical diagnosis applications.},\n bibtype = {inbook},\n author = {Bastidas Torres, D. and Piñeros Rodriguez, C. and Peluffo-Ordóñez, Diego H. and Blanco Valencia, X. and Revelo-Fuelagán, Javier and Becerra, M. A. and Castro-Ospina, A. E. 
and Lorente-Leyva, Leandro L.},\n doi = {10.1007/978-3-030-14799-0_29},\n chapter = {Adaptation and Recovery Stages for Case-Based Reasoning Systems Using Bayesian Estimation and Density Estimation with Nearest Neighbors},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n When searching for solutions that improve medical diagnosis accuracy, case-based reasoning (CBR) systems arise as a good option. This article seeks to improve these systems through the use of parametric and non-parametric probability estimation methods, particularly at their recovery and adaptation stages. To this end, a set of experiments is conducted with two essentially different medical databases (the Cardiotocography and Cleveland databases), in order to find good parametric and non-parametric estimators. The results are remarkable, as a high accuracy rate is achieved when using the explored approaches: Naive Bayes and nearest-neighbors (k-NN) estimators. In addition, a decrease in processing time is achieved, which suggests that the proposed estimators, incorporated into the recovery and adaptation stages, are suitable for CBR systems, especially when dealing with support for medical diagnosis applications.\n
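The two estimator families the abstract compares — a parametric one (Naive Bayes) and a non-parametric one (k-NN density estimation) — can be contrasted in a few lines. As a stand-in for the Cardiotocography and Cleveland databases, which are not bundled with scikit-learn, this sketch uses another bundled medical dataset purely for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Stand-in medical data (the paper uses Cardiotocography and Cleveland)
X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Parametric (Gaussian Naive Bayes) vs non-parametric (k-NN) estimators;
# in a CBR system these would score candidate cases at the recovery stage
nb = GaussianNB().fit(Xtr, ytr)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)

acc_nb = nb.score(Xte, yte)
acc_knn = knn.score(Xte, yte)
```

Both models also expose `predict_proba`, which is what a recovery stage would use to rank retrieved cases rather than just to classify.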
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification system for corporate reputation based on financial variables.\n \n \n \n \n\n\n \n Londoño-Montoya, E.; Becerra, M., A.; Murillo-Escobar, J.; Gómez-Bayona, L.; Moreno-López, G.; and Peluffo-Ordoñez, D.\n\n\n \n\n\n\n RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"ClassificationWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Classification system for corporate reputation based on financial variables},\n type = {article},\n year = {2019},\n keywords = {Adaptive diffuse inference system,Corporate reputation,Optimization by particle swarm,Reputational index,Vector support machines},\n websites = {https://search.proquest.com/openview/fc081b269b3464d65f6211b07c6ca1e5/},\n id = {3f77b73a-1a0a-370d-b3a5-951dbf7d9a66},\n created = {2022-02-02T02:35:19.288Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:19.288Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Londono-Montoya2019},\n private_publication = {false},\n abstract = {The most important external assessment for companies is reputation, which is very difficult to calculate since its characterization may require a large number of qualitative and quantitative data. This study presents a comparison of different corporate reputation classification systems based on financial variables. Initially, a database was constructed using data from the Corporate Reputation Business Monitor and the Business Information and Reporting System of the Colombian Superintendence of Companies. The records were labeled as high and low. Then, a relevance analysis was carried out, using linear discriminant analysis. Four classifiers (ANFIS, K-NN, F-NN, and SVM-PSO) were compared to categorize the reputation, achieving a performance of 94% accuracy, which allowed to demonstrate the discriminant capacity of the financial variables to classify the reputation.},\n bibtype = {article},\n author = {Londoño-Montoya, Erika and Becerra, Miguel A. and Murillo-Escobar, Juan and Gómez-Bayona, Ledy and Moreno-López, Gustavo and Peluffo-Ordoñez, Diego},\n journal = {RISTI - Revista Iberica de Sistemas e Tecnologias de Informacao}\n}
\n
\n\n\n
\n The most important external assessment for companies is reputation, which is very difficult to calculate since its characterization may require a large amount of qualitative and quantitative data. This study presents a comparison of different corporate reputation classification systems based on financial variables. Initially, a database was constructed using data from the Corporate Reputation Business Monitor and the Business Information and Reporting System of the Colombian Superintendence of Companies. The records were labeled as high or low. Then, a relevance analysis was carried out using linear discriminant analysis. Four classifiers (ANFIS, k-NN, F-NN, and SVM-PSO) were compared to categorize reputation, achieving a performance of 94% accuracy, which demonstrated the discriminant capacity of the financial variables to classify reputation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel-Spectral-Clustering-Driven Motion Segmentation: Rotating-Objects First Trials.\n \n \n \n \n\n\n \n Oña-Rocha, O.; Riascos-Salas, J., A.; Marrufo-Rodríguez, I., C.; Páez-Jaime, M., A.; Mayorca-Torres, D.; Ponce-Guevara, K., L.; Salazar-Castro, J., A.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n Communications in Computer and Information Science, pages 30-40. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"CommunicationsWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2019},\n keywords = {Kernels,Motion tracking,Spectral clustering},\n pages = {30-40},\n websites = {http://link.springer.com/10.1007/978-3-030-36636-0_3},\n id = {b4ff31eb-8bda-3276-8468-21f96da4e6c3},\n created = {2022-02-02T02:35:19.557Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:19.557Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Ona-Rocha2019},\n private_publication = {false},\n abstract = {Time-varying data characterization and classification is a field of great interest in both scientific and technology communities. There exists a wide range of applications and challenging open issues such as: automatic motion segmentation, moving-object tracking, and movement forecasting, among others. In this paper, we study the use of the so-called kernel spectral clustering (KSC) approach to capture the dynamic behavior of frames - representing rotating objects - by means of kernel functions and feature relevance values. On the basis of previous research works, we formally derive a here-called tracking vector able to unveil sequential behavior patterns. As a remarkable outcome, we alternatively introduce an encoded version of the tracking vector by converting into decimal numbers the resulting clustering indicators. To evaluate our approach, we test the studied KSC-based tracking over a rotating object from the COIL 20 database. Preliminary results produce clear evidence about the relationship between the clustering indicators and the starting/ending time instance of a specific dynamic sequence.},\n bibtype = {inbook},\n author = {Oña-Rocha, O. and Riascos-Salas, J. A. and Marrufo-Rodríguez, I. C. and Páez-Jaime, M. A. and Mayorca-Torres, D. and Ponce-Guevara, K. L. and Salazar-Castro, J. A. and Peluffo-Ordóñez, D. 
H.},\n doi = {10.1007/978-3-030-36636-0_3},\n chapter = {Kernel-Spectral-Clustering-Driven Motion Segmentation: Rotating-Objects First Trials},\n title = {Communications in Computer and Information Science}\n}
\n
\n\n\n
\n Time-varying data characterization and classification is a field of great interest in both the scientific and technology communities. There exists a wide range of applications and challenging open issues, such as automatic motion segmentation, moving-object tracking, and movement forecasting, among others. In this paper, we study the use of the so-called kernel spectral clustering (KSC) approach to capture the dynamic behavior of frames representing rotating objects by means of kernel functions and feature relevance values. On the basis of previous research works, we formally derive a here-called tracking vector able to unveil sequential behavior patterns. As a remarkable outcome, we alternatively introduce an encoded version of the tracking vector by converting the resulting clustering indicators into decimal numbers. To evaluate our approach, we test the studied KSC-based tracking on a rotating object from the COIL-20 database. Preliminary results produce clear evidence of the relationship between the clustering indicators and the starting/ending time instants of a specific dynamic sequence.\n
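The encoding step — converting binary clustering indicators into decimal numbers, one code per frame — can be sketched as bit-packing each indicator row. The indicator matrix below is hypothetical (5 frames, 3 clusters), not taken from the COIL-20 experiments.

```python
import numpy as np

def encode_tracking_vector(indicators):
    # Each row is a binary cluster-membership indicator for one frame;
    # packing the bits into a decimal number yields one code per frame
    weights = 2 ** np.arange(indicators.shape[1])[::-1]
    return indicators @ weights

# Hypothetical indicators for 5 frames and 3 clusters
frames = np.array([[1, 0, 0],
                   [1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [0, 1, 0]])
codes = encode_tracking_vector(frames)
print(codes.tolist())  # [4, 4, 2, 1, 2]
```

Runs of equal codes then mark the start and end of a dynamic sequence, which is the relationship the abstract reports between the indicators and the time instants.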
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2018\n \n \n (6)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Comparative Analysis Between Embedded-Spaces-Based and Kernel-Based Approaches for Interactive Data Representation.\n \n \n \n \n\n\n \n Basante-Villota, C., K.; Ortega-Castillo, C., M.; Peña-Unigarro, D., F.; Revelo-Fuelagán, J., E.; Salazar-Castro, J., A.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n Communications in Computer and Information Science, pages 28-38. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"CommunicationsWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2018},\n keywords = {Artificial intelligence,CMDS,Dimensionality reduction methods,Kernel,Kernel PCA,LE,LLE},\n pages = {28-38},\n websites = {http://link.springer.com/10.1007/978-3-319-98998-3_3},\n id = {13f7872e-bc1a-3447-868e-48d7f4778dd2},\n created = {2022-02-02T02:35:19.827Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:19.827Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Basante-Villota2018},\n private_publication = {false},\n abstract = {This work presents a comparative analysis between the linear combination of em-bedded spaces resulting from two approaches: (1) The application of dimensional reduction methods (DR) in their standard implementations, and (2) Their corresponding kernel-based approximations. Namely, considered DR methods are: CMDS (Classical Multi- Dimensional Scaling), LE (Laplacian Eigenmaps) and LLE (Locally Linear Embedding). This study aims at determining -through objective criteria- what approach obtains the best performance of DR task for data visualization. The experimental validation was performed using four databases from the UC Irvine Machine Learning Repository. The quality of the obtained embedded spaces is evaluated regarding the RNX(K) criterion. The RNX(K) allows for evaluating the area under the curve, which indicates the performance of the technique in a global or local topology. Additionally, we measure the computational cost for every comparing experiment. A main contribution of this work is the provided discussion on the selection of an interactivity model when mixturing DR methods, which is a crucial aspect for information visualization purposes.},\n bibtype = {inbook},\n author = {Basante-Villota, C. K. and Ortega-Castillo, C. M. and Peña-Unigarro, D. F. and Revelo-Fuelagán, J. E. 
and Salazar-Castro, J. A. and Peluffo-Ordóñez, D. H.},\n doi = {10.1007/978-3-319-98998-3_3},\n chapter = {Comparative Analysis Between Embedded-Spaces-Based and Kernel-Based Approaches for Interactive Data Representation},\n title = {Communications in Computer and Information Science}\n}
\n
\n\n\n
\n This work presents a comparative analysis between the linear combination of embedded spaces resulting from two approaches: (1) the application of dimensionality reduction (DR) methods in their standard implementations, and (2) their corresponding kernel-based approximations. Namely, the considered DR methods are CMDS (Classical Multi-Dimensional Scaling), LE (Laplacian Eigenmaps) and LLE (Locally Linear Embedding). This study aims at determining, through objective criteria, which approach achieves the best performance in the DR task for data visualization. The experimental validation was performed using four databases from the UC Irvine Machine Learning Repository. The quality of the obtained embedded spaces is evaluated in terms of the RNX(K) criterion. RNX(K) allows for evaluating the area under the curve, which indicates the performance of the technique in a global or local topology. Additionally, we measure the computational cost of every comparison experiment. A main contribution of this work is the provided discussion on the selection of an interactivity model when mixing DR methods, which is a crucial aspect for information visualization purposes.\n
\n\n\n
\n\n\n
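The entry above compares standard DR methods against kernel-based approximations combined linearly. As a minimal sketch of that idea (not the chapter's implementation; the function name, the RBF kernels, and the fixed weights are illustrative assumptions), a weighted sum of kernel matrices can be embedded with a kernel-PCA-style eigendecomposition:

```python
import numpy as np

def kernel_mixture_embedding(kernels, weights, dim=2):
    # Weighted linear combination of kernel matrices, one per DR method.
    K = sum(w * Km for w, Km in zip(weights, kernels))
    n = K.shape[0]
    # Center the kernel in feature space (standard kernel-PCA step).
    H = np.eye(n) - np.ones((n, n)) / n
    K = H @ K @ H
    vals, vecs = np.linalg.eigh((K + K.T) / 2)   # eigh sorts ascending
    top = np.argsort(vals)[::-1][:dim]           # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

# Toy usage: mix a narrow-bandwidth and a wide-bandwidth RBF kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = kernel_mixture_embedding([np.exp(-sq), np.exp(-sq / 10)], [0.4, 0.6])
```

The weighting factors here play the role of the interactivity model discussed in the abstract: changing them re-mixes the kernels before the spectral step.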
\n \n\n \n \n \n \n \n \n Developments on Solutions of the Normalized-Cut-Clustering Problem Without Eigenvectors.\n \n \n \n \n\n\n \n Lorente-Leyva, L., L.; Herrera-Granda, I., D.; Rosero-Montalvo, P., D.; Ponce-Guevara, K., L.; Castro-Ospina, A., E.; Becerra, M., A.; Peluffo-Ordóñez, D., H.; and Rodríguez-Sotelo, J., L.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 318-328. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 6 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2018},\n keywords = {Eigenvectors,Graph-based clustering,Normalized cut clustering,Quadratic forms},\n pages = {318-328},\n websites = {http://link.springer.com/10.1007/978-3-319-92537-0_37},\n id = {c5c701b3-5c1a-39df-9068-b3bd77248f26},\n created = {2022-02-02T02:35:20.105Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:20.105Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Lorente-Leyva2018},\n private_publication = {false},\n abstract = {Normalized-cut clustering (NCC) is a benchmark graph-based approach for unsupervised data analysis. Since its traditional formulation is a quadratic form subject to orthogonality conditions, it is often solved within an eigenvector-based framework. Nonetheless, in some cases the calculation of eigenvectors is prohibitive or unfeasible due to the involved computational cost – for instance, when dealing with high dimensional data. In this work, we present an overview of recent developments on approaches to solve the NCC problem with no requiring the calculation of eigenvectors. Particularly, heuristic-search and quadratic-formulation-based approaches are studied. Such approaches are elegantly deduced and explained, as well as simple ways to implement them are provided.},\n bibtype = {inbook},\n author = {Lorente-Leyva, Leandro Leonardo and Herrera-Granda, Israel David and Rosero-Montalvo, Paul D. and Ponce-Guevara, Karina L. and Castro-Ospina, Andrés Eduardo and Becerra, Miguel A. 
and Peluffo-Ordóñez, Diego Hernán and Rodríguez-Sotelo, José Luis},\n doi = {10.1007/978-3-319-92537-0_37},\n chapter = {Developments on Solutions of the Normalized-Cut-Clustering Problem Without Eigenvectors},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n Normalized-cut clustering (NCC) is a benchmark graph-based approach for unsupervised data analysis. Since its traditional formulation is a quadratic form subject to orthogonality conditions, it is often solved within an eigenvector-based framework. Nonetheless, in some cases the calculation of eigenvectors is prohibitive or unfeasible due to the computational cost involved, for instance, when dealing with high-dimensional data. In this work, we present an overview of recent developments on approaches that solve the NCC problem without requiring the calculation of eigenvectors. In particular, heuristic-search and quadratic-formulation-based approaches are studied. These approaches are formally derived and explained, and simple ways to implement them are provided.\n
\n\n\n
\n\n\n
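The abstract above surveys eigenvector-free solutions to the normalized-cut problem. As a hedged toy stand-in for the heuristic-search family it mentions (this is not any specific algorithm from the chapter; the greedy label-flipping scheme and function names are illustrative), one can minimize the normalized-cut objective directly by local search:

```python
import numpy as np

def ncut(W, labels):
    # Normalized-cut objective for a binary partition of similarity graph W.
    total = 0.0
    for c in (0, 1):
        in_c = labels == c
        assoc = W[in_c].sum()
        if assoc == 0:                      # forbid empty clusters
            return np.inf
        total += W[in_c][:, ~in_c].sum() / assoc
    return total

def greedy_ncut(W, sweeps=20, seed=0):
    # Eigenvector-free heuristic: flip one label at a time, keeping only
    # flips that decrease the normalized cut.
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=W.shape[0])
    best = ncut(W, labels)
    for _ in range(sweeps):
        improved = False
        for i in range(labels.size):
            labels[i] ^= 1
            val = ncut(W, labels)
            if val < best:
                best, improved = val, True
            else:
                labels[i] ^= 1              # revert the flip
        if not improved:
            break
    return labels

# Toy usage: two weakly connected groups of five nodes each.
W = np.full((10, 10), 0.01)
W[:5, :5] = 1.0
W[5:, 5:] = 1.0
labels = greedy_ncut(W)
```

No eigendecomposition appears anywhere: each step evaluates only the quadratic-form objective, which is the trade-off the surveyed approaches exploit.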
\n \n\n \n \n \n \n \n \n Case-Based Reasoning Systems for Medical Applications with Improved Adaptation and Recovery Stages.\n \n \n \n \n\n\n \n Blanco Valencia, X.; Bastidas Torres, D.; Piñeros Rodriguez, C.; Peluffo-Ordóñez, D., H.; Becerra, M., A.; and Castro-Ospina, A., E.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 26-38. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2018},\n keywords = {Cascade classification,Case-based reasoning,Preprocessing,Probability},\n pages = {26-38},\n websites = {http://link.springer.com/10.1007/978-3-319-78723-7_3},\n id = {ced5c4c3-f345-37a7-8703-73a697ec4611},\n created = {2022-02-02T02:35:20.375Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:20.375Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {BlancoValencia2018},\n private_publication = {false},\n abstract = {Case-Based Reasoning Systems (CBR) are in constant evolution, as a result, this article proposes improving the retrieve and adaption stages through a different approach. A series of experiments were made, divided in three sections: a proper pre-processing technique, a cascade classification, and a probability estimation procedure. Every stage offers an improvement, a better data representation, a more efficient classification, and a more precise probability estimation provided by a Support Vector Machine (SVM) estimator regarding more common approaches. Concluding, more complex techniques for classification and probability estimation are possible, improving CBR systems performance due to lower classification error in general cases.},\n bibtype = {inbook},\n author = {Blanco Valencia, X. and Bastidas Torres, D. and Piñeros Rodriguez, C. and Peluffo-Ordóñez, D. H. and Becerra, M. A. and Castro-Ospina, A. E.},\n doi = {10.1007/978-3-319-78723-7_3},\n chapter = {Case-Based Reasoning Systems for Medical Applications with Improved Adaptation and Recovery Stages},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n Case-Based Reasoning (CBR) systems are in constant evolution; accordingly, this article proposes improving the retrieval and adaptation stages through a different approach. A series of experiments was carried out, divided into three parts: a proper pre-processing technique, a cascade classification, and a probability estimation procedure. Every stage offers an improvement: a better data representation, a more efficient classification, and a more precise probability estimation, provided by a Support Vector Machine (SVM) estimator, compared with more common approaches. In conclusion, more complex techniques for classification and probability estimation are feasible and improve CBR system performance through a lower classification error in general cases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalized Low-Computational Cost Laplacian Eigenmaps.\n \n \n \n \n\n\n \n Salazar-Castro, J., A.; Peña, D., F.; Basante, C.; Ortega, C.; Cruz-Cruz, L.; Revelo-Fuelagán, J.; Blanco-Valencia, X., P.; Castellanos-Domínguez, G.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 661-669. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2018},\n keywords = {Dimensionality reduction,Generalized methodology,Kernel approximations,Low-computational cost,Multiple kernel learning,Spectral methods},\n pages = {661-669},\n websites = {http://link.springer.com/10.1007/978-3-030-03493-1_69},\n id = {2b000186-83f4-39c2-bd9f-5a26c0b548ae},\n created = {2022-02-02T02:35:20.659Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:20.659Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Salazar-Castro2018},\n private_publication = {false},\n abstract = {Dimensionality reduction (DR) is a methodology used in many fields linked to data processing, and may represent a preprocessing stage or be an essential element for the representation and classification of data. The main objective of DR is to obtain a new representation of the original data in a space of smaller dimension, such that more refined information is produced, as well as the time of the subsequent processing is decreased and/or visual representations more intelligible for human beings are generated. The spectral DR methods involve the calculation of an eigenvalue and eigenvector decomposition, which is usually high-computational-cost demanding, and, therefore, the task of obtaining a more dynamic and interactive user-machine integration is difficult. Therefore, for the design of an interactive IV system based on DR spectral methods, it is necessary to propose a strategy to reduce the computational cost required in the calculation of eigenvectors and eigenvalues. For this purpose, it is proposed to use locally linear submatrices and spectral embedding. This allows integrating natural intelligence with computational intelligence for the representation of data interactively, dynamically and at low computational cost. 
Additionally, an interactive model is proposed that allows the user to dynamically visualize the data through a weighted mixture.},\n bibtype = {inbook},\n author = {Salazar-Castro, J. A. and Peña, D. F. and Basante, C. and Ortega, C. and Cruz-Cruz, L. and Revelo-Fuelagán, J. and Blanco-Valencia, X. P. and Castellanos-Domínguez, G. and Peluffo-Ordóñez, D. H.},\n doi = {10.1007/978-3-030-03493-1_69},\n chapter = {Generalized Low-Computational Cost Laplacian Eigenmaps},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n Dimensionality reduction (DR) is a methodology used in many fields linked to data processing, and may represent a preprocessing stage or an essential element for the representation and classification of data. The main objective of DR is to obtain a new representation of the original data in a space of smaller dimension, such that more refined information is produced, the time of subsequent processing is decreased, and/or visual representations that are more intelligible for human beings are generated. Spectral DR methods involve the calculation of an eigenvalue and eigenvector decomposition, which is usually computationally expensive, making a dynamic and interactive user-machine integration difficult to achieve. Therefore, for the design of an interactive information visualization (IV) system based on spectral DR methods, it is necessary to propose a strategy that reduces the computational cost required in the calculation of eigenvectors and eigenvalues. For this purpose, the use of locally linear submatrices and spectral embedding is proposed. This allows integrating natural intelligence with computational intelligence for the representation of data interactively, dynamically and at low computational cost. Additionally, an interactive model is proposed that allows the user to dynamically visualize the data through a weighted mixture.\n
\n\n\n
\n\n\n
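For orientation on the entry above: the chapter's contribution is a low-cost approximation of the spectral step, and the following is only a dense, toy-scale sketch of the standard Laplacian eigenmaps baseline being approximated (the function name and the RBF similarity graph are illustrative, not the chapter's code):

```python
import numpy as np

def laplacian_eigenmaps(W, dim=2):
    # Standard (dense, toy-scale) Laplacian eigenmaps on similarity matrix W.
    d = W.sum(axis=1)
    L = np.diag(d) - W
    Dinv = np.diag(1.0 / np.sqrt(d))
    # Generalized problem L v = lambda D v via the symmetric normalization.
    vals, vecs = np.linalg.eigh(Dinv @ L @ Dinv)
    order = np.argsort(vals)
    # Skip the trivial constant eigenvector (eigenvalue ~ 0).
    return (Dinv @ vecs)[:, order[1:dim + 1]]

# Toy usage: RBF similarity graph over random points.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = laplacian_eigenmaps(np.exp(-sq))
```

The `np.linalg.eigh` call is the O(n³) bottleneck that motivates the locally-linear-submatrix strategy described in the abstract.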
\n \n\n \n \n \n \n \n \n A Novel Color-Based Data Visualization Approach Using a Circular Interaction Model and Dimensionality Reduction.\n \n \n \n \n\n\n \n Salazar-Castro, J., A.; Rosero-Montalvo, P., D.; Peña-Unigarro, D., F.; Umaquinga-Criollo, A., C.; Castillo-Marrero, Z.; Revelo-Fuelagán, E., J.; Peluffo-Ordóñez, D., H.; and Castellanos-Domínguez, C., G.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 557-567. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2018},\n keywords = {Data visualization,Dimensionality reduction,Interactive interface,Pairwise similarity},\n pages = {557-567},\n websites = {http://link.springer.com/10.1007/978-3-319-92537-0_64},\n id = {a4ecc1df-e72e-3ad0-b114-b27746f7c9ee},\n created = {2022-02-02T02:35:20.927Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:20.927Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Salazar-Castro2018a},\n private_publication = {false},\n abstract = {Dimensionality reduction (DR) methods are able to produce low-dimensional representations of an input data sets which may become intelligible for human perception. Nonetheless, most existing DR approaches lack the ability to naturally provide the users with the faculty of controlability and interactivity. In this connection, data visualization (DataVis) results in an ideal complement. This work presents an integration of DR and DataVis through a new approach for data visualization based on a mixture of DR resultant representations while using visualization principle. Particularly, the mixture is done through a weighted sum, whose weighting factors are defined by the user through a novel interface. The interface’s concept relies on the combination of the color-based and geometrical perception in a circular framework so that the users may have a at hand several indicators (shape, color, surface size) to make a decision on a specific data representation. Besides, pairwise similarities are plotted as a non-weighted graph to include a graphic notion of the structure of input data. 
Therefore, the proposed visualization approach enables the user to interactively combine DR methods, while providing information about the structure of original data, making then the selection of a DR scheme more intuitive.},\n bibtype = {inbook},\n author = {Salazar-Castro, Jose Alejandro and Rosero-Montalvo, Paul D. and Peña-Unigarro, Diego Fernando and Umaquinga-Criollo, Ana Cristina and Castillo-Marrero, Zenaida and Revelo-Fuelagán, Edgardo Javier and Peluffo-Ordóñez, Diego Hernán and Castellanos-Domínguez, César Germán},\n doi = {10.1007/978-3-319-92537-0_64},\n chapter = {A Novel Color-Based Data Visualization Approach Using a Circular Interaction Model and Dimensionality Reduction},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n Dimensionality reduction (DR) methods are able to produce low-dimensional representations of an input data set which may become intelligible for human perception. Nonetheless, most existing DR approaches lack the ability to naturally provide the user with controllability and interactivity. In this connection, data visualization (DataVis) is an ideal complement. This work presents an integration of DR and DataVis through a new approach for data visualization based on a mixture of the representations resulting from DR, while using visualization principles. In particular, the mixture is done through a weighted sum, whose weighting factors are defined by the user through a novel interface. The interface’s concept relies on the combination of color-based and geometrical perception in a circular framework, so that users have at hand several indicators (shape, color, surface size) to make a decision on a specific data representation. Besides, pairwise similarities are plotted as a non-weighted graph to include a graphic notion of the structure of the input data. Therefore, the proposed visualization approach enables the user to interactively combine DR methods, while providing information about the structure of the original data, thus making the selection of a DR scheme more intuitive.\n
\n\n\n
\n\n\n
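The weighted-sum mixture described in the entry above reduces to a convex combination of the embeddings produced by each DR method. A minimal sketch (function name and normalization choice are assumptions; the paper's interface supplies the weights interactively):

```python
import numpy as np

def mix_embeddings(embeddings, weights):
    # Convex combination of several DR embeddings of the same data set;
    # the weighting factors play the role of the user's interface choices.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * E for wi, E in zip(w, embeddings))

# Toy usage: blend two 2-D embeddings 70/30.
rng = np.random.default_rng(1)
A, B = rng.normal(size=(25, 2)), rng.normal(size=(25, 2))
M = mix_embeddings([A, B], [0.7, 0.3])
```

Because the mixture is linear, dragging a weight in the interface moves every point along a straight path between the two embeddings, which is what makes the interaction feel continuous.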
\n \n\n \n \n \n \n \n \n Angle-Based Model for Interactive Dimensionality Reduction and Data Visualization.\n \n \n \n \n\n\n \n Basante-Villota, C., K.; Ortega-Castillo, C., M.; Peña-Unigarro, D., F.; Revelo-Fuelagán, E., J.; Salazar-Castro, J., A.; Ortega-Bustamante, M.; Rosero-Montalvo, P.; Vega-Escobar, L., S.; and Peluffo-Ordóñez, D., H.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 149-157. 2018.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2018},\n keywords = {Data visualization,Dimensionality reduction,Kernel PCA,Pairwise similarity},\n pages = {149-157},\n websites = {http://link.springer.com/10.1007/978-3-030-01132-1_17},\n id = {e8e9dc2a-48ca-399f-9228-cc0fe945d246},\n created = {2022-02-02T02:35:21.208Z},\n file_attached = {false},\n profile_id = {aba9653c-d139-3f95-aad8-969c487ed2f3},\n group_id = {baf47cb8-5222-3492-962e-1467155db3dc},\n last_modified = {2022-02-02T02:35:21.208Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n citation_key = {Basante-Villota2018a},\n private_publication = {false},\n abstract = {In recent times, an undeniable fact is that the amount of data available has increased dramatically due mainly to the advance of new technologies allowing for storage and communication of enormous volumes of information. In consequence, there is an important need for finding the relevant information within the raw data through the application of novel data visualization techniques that permit the correct manipulation of data. This issue has motivated the development of graphic forms for visually representing and analyzing high-dimensional data. Particularly, in this work, we propose a graphical approach, which, allows the combination of dimensionality reduction (DR) methods using an angle-based model, making the data visualization more intelligible. Such approach is designed for a readily use, so that the input parameters are interactively given by the user within a user-friendly environment. The proposed approach enables users (even those being non-experts) to intuitively select a particular DR method or perform a mixture of methods. 
The experimental results prove that the interactive manipulation enabled by the here-proposed model-due to its ability of displaying a variety of embedded spaces-makes the task of selecting a embedded space simpler and more adequately fitted for a specific need.},\n bibtype = {inbook},\n author = {Basante-Villota, Cielo K. and Ortega-Castillo, Carlos M. and Peña-Unigarro, Diego F. and Revelo-Fuelagán, E. Javier and Salazar-Castro, Jose A. and Ortega-Bustamante, MacArthur and Rosero-Montalvo, Paul and Vega-Escobar, Laura Stella and Peluffo-Ordóñez, Diego H.},\n doi = {10.1007/978-3-030-01132-1_17},\n chapter = {Angle-Based Model for Interactive Dimensionality Reduction and Data Visualization},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n In recent times, an undeniable fact is that the amount of available data has increased dramatically, due mainly to the advance of new technologies allowing for the storage and communication of enormous volumes of information. In consequence, there is an important need for finding the relevant information within the raw data through the application of novel data visualization techniques that permit the correct manipulation of data. This issue has motivated the development of graphic forms for visually representing and analyzing high-dimensional data. In particular, in this work we propose a graphical approach which allows the combination of dimensionality reduction (DR) methods using an angle-based model, making the data visualization more intelligible. This approach is designed for ready use, so that the input parameters are interactively given by the user within a user-friendly environment. The proposed approach enables users (even non-experts) to intuitively select a particular DR method or perform a mixture of methods. The experimental results show that the interactive manipulation enabled by the proposed model, due to its ability to display a variety of embedded spaces, makes the task of selecting an embedded space simpler and more adequately fitted to a specific need.\n
\n\n\n
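One plausible reading of the angle-based model in the entry above is that each DR method occupies an anchor angle on a circle, and a single user-selected angle determines the mixture weights. The scheme below is an illustrative assumption, not the paper's actual model:

```python
import numpy as np

def angle_weights(theta, anchors):
    # Map a user-selected angle to mixture weights: each DR method sits at
    # an anchor angle on the circle, and a method's weight grows as the
    # selected angle approaches its anchor (illustrative scheme only).
    diff = np.angle(np.exp(1j * (theta - np.asarray(anchors, dtype=float))))
    closeness = np.pi - np.abs(diff)        # in [0, pi], peaks at the anchor
    return closeness / closeness.sum()

# Three methods placed evenly on the circle; pointer near the first anchor.
w = angle_weights(0.1, [0.0, 2 * np.pi / 3, 4 * np.pi / 3])
```

A single angular control of this kind gives non-expert users one intuitive degree of freedom instead of one slider per DR method.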
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);