2024 (1)

Exploring the Potential of Genetic Algorithms for Optimizing Academic Schedules at the School of Mechatronic Engineering: Preliminary Results.
Alarcón, J.; Buitrón, S.; Carrillo, A.; Chuquimarca, M.; Ortiz, A.; Guachi, R.; Peluffo-Ordóñez, D. H.; and Guachi-Guachi, L.
Communications in Computer and Information Science, 1874 CCIS: 390-402. 2024.

@article{alarcon2024genetic,
  title = {Exploring the Potential of Genetic Algorithms for Optimizing Academic Schedules at the School of Mechatronic Engineering: Preliminary Results},
  author = {Alarcón, Johan and Buitrón, Samantha and Carrillo, Alexis and Chuquimarca, Mateo and Ortiz, Alexis and Guachi, Robinson and Peluffo-Ordóñez, D. H. and Guachi-Guachi, Lorena},
  year = {2024},
  journal = {Communications in Computer and Information Science},
  volume = {1874 CCIS},
  pages = {390-402},
  publisher = {Springer Science and Business Media Deutschland GmbH},
  keywords = {Equitable schedules, Genetic algorithm optimization, Resource allocation, Scheduling generation},
  url = {https://link.springer.com/chapter/10.1007/978-3-031-46813-1_26},
  doi = {10.1007/978-3-031-46813-1_26}
}

Abstract: The generation of schedules is a complex challenge, particularly in academic institutions aiming for equitable scheduling. The goal is to achieve fair and balanced schedules that meet the requirements of all parties involved, such as workload, class distribution, shifts, and other relevant criteria. To address this challenge, a genetic algorithm specifically designed for optimal schedule generation has been proposed as a solution. Adjusting genetic algorithm parameters impacts performance, and employing parameter optimization techniques effectively tackles this issue. This work introduces a genetic algorithm for optimal schedule generation, utilizing suitable encoding and operators, and evaluating quality through fitness techniques. Optimization efforts led to reduced execution time, improved solution quality, and positive outcomes like faster execution, fewer generations, increased stability, and convergence to optimal solutions.

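For readers unfamiliar with the approach, the sketch below illustrates the kind of genetic-algorithm loop the abstract describes (encoding, fitness, crossover, mutation, selection). It is a minimal illustration only: the section counts, slot counts, operators, and fitness penalty are hypothetical placeholders, not the encoding or parameters used in the paper.

import random

N_SECTIONS = 20        # hypothetical number of course sections to place
N_SLOTS = 10           # hypothetical number of weekly time slots
ROOMS_PER_SLOT = 3     # hypothetical number of rooms available per slot

def random_individual():
    # Encoding: one gene per section, holding the time slot it is assigned to.
    return [random.randrange(N_SLOTS) for _ in range(N_SECTIONS)]

def fitness(ind):
    # Penalty fitness (lower is better): assignments exceeding the rooms available in a slot.
    return sum(max(0, ind.count(slot) - ROOMS_PER_SLOT) for slot in range(N_SLOTS))

def crossover(a, b):
    cut = random.randrange(1, N_SECTIONS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    return [random.randrange(N_SLOTS) if random.random() < rate else g for g in ind]

def evolve(pop_size=50, generations=200):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                # best (lowest penalty) first
        elite = pop[:pop_size // 5]          # keep the top 20%
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = random.sample(elite, 2)
            children.append(mutate(crossover(pa, pb)))
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("best schedule penalty:", fitness(best))
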
2023 (5)

Convolutional Neural Network for Background Removal in Close Range Photogrammetry: Application on Cultural Heritage Artefacts.
Bici, M.; Gherardini, F.; de Los Angeles Guachi-Guachi, L.; Guachi, R.; and Campana, F.
In Lecture Notes in Mechanical Engineering, 2023.

@inproceedings{bici2023background,
  title = {Convolutional Neural Network for Background Removal in Close Range Photogrammetry: Application on Cultural Heritage Artefacts},
  author = {Bici, Michele and Gherardini, Francesco and de Los Angeles Guachi-Guachi, Lorena and Guachi, Robinson and Campana, Francesca},
  year = {2023},
  booktitle = {Lecture Notes in Mechanical Engineering},
  keywords = {CNN, Close range photogrammetry, Cultural heritage preservation, MobilenetV2, Reverse engineering, U-Net},
  doi = {10.1007/978-3-031-15928-2_68}
}

Abstract: The post-processing pipeline for image analysis in reverse engineering modelling, such as photogrammetry applications, still requires manual interventions, mainly for shadow and reflection corrections and, often, for background removal. The use of Convolutional Neural Networks (CNNs) may conveniently help in recognition and background removal. This paper presents an approach based on CNNs for background removal, assessing its efficiency. Its relevance pertains to a comparison of CNN approaches versus manual assessment, in terms of accuracy versus automation, with reference to cultural heritage targets. Through a bronze statue test case, pros and cons are discussed with respect to the final model accuracy. The adopted CNN is based on the U-Net-MobileNetV2 architecture, a combination of two deep networks, to converge faster and achieve higher efficiency with small datasets. The dataset consists of over 700 RGB images used to provide knowledge from which CNNs can extract features and distinguish the pixels of the statue from background ones. To extend CNN capabilities, training sets with and without dataset integration are investigated. The Dice coefficient is applied to evaluate CNN efficiency. The results obtained are used for the photogrammetric reconstruction of the Principe Ellenistico model. This 3D model is compared with a model obtained through a 3D scanner. Moreover, through a comparison with a photogrammetric 3D model obtained without CNN background removal, performance is evaluated. Despite a few errors due to poor lighting conditions, the advantages in terms of process automation are considerable (over 50% time reduction).

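The Dice coefficient used for evaluation in this work is the standard overlap score between a predicted mask and a reference mask. A minimal NumPy version, assuming binary masks, is:

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # 2*|A intersect B| / (|A| + |B|) on boolean masks; eps avoids division by zero on empty masks.
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 4x4 masks overlapping on three pixels (Dice = 6/7).
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1] = True; b[1, 2] = True
print(round(dice_coefficient(a, b), 3))
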
Interactive Information Visualization Models: A Systematic Literature Review.
Ortega-Bustamante, M. A.; Hasperué, W.; Peluffo-Ordóñez, D. H.; Imbaquingo, D.; Raki, H.; Aalaila, Y.; Elhamdi, M.; and Guachi-Guachi, L.
In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2023.

@inproceedings{ortega2023interactive,
  title = {Interactive Information Visualization Models: A Systematic Literature Review},
  author = {Ortega-Bustamante, MacArthur A. and Hasperué, Waldo and Peluffo-Ordóñez, Diego H. and Imbaquingo, Daisy and Raki, Hind and Aalaila, Yahya and Elhamdi, Mouad and Guachi-Guachi, Lorena},
  year = {2023},
  booktitle = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
  keywords = {dimensionality reduction, information visualization, interactive models},
  doi = {10.1007/978-3-031-36805-9_43}
}

Abstract: Interactive information visualization models aim to make dimensionality reduction (DR) accessible to non-expert users through interactive visualization frameworks. This systematic literature review explores the role of DR and information visualization (IV) techniques in interactive models (IM). We search relevant bibliographic databases, including IEEE Xplore, Springer Link, and Web of Science, for publications from the last five years. We identify 1448 scientific articles, which we then narrow down to 52 after screening and selection. This study addresses three research questions, revealing that the number of articles focused on interactive DR-oriented models has been in the minority in the last five years. However, related topics such as IV techniques or DR methods have increased. Trends are identified in the development of interactive models, as well as in IV techniques and DR methods. For example, researchers are increasingly proposing new DR methods or modifying existing ones rather than relying solely on established techniques. Furthermore, scatter plots have emerged as the predominant option for IV in interactive models, with limited options for customizing the display of raw data and details in application windows. Overall, this review provides insights into the current state of interactive IV models for DR and highlights areas for further research.

Convolutional neural networks applied to microtomy: Identifying the trimming-end cutting routine on paraffin-embedded tissue blocks.
Guachi-Guachi, L.; Ruspi, J.; Scarlino, P.; Poliziani, A.; Ciancia, S.; Lunni, D.; Baldi, G.; Cavazzana, A.; Zucca, A.; Bellini, M.; Pedrazzini, G. A.; Ciuti, G.; Controzzi, M.; Vannozzi, L.; and Ricotti, L.
Engineering Applications of Artificial Intelligence, 126: 106963. November 2023.

@article{guachi2023microtomy,
  title = {Convolutional neural networks applied to microtomy: Identifying the trimming-end cutting routine on paraffin-embedded tissue blocks},
  author = {Guachi-Guachi, Lorena and Ruspi, Jacopo and Scarlino, Paola and Poliziani, Aliria and Ciancia, Sabrina and Lunni, Dario and Baldi, Gabriele and Cavazzana, Andrea and Zucca, Alessandra and Bellini, Marco and Pedrazzini, Gian Andrea and Ciuti, Gastone and Controzzi, Marco and Vannozzi, Lorenzo and Ricotti, Leonardo},
  year = {2023},
  journal = {Engineering Applications of Artificial Intelligence},
  volume = {126},
  pages = {106963},
  month = {11},
  publisher = {Pergamon},
  keywords = {Anatomic pathology, Deep learning, Microtomy, Trimming},
  doi = {10.1016/j.engappai.2023.106963}
}

Abstract: In the field of histopathology, the microtomy procedure yields thin sections of tissue embedded in paraffin blocks, to be further processed for diagnostic purposes. Within microtomy, trimming is an initial but critical process in which the excess paraffin covering the tissue of interest is removed by continuous cutting routines, until the tissue is suitably exposed and ready for sectioning. Trimming is currently a time-consuming process that is performed manually by technicians. In this paper, we present a method to automate this process by analyzing tissue block surface images resulting from each cyclic cutting routine. Two types of Convolutional Neural Networks (CNNs) were fine-tuned: one for binary segmentation, the other for multi-class classification, by exploring and optimizing lightweight architectures to provide fast analytical results on cost-effective edge computing. Two sequential online conditions followed the CNNs to output the current stage of the block surface and rule whether the trimming-end cutting routine was reached. We compared the results obtained through our method with those obtained by three skilled technicians processing 75 tissue blocks. The proposed method identified the trimming-end cutting routine approximately as accurately as the technicians did, yielding up to 90% of trimmed blocks of optimal quality and evidencing the potential of this tool in future automated trimming instruments. We deployed our method to an Edge TPU hardware accelerator to showcase its capability to provide immediate and objective results at every microtomy station equipped with ad hoc hardware, potentially guaranteeing a throughput 50% higher than manual trimming.

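The two sequential online conditions described in the abstract can be pictured as a simple decision loop over per-cut CNN outputs. The sketch below is only illustrative: the label names, exposure threshold, and stability window are assumptions, and the real system relies on the paper's trained segmentation and classification networks rather than precomputed values.

def trimming_finished(stages, ratios, ratio_threshold=0.9, stable_cuts=2):
    # stages: per-cut surface labels from the classification CNN
    # ratios: per-cut exposed-tissue fractions from the segmentation CNN
    for i, stage in enumerate(stages):
        recent = ratios[max(0, i - stable_cuts + 1): i + 1]
        stage_ok = stage == "exposed"                               # condition 1: surface class
        ratio_ok = len(recent) == stable_cuts and all(
            r >= ratio_threshold for r in recent)                   # condition 2: stable exposure
        if stage_ok and ratio_ok:
            return i        # index of the trimming-end cutting routine
    return None

# Toy trace of five cutting routines.
print(trimming_finished(
    ["paraffin", "partial", "partial", "exposed", "exposed"],
    [0.05, 0.40, 0.75, 0.92, 0.95]))
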
Joint Exploration of Kernel Functions Potential for Data Representation and Classification: A First Step Toward Interactive Interpretable Dimensionality Reduction.
Aalaila, Y.; Bachchar, I.; Raki, H.; Bamansour, S.; Elhamdi, M.; Benghzial, K.; Ortega-Bustamante, M.; Guachi-Guachi, L.; and Peluffo-Ordóñez, D. H.
SN Computer Science, 5(1): 1-8. December 2023.

@article{aalaila2023kernel,
  title = {Joint Exploration of Kernel Functions Potential for Data Representation and Classification: A First Step Toward Interactive Interpretable Dimensionality Reduction},
  author = {Aalaila, Yahya and Bachchar, Ismail and Raki, Hind and Bamansour, Sami and Elhamdi, Mouad and Benghzial, Kaoutar and Ortega-Bustamante, MacArthur and Guachi-Guachi, Lorena and Peluffo-Ordóñez, Diego H.},
  year = {2023},
  journal = {SN Computer Science},
  volume = {5},
  number = {1},
  pages = {1-8},
  month = {12},
  publisher = {Springer Science and Business Media LLC},
  url = {https://link.springer.com/article/10.1007/s42979-023-02405-9},
  doi = {10.1007/s42979-023-02405-9}
}

Abstract: Dimensionality reduction (DR) approaches are often a crucial step in data analysis tasks, particularly for data visualization purposes. DR-based techniques are essentially designed to retain the inherent structure of high-dimensional data in a lower-dimensional space, leading to reduced computational complexity and improved pattern recognition accuracy. Specifically, Kernel Principal Component Analysis (KPCA) is a widely utilized dimensionality reduction technique due to its capability to effectively handle nonlinear data sets. It offers an easily interpretable formulation from both geometric and functional analysis perspectives. However, Kernel PCA relies on free hyperparameters, which are usually tuned in advance. The relationship between these hyperparameters and the structure of the embedded space remains undisclosed. This work presents preliminary steps to explore said relationship by jointly evaluating the data classification and representation abilities. To do so, an interactive visualization framework is introduced. This study highlights the importance of creating interactive interfaces that enable interpretable dimensionality reduction approaches for data visualization and analysis.

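A small scikit-learn sketch of the kind of joint exploration the abstract describes: vary the RBF-kernel hyperparameter of Kernel PCA and check how a simple classifier behaves in the resulting embedding. The dataset, classifier, and gamma grid are arbitrary choices for illustration, not the paper's experimental setup.

from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
for gamma in (0.01, 0.1, 1.0, 10.0):
    # 2-D RBF Kernel PCA embedding, then a 5-NN classifier scored in that embedding.
    Z = KernelPCA(n_components=2, kernel="rbf", gamma=gamma).fit_transform(X)
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), Z, y, cv=5).mean()
    print(f"gamma={gamma}: 5-NN accuracy in the embedding = {acc:.3f}")
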
Quantitative ultrasound assessment of healthy and degenerated cartilage.
Guachi-Guachi, L.; Sorriento, A.; Cafarelli, A.; Dolzani, P.; Lenzi, E.; Lisignoli, G.; and Ricotti, L.
IEEE International Ultrasonics Symposium, IUS. 2023.

@article{guachi2023ultrasound,
  title = {Quantitative ultrasound assessment of healthy and degenerated cartilage},
  author = {Guachi-Guachi, Lorena and Sorriento, Angela and Cafarelli, Andrea and Dolzani, Paolo and Lenzi, Enrico and Lisignoli, Gina and Ricotti, Leonardo},
  year = {2023},
  journal = {IEEE International Ultrasonics Symposium, IUS},
  publisher = {IEEE Computer Society},
  keywords = {cartilage degeneration, osteoarthritis, quantitative ultrasound, radiofrequency data},
  doi = {10.1109/IUS51837.2023.10306623}
}

Abstract: Ultrasound (US) imaging represents a safe option for cartilage monitoring. However, at present, it is not established as an objective and quantitative evaluation method. In this work, we introduce a novel quantitative US assessment based on six radiofrequency (RF) metrics intended to discriminate between healthy and chemically degraded cartilage tissues. Bovine cartilage samples were degraded with collagenase (responsible for collagen network deterioration) and scanned with a 15 MHz US imaging probe, which allowed RF data access. Results revealed a significant reduction in the values of approximate entropy, root mean square, and sample entropy after collagenase treatment, paving the way for the use of these techniques as a future diagnostic tool for cartilage monitoring.

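Of the RF metrics mentioned in the abstract, the root mean square is the simplest to state. The snippet below computes it on a synthetic RF line; the signal model is a stand-in for illustration, not real 15 MHz probe data.

import numpy as np

def rms(signal):
    # Root mean square of a sampled RF line.
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1e-5, 2000)                     # 10 microseconds of samples
rf_line = np.sin(2 * np.pi * 15e6 * t) * np.exp(-t / 4e-6) + 0.05 * rng.standard_normal(t.size)
print(f"RMS = {rms(rf_line):.3f}")
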
2022 (4)

3D Printing of Prototypes Starting from Medical Imaging: A Liver Case Study.
Guachi, R.; Bici, M.; Bini, F.; Calispa, M. E.; Oscullo, C.; Guachi, L.; Campana, F.; and Marinozzi, F.
pages 535-545. 2022.

@inbook{guachi2022liver,
  title = {3D Printing of Prototypes Starting from Medical Imaging: A Liver Case Study},
  author = {Guachi, Robinson and Bici, Michele and Bini, Fabiano and Calispa, Marcelo Esteban and Oscullo, Cristina and Guachi, Lorena and Campana, Francesca and Marinozzi, Franco},
  year = {2022},
  pages = {535-545},
  url = {https://link.springer.com/10.1007/978-3-030-91234-5_54},
  doi = {10.1007/978-3-030-91234-5_54}
}

Abstract: Hepatic diseases are a serious condition worldwide, and doctors often analyse the situation and elaborate a preoperative plan based exclusively on medical images, which is a drawback since they only provide a 2D view, and the location of the damaged tissues in three-dimensional space cannot be easily determined by surgeons. Nowadays, with the advancement of Computer Aided Design (CAD) technologies and image segmentation, a digital liver model can be obtained to help understand the particular medical case; with the geometric model, a virtual simulation can even be elaborated. This work is divided into two phases: the first phase involves a workflow to create a liver geometrical model from medical images, whereas the second phase provides a methodology to obtain a liver prototype using the technique of fused deposition modelling (FDM). The two stages determine and evaluate the most influential parameters to make this design repeatable for different hepatic diseases. The reported case study provides a valuable method for optimizing preoperative plans for liver disease. In addition, the prototype built with additive manufacturing will allow new doctors to speed up their learning curve, since they can manipulate the real geometry of the patient's liver with their hands.

Instance Selection on CNNs for Alzheimer's Disease Classification from MRI.
Castro-Silva, J.; Moreno-García, M.; Guachi-Guachi, L.; and Peluffo-Ordóñez, D.
In Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods, pages 330-337, 2022. SCITEPRESS - Science and Technology Publications.

@inproceedings{castro2022instance,
  title = {Instance Selection on CNNs for Alzheimer's Disease Classification from MRI},
  author = {Castro-Silva, J. and Moreno-García, M. and Guachi-Guachi, Lorena and Peluffo-Ordóñez, D.},
  year = {2022},
  booktitle = {Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods},
  pages = {330-337},
  publisher = {SCITEPRESS - Science and Technology Publications},
  url = {https://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0010900100003122},
  doi = {10.5220/0010900100003122}
}

Abstract: The selection of more informative instances from a dataset is an important preprocessing step that can be applied in many classification tasks. Since databases are becoming increasingly large, instance selection techniques have been used to reduce the data to a manageable size. Besides, the use of test data in any part of the training process, called data leakage, can produce a biased evaluation of classification algorithms. In this context, this work introduces an instance selection methodology to avoid data leakage using an early subject, volume, and slice dataset split, and a novel percentile-position-analysis method to identify the regions with the most informative instances. The proposed methodology includes four stages. First, 3D magnetic resonance images are prepared to extract 2D slices of all subjects and only one volume per subject. Second, the extracted 2D slices are evaluated in a percentile distribution fashion in order to select the most insightful 2D instances. Third, image preprocessing techniques are used to suppress noisy data, preserving semantic information in the image. Finally, the selected instances are used to generate the training, validation and test datasets. Preliminary tests are carried out on the OASIS-3 dataset to demonstrate the impact of the number of slices per subject, the preprocessing techniques, and the instance selection method on the overall performance of CNN-based classification models such as DenseNet121 and EfficientNetB0. The proposed methodology achieved a competitive overall accuracy at slice level of about 77.01%, in comparison to 76.94% reported by benchmark and recent works conducting experiments on the same dataset and focusing on instance selection approaches.

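The data-leakage issue the abstract addresses is usually avoided by splitting at subject level before any slice extraction, so that slices from one subject never end up in both training and test sets. A hedged sketch with scikit-learn's GroupShuffleSplit, using synthetic subject IDs and labels rather than the paper's exact split protocol:

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

subjects = np.array([f"sub-{i:03d}" for i in range(100)])      # synthetic subject IDs
labels = np.random.default_rng(0).integers(0, 2, size=100)     # synthetic binary labels

# Hold out 20% of the subjects; every slice later extracted from a subject
# inherits that subject's split, so no subject leaks across sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(subjects, labels, groups=subjects))
assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
print(len(train_idx), "training subjects /", len(test_idx), "test subjects")

# Percentile-position slice selection (illustrative): keep only slices located
# between the 40th and 60th percentile of the volume depth.
n_slices = 176
selected_slices = range(int(0.40 * n_slices), int(0.60 * n_slices))
print(len(list(selected_slices)), "slices kept per volume")
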
Crop Classification Using Deep Learning: A Quick Comparative Study of Modern Approaches.
Raki, H.; González-Vergara, J.; Aalaila, Y.; Elhamdi, M.; Bamansour, S.; Guachi-Guachi, L.; and Peluffo-Ordoñez, D. H.
In Communications in Computer and Information Science, 2022.

@inproceedings{raki2022crop,
  title = {Crop Classification Using Deep Learning: A Quick Comparative Study of Modern Approaches},
  author = {Raki, Hind and González-Vergara, Juan and Aalaila, Yahya and Elhamdi, Mouad and Bamansour, Sami and Guachi-Guachi, Lorena and Peluffo-Ordoñez, Diego H.},
  year = {2022},
  booktitle = {Communications in Computer and Information Science},
  keywords = {Convolutional neural networks, Deep learning, Smart farming},
  doi = {10.1007/978-3-031-19647-8_3}
}

Abstract: Automatic crop classification using new technologies is recognized as one of the most important assets in today's smart farming improvement. Investments in technology and innovation are key issues for shaping agricultural productivity as well as the inclusiveness and sustainability of the global agricultural transformation. Digital image processing (DIP) has been widely adopted in this field, merging Unmanned Aerial Vehicle (UAV)-based remote sensing and deep learning (DL) as a powerful tool for crop classification. Despite the wide range of alternatives, the proper selection of a DL approach is still an open and challenging issue. In this work, we carry out an exhaustive performance evaluation of three remarkable and lightweight DL approaches, namely: Visual Geometry Group (VGG), Residual Neural Network (ResNet) and Inception V3, tested on a high-resolution agricultural crop image dataset. Experimental results show that InceptionV3 outperforms VGG and ResNet in terms of precision (0.92), accuracy (0.97), recall (0.91), AUC (0.98), PCR (0.97), and F1 (0.91).

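A hedged Keras sketch of how the three compared backbone families can be given an identical classification head so they are evaluated under the same conditions; the input size, head, and class count are placeholders rather than the paper's training configuration, and VGG16 is used here as a stand-in for the VGG family.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

NUM_CLASSES = 5                      # hypothetical number of crop classes

def build(backbone_cls, input_shape=(224, 224, 3)):
    # Same global-average-pooled backbone plus an identical softmax head for each architecture.
    backbone = backbone_cls(include_top=False, weights=None,
                            input_shape=input_shape, pooling="avg")
    head = layers.Dense(NUM_CLASSES, activation="softmax")(backbone.output)
    return models.Model(backbone.input, head)

for cls in (VGG16, ResNet50, InceptionV3):
    print(f"{cls.__name__:12s} parameters: {build(cls).count_params():,}")
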
A Genetic Algorithm for Scheduling Laboratory Rooms: A Case Study.
Fuenmayor, R.; Larrea, M.; Moncayo, M.; Moya, E.; Trujillo, S.; Terneus, J. D.; Guachi, R.; Peluffo-Ordoñez, D. H.; and Guachi-Guachi, L.
In Communications in Computer and Information Science, 2022.

@inproceedings{fuenmayor2022scheduling,
  title = {A Genetic Algorithm for Scheduling Laboratory Rooms: A Case Study},
  author = {Fuenmayor, Rafael and Larrea, Martín and Moncayo, Mario and Moya, Esteban and Trujillo, Sebastián and Terneus, Juan Diego and Guachi, Robinson and Peluffo-Ordoñez, Diego H. and Guachi-Guachi, Lorena},
  year = {2022},
  booktitle = {Communications in Computer and Information Science},
  keywords = {Genetic algorithms, Mutation, Scheduling optimization},
  doi = {10.1007/978-3-031-19647-8_1}
}

Abstract: Genetic algorithms (GAs) are a great tool for solving optimization problems. Their characteristics and different components based on the principles of biological evolution make these algorithms very robust and efficient for this type of problem. Many research works have presented dedicated solutions to schedule or resource optimization problems in different areas and project types; most of them have adopted a GA implementation to find an individual that represents the best solution. Under this conception, in this work we present a GA with a controlled mutation operator aiming at maintaining a trade-off between diversity and survival of the best individuals of each generation. This modification yields an improvement in convergence time, efficiency of the results, and fulfillment of the constraints (of 29%, 14.98% and 23.33% respectively, compared with a state-of-the-art GA with a single random mutation operator) for the problem of optimizing the schedules of three laboratory rooms of the Mechatronics Engineering program of the International University of Ecuador.

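The controlled mutation operator is described only at a high level in the abstract; one way to realize the stated trade-off is to leave elites untouched and let the mutation probability grow with an individual's rank, as in the illustrative sketch below (the paper's actual operator may differ).

import random

def controlled_mutation(population, fitness, n_slots, elite_frac=0.2, max_rate=0.3):
    ranked = sorted(population, key=fitness)              # best (lowest penalty) first
    n_elite = max(1, int(elite_frac * len(ranked)))
    new_population = ranked[:n_elite]                     # elites pass through unchanged
    for rank, ind in enumerate(ranked[n_elite:], start=n_elite):
        rate = max_rate * rank / (len(ranked) - 1)        # worse rank -> stronger mutation
        new_population.append([random.randrange(n_slots) if random.random() < rate else g
                               for g in ind])
    return new_population

# Toy usage: 10 individuals, 8 genes each, fitness = number of sessions left in slot 0.
pop = [[random.randrange(6) for _ in range(8)] for _ in range(10)]
print(len(controlled_mutation(pop, fitness=lambda ind: ind.count(0), n_slots=6)), "individuals returned")
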
2021 (4)

Enhanced Convolutional-Neural-Network Architecture for Crop Classification.
Moreno-Revelo, M. Y.; Guachi-Guachi, L.; Gómez-Mendoza, J. B.; Revelo-Fuelagán, J.; and Peluffo-Ordóñez, D. H.
Applied Sciences, 11(9): 4292. May 2021.

@article{moreno2021crop,
  title = {Enhanced Convolutional-Neural-Network Architecture for Crop Classification},
  author = {Moreno-Revelo, Mónica Y. and Guachi-Guachi, Lorena and Gómez-Mendoza, Juan Bernardo and Revelo-Fuelagán, Javier and Peluffo-Ordóñez, Diego H.},
  year = {2021},
  journal = {Applied Sciences},
  volume = {11},
  number = {9},
  pages = {4292},
  month = {5},
  url = {https://www.mdpi.com/2076-3417/11/9/4292},
  doi = {10.3390/app11094292}
}

Abstract: Automatic crop identification and monitoring is a key element in enhancing food production processes as well as diminishing the related environmental impact. Although several efficient deep learning techniques have emerged in the field of multispectral imagery analysis, the crop classification problem still needs more accurate solutions. This work introduces a competitive methodology for crop classification from multispectral satellite imagery mainly using an enhanced 2D convolutional neural network (2D-CNN) designed at a smaller-scale architecture, as well as a novel post-processing step. The proposed methodology contains four steps: image stacking, patch extraction, classification model design (based on a 2D-CNN architecture), and post-processing. First, the images are stacked to increase the number of features. Second, the input images are split into patches and fed into the 2D-CNN model. Then, the 2D-CNN model is constructed within a small-scale framework and properly trained to recognize 10 different types of crops. Finally, a post-processing step is performed in order to reduce the classification error caused by lower-spatial-resolution images. Experiments were carried out over the so-called Campo Verde database, which consists of a set of satellite images captured by Landsat and Sentinel satellites from the municipality of Campo Verde, Brazil. In contrast to the maximum accuracy values reached by remarkable works reported in the literature (amounting to an overall accuracy of about 81%, an F1 score of 75.89%, and an average accuracy of 73.35%), the proposed methodology achieves a competitive overall accuracy of 81.20%, an F1 score of 75.89%, and an average accuracy of 88.72% when classifying 10 different crops, while ensuring an adequate trade-off between the number of multiply-accumulate operations (MACs) and accuracy. Furthermore, given its ability to effectively classify patches from two image sequences, this methodology may prove appealing for other real-world applications, such as the classification of urban materials.

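The patch-extraction step of the described pipeline can be sketched as follows; the image size, band count, patch size, and stride are placeholders, not the values used with the Campo Verde imagery.

import numpy as np

def extract_patches(image, patch=32, stride=32):
    # Non-overlapping (stride == patch) square patches over a (H, W, bands) array.
    h, w, _ = image.shape
    return np.array([image[r:r + patch, c:c + patch]
                     for r in range(0, h - patch + 1, stride)
                     for c in range(0, w - patch + 1, stride)])

stacked = np.zeros((256, 256, 14), dtype=np.float32)     # e.g. stacked multispectral bands
print(extract_patches(stacked).shape)                    # (64, 32, 32, 14)
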
The Impact of Histogram Equalization and Color Mapping on ResNet-34's Overall Performance for COVID-19 Detection.
David Freire, J.; Rodrigo Montenegro, J.; Andres Mejia, H.; Paul Guzman, F.; Enrique Bustamante, C.; Xavier Velastegui, R.; and De Los Angeles Guachi, L.
In 2021 4th International Conference on Data Storage and Data Engineering, pages 45-51, February 2021. ACM.

@inproceedings{freire2021histogram,
  title = {The Impact of Histogram Equalization and Color Mapping on ResNet-34's Overall Performance for COVID-19 Detection},
  author = {David Freire, Jonathan and Rodrigo Montenegro, Jordan and Andres Mejia, Hector and Paul Guzman, Franz and Enrique Bustamante, Carlos and Xavier Velastegui, Ronny and De Los Angeles Guachi, Lorena},
  year = {2021},
  booktitle = {2021 4th International Conference on Data Storage and Data Engineering},
  pages = {45-51},
  month = {2},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {COVID-19, Image pre-processing, ResNet-34},
  url = {https://dl.acm.org/doi/10.1145/3456146.3456154},
  doi = {10.1145/3456146.3456154}
}

Abstract: The COVID-19 pandemic has had a "devastating" impact on public health and well-being around the world. Early diagnosis is a crucial step to begin treatment and prevent more infections. In this sense, early screening approaches have demonstrated that in chest radiology images, patients present abnormalities that distinguish COVID-19 cases. Recent studies based on Convolutional Neural Networks (CNNs), using radiology imaging techniques, have been proposed to assist in the accurate detection of COVID-19. Radiology images are characterized by the opacity produced by "ground glass", which might hide powerful information for feature analysis. Therefore, this work presents a methodology to assess the overall performance of ResNet-34, a deep CNN architecture, for COVID-19 detection when histogram equalization and color mapping pre-processing are applied to chest X-ray images. Besides, to enrich the available images related to COVID-19 studies, data augmentation techniques were also carried out. Experimental results reach the highest precision and sensitivity when applying global histogram equalization and pink color mapping. This study provides a point of view based on accuracy metrics to choose pre-processing techniques that can improve CNN performance for radiology image classification purposes.

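The pre-processing whose effect is evaluated in the paper (global histogram equalization followed by a color mapping) can be reproduced in a few lines of OpenCV. Note the assumptions: the input filename is hypothetical, and OpenCV's COLORMAP_PINK is used as a stand-in for the paper's pink mapping.

import cv2
import numpy as np

xray = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input file
if xray is None:
    # Fall back to a synthetic gradient so the snippet runs without the file.
    xray = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

equalized = cv2.equalizeHist(xray)                               # global histogram equalization
colored = cv2.applyColorMap(equalized, cv2.COLORMAP_PINK)        # color mapping
cv2.imwrite("chest_xray_preprocessed.png", colored)
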
A Chatterbot Based on Genetic Algorithm: Preliminary Results.
Orellana, C.; Tobar, M.; Yazán, J.; Peluffo-Ordóñez, D.; and Guachi-Guachi, L.
pages 3-12. 2021.

@inbook{orellana2021chatterbot,
  title = {A Chatterbot Based on Genetic Algorithm: Preliminary Results},
  author = {Orellana, Cristian and Tobar, Martín and Yazán, Jeremy and Peluffo-Ordóñez, D. and Guachi-Guachi, Lorena},
  year = {2021},
  pages = {3-12},
  url = {https://link.springer.com/10.1007/978-3-030-89654-6_1},
  doi = {10.1007/978-3-030-89654-6_1}
}

Abstract: Chatterbots are programs that simulate an intelligent conversation with people. They are commonly used in customer service, product suggestions, e-commerce, travel and vacations, queries, and complaints. Although some works have presented valuable studies using several technologies, including evolutionary computing, artificial intelligence, machine learning, and natural language processing, creating chatterbots with a low rate of grammatical errors and good user satisfaction is still a challenging task. Therefore, this work introduces a preliminary study for the development of a GA-based chatterbot that generates intelligent dialogues with a low rate of grammatical errors and a strong sense of responsiveness, thus boosting the personal satisfaction of individuals who interact with it. Preliminary results show that the proposed GA-based chatterbot yields 69% of "Good" responses for typical conversations regarding orders and receipts in a cafeteria.

\n \n\n \n \n \n \n \n \n Comparison of Current Deep Convolutional Neural Networks for the Segmentation of Breast Masses in Mammograms.\n \n \n \n \n\n\n \n Anaya-Isaza, A.; Mera-Jimenez, L.; Cabrera-Chavarro, J., M.; Guachi-Guachi, L.; Peluffo-Ordonez, D.; and Rios-Patino, J., I.\n\n\n \n\n\n\n IEEE Access, 9: 152206-152225. 2021.\n \n\n\n\n
\n\n\n\n \n \n \"ComparisonPaper\n  \n \n \n \"ComparisonWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Comparison of Current Deep Convolutional Neural Networks for the Segmentation of Breast Masses in Mammograms},\n type = {article},\n year = {2021},\n keywords = {Artificial intelligence,biomedical imaging,cancer,image segmentation,machine learning,mammography,medical diagnostic imaging},\n pages = {152206-152225},\n volume = {9},\n websites = {https://ieeexplore.ieee.org/document/9614200/},\n id = {6bbac060-f47f-37ff-bbaa-25a806b2adff},\n created = {2022-01-25T23:49:34.938Z},\n file_attached = {true},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2022-07-04T15:23:36.238Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n folder_uuids = {45c7d38c-9f85-4de8-9c17-14a6e3e3779a},\n private_publication = {false},\n abstract = {Breast cancer causes approximately 684,996 deaths worldwide, making it the leading cause of female cancer mortality. However, these figures can be reduced with early diagnosis through mammographic imaging, allowing for the timely and effective treatment of this disease. To establish the best tools for contributing to the automatic diagnosis of breast cancer, different deep learning (DL) architectures were compared in terms of breast lesion segmentation, lesion type classification, and degree of suspicion of malignancy tests. The tasks were completed with state-of-the-art architectures and backbones. Initially, during segmentation, the base UNet, Visual Geometry Group 19 (VGG19), InceptionResNetV2, EfficientNet, MobileNetv2, ResNet, ResNeXt, MultiResUNet, linkNet-VGG19, DenseNet, SEResNet and SeResNeXt architectures were compared, where 'Res' denotes a residual network. In addition, training was performed with 5 of the most advanced loss functions and validated by the Dice coefficient, sensitivity, and specificity. The proposed models achieved Dice values above 90%, with the EfficientNet architecture achieving 94.75% and 99% accuracy on the two tasks. Subsequently, classification was addressed with the ResNet50V2, VGG19, InceptionResNetV2, DenseNet121, InceptionV3, Xception and EfficientNetB7 networks. The proposed models achieved 96.97% and 97.73% accuracy through the VGG19 and ResNet50V2 networks on the lesion classification and degree of suspicion tasks, respectively. All three tasks were addressed with open-access databases, including the Digital Database for Screening Mammography (DDSM), the Mammographic Image Analysis Society (MIAS) database, and INbreast.},\n bibtype = {article},\n author = {Anaya-Isaza, Andres and Mera-Jimenez, Leonel and Cabrera-Chavarro, Johan Manuel and Guachi-Guachi, Lorena and Peluffo-Ordonez, Diego and Rios-Patino, Jorge Ivan},\n doi = {10.1109/ACCESS.2021.3127862},\n journal = {IEEE Access}\n}
\n
\n\n\n
\n Breast cancer causes approximately 684,996 deaths worldwide, making it the leading cause of female cancer mortality. However, these figures can be reduced with early diagnosis through mammographic imaging, allowing for the timely and effective treatment of this disease. To establish the best tools for contributing to the automatic diagnosis of breast cancer, different deep learning (DL) architectures were compared in terms of breast lesion segmentation, lesion type classification, and degree of suspicion of malignancy tests. The tasks were completed with state-of-the-art architectures and backbones. Initially, during segmentation, the base UNet, Visual Geometry Group 19 (VGG19), InceptionResNetV2, EfficientNet, MobileNetv2, ResNet, ResNeXt, MultiResUNet, linkNet-VGG19, DenseNet, SEResNet and SeResNeXt architectures were compared, where 'Res' denotes a residual network. In addition, training was performed with 5 of the most advanced loss functions and validated by the Dice coefficient, sensitivity, and specificity. The proposed models achieved Dice values above 90%, with the EfficientNet architecture achieving 94.75% and 99% accuracy on the two tasks. Subsequently, classification was addressed with the ResNet50V2, VGG19, InceptionResNetV2, DenseNet121, InceptionV3, Xception and EfficientNetB7 networks. The proposed models achieved 96.97% and 97.73% accuracy through the VGG19 and ResNet50V2 networks on the lesion classification and degree of suspicion tasks, respectively. All three tasks were addressed with open-access databases, including the Digital Database for Screening Mammography (DDSM), the Mammographic Image Analysis Society (MIAS) database, and INbreast.\n
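Since the segmentation comparison above is validated with the Dice coefficient, a short reference implementation of that metric (a sketch independent of any of the architectures listed, with toy masks) may be useful:

import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    # Dice = 2*|A ∩ B| / (|A| + |B|) for binary segmentation masks.
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example: two 4x4 masks that mostly agree.
a = np.array([[0,0,1,1],[0,1,1,1],[0,1,1,0],[0,0,0,0]])
b = np.array([[0,0,1,1],[0,1,1,0],[0,1,1,0],[0,0,0,0]])
print(round(float(dice_coefficient(a, b)), 3))   # -> 0.923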
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2020\n \n \n (7)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Biomechanics of Soft Tissues: The Role of the Mathematical Model on Material Behavior.\n \n \n \n \n\n\n \n Bustamante-Orellana, C.; Guachi, R.; Guachi-Guachi, L.; Novelli, S.; Campana, F.; Bini, F.; and Marinozzi, F.\n\n\n \n\n\n\n Advances in Intelligent Systems and Computing, pages 301-311. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"AdvancesWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n keywords = {FEA,Hyperelastic mathematical models,Soft tissues behavior},\n pages = {301-311},\n websites = {http://link.springer.com/10.1007/978-3-030-32022-5_29},\n id = {2ed4b5a8-4fe4-3b9c-97bf-62dca74f8347},\n created = {2020-12-30T02:11:39.964Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:39.964Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Bustamante-Orellana2020},\n source_type = {incollection},\n private_publication = {false},\n abstract = {Mechanical properties of the soft tissues and an accurate mathematical model are important to reproduce the soft tissue's material behavior (mechanical behavior) in a virtual simulation. This type of simulations by Finite Element Analysis (FEA) is required to analyze injury mechanisms, vehicle accidents, airplane ejections, blast-related events, surgical procedures simulation and to develop and test surgical implants where is mandatory take into account the high strain-rate. This work aims to highlight the role of the hyperelastic models, which can be used to simulate the highly nonlinear mechanical behavior of soft tissues. After a description of a set of formulations that can be defined as phenomenological models, a comparison between two models is discussed according to case study that represents a process of tissues clamping.},\n bibtype = {inbook},\n author = {Bustamante-Orellana, Carlos and Guachi, Robinson and Guachi-Guachi, Lorena and Novelli, Simone and Campana, Francesca and Bini, Fabiano and Marinozzi, Franco},\n doi = {10.1007/978-3-030-32022-5_29},\n chapter = {Biomechanics of Soft Tissues: The Role of the Mathematical Model on Material Behavior},\n title = {Advances in Intelligent Systems and Computing}\n}
\n
\n\n\n
\n The mechanical properties of soft tissues and an accurate mathematical model are important to reproduce the soft tissue's material (mechanical) behavior in a virtual simulation. This type of simulation by Finite Element Analysis (FEA) is required to analyze injury mechanisms, vehicle accidents, airplane ejections, blast-related events, and surgical procedures, and to develop and test surgical implants, where it is mandatory to take the high strain rate into account. This work aims to highlight the role of hyperelastic models, which can be used to simulate the highly nonlinear mechanical behavior of soft tissues. After a description of a set of formulations that can be defined as phenomenological models, a comparison between two models is discussed based on a case study that represents a tissue-clamping process.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Finite element analysis in colorectal surgery: non-linear effects induced by material model and geometry.\n \n \n \n \n\n\n \n Guachi, R.; Bini, F.; Bici, M.; Campana, F.; Marinozzi, F.; and Guachi, L.\n\n\n \n\n\n\n Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 8(2): 219-230. 3 2020.\n \n\n\n\n
\n\n\n\n \n \n \"FiniteWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 4 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Finite element analysis in colorectal surgery: non-linear effects induced by material model and geometry},\n type = {article},\n year = {2020},\n keywords = {Finite element analysis,computer assisted surgical planning,segmentation of medical images,soft tissues simulation,surface modelling},\n pages = {219-230},\n volume = {8},\n websites = {https://www.tandfonline.com/doi/full/10.1080/21681163.2019.1679669},\n month = {3},\n id = {e996d322-d03f-3b1e-855d-7fe372ef519d},\n created = {2020-12-30T02:11:39.992Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2021-01-22T23:20:38.265Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Guachi2020},\n source_type = {article},\n private_publication = {false},\n abstract = {The use of continuum mechanics, especially Finite Element Analysis (FEA) has gained an extensive application in the medical field, in order to simulate soft tissues. In particular, colorectal simulations can be used to understand the interaction between colon and the surrounding tissues, and also, between colon and surgical instruments. Although several works have been introduced considering small displacements, FEA applied to colorectal surgical scenarios with large displacements is still a challenge. This work aims to investigate how FEA can describe non-linear effects induced by material properties and different approximating geometries for colon. More in detail, it shows a comparison between simulations that are performed using well-known hyperelastic models (principally Mooney-Rivlin and, in one case, Yeoh) and the linear one. These different mechanical behaviours are applied on different geometrical models (planar, cylindrical and a 3D-shape from digital acquisitions) with the aim of evaluating also the effects of geometric non-linearity. Increasing the displacements imposed by the surgical instruments, the adoption of a hyperelastic model shows lower stresses than the linear elastic one that seems to overestimate the averaged stress. Moreover, the details of the geometrical models affect the results in terms of stress-strain distribution, since it provides a better localisation of the effects related to the hypothesis of large strains.},\n bibtype = {article},\n author = {Guachi, Robinson and Bini, Fabiano and Bici, Michele and Campana, Francesca and Marinozzi, Franco and Guachi, Lorena},\n doi = {10.1080/21681163.2019.1679669},\n journal = {Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization},\n number = {2}\n}
\n
\n\n\n
\n The use of continuum mechanics, especially Finite Element Analysis (FEA), has gained extensive application in the medical field in order to simulate soft tissues. In particular, colorectal simulations can be used to understand the interaction between the colon and the surrounding tissues, as well as between the colon and surgical instruments. Although several works have been introduced considering small displacements, FEA applied to colorectal surgical scenarios with large displacements is still a challenge. This work aims to investigate how FEA can describe non-linear effects induced by material properties and by different approximating geometries for the colon. In more detail, it presents a comparison between simulations performed using well-known hyperelastic models (principally Mooney-Rivlin and, in one case, Yeoh) and the linear one. These different mechanical behaviours are applied to different geometrical models (planar, cylindrical and a 3D shape from digital acquisitions) with the aim of also evaluating the effects of geometric non-linearity. Increasing the displacements imposed by the surgical instruments, the adoption of a hyperelastic model shows lower stresses than the linear elastic one, which seems to overestimate the averaged stress. Moreover, the details of the geometrical models affect the results in terms of stress-strain distribution, since they provide a better localisation of the effects related to the hypothesis of large strains.\n
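For reference, the incompressible two-parameter Mooney-Rivlin and third-order Yeoh strain-energy functions mentioned above take the standard forms below, where \bar{I}_1 and \bar{I}_2 are the first and second deviatoric strain invariants; the material constants C_{10}, C_{01} and C_{i0} must be fitted to the tissue and are not reported here:

W_{\mathrm{MR}} = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3), \qquad W_{\mathrm{Yeoh}} = \sum_{i=1}^{3} C_{i0}\,(\bar{I}_1 - 3)^{i}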
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Pre- and Post-processing on Generative Adversarial Networks for Old Photos Restoration: A Case Study.\n \n \n \n \n\n\n \n Paspuel, R.; Barba, M.; Jami, B.; and Guachi-Guachi, L.\n\n\n \n\n\n\n pages 194-201. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Website\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n pages = {194-201},\n websites = {http://link.springer.com/10.1007/978-3-030-62365-4_19},\n id = {a62372f2-7ac2-36e1-b535-35fa712d3e25},\n created = {2020-12-30T02:11:40.467Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.467Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Paspuel2020},\n source_type = {incollection},\n private_publication = {false},\n bibtype = {inbook},\n author = {Paspuel, Robinson and Barba, Marcelo and Jami, Bryan and Guachi-Guachi, Lorena},\n doi = {10.1007/978-3-030-62365-4_19},\n chapter = {Pre- and Post-processing on Generative Adversarial Networks for Old Photos Restoration: A Case Study}\n}
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Pre-processing and Handling Unbalanced Data in CNN for Improving Automated Detection of COVID-19 Cases: Preliminary Results.\n \n \n \n \n\n\n \n Mejia, H.; Guzman, F.; Bustamante-Orellana, C.; and Guachi-Guachi, L.\n\n\n \n\n\n\n pages 129-139. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Website\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n pages = {129-139},\n websites = {http://link.springer.com/10.1007/978-3-030-62833-8_11},\n id = {ae609ca2-ce1e-33b7-89f7-b7c3d77d1fda},\n created = {2020-12-30T02:11:40.732Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.732Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Mejia2020},\n source_type = {incollection},\n private_publication = {false},\n bibtype = {inbook},\n author = {Mejia, Hector and Guzman, Franz and Bustamante-Orellana, Carlos and Guachi-Guachi, Lorena},\n doi = {10.1007/978-3-030-62833-8_11},\n chapter = {Pre-processing and Handling Unbalanced Data in CNN for Improving Automated Detection of COVID-19 Cases: Preliminary Results}\n}
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convolutional Neural Networks for Automatic Classification of Diseased Leaves: The Impact of Dataset Size and Fine-Tuning.\n \n \n \n \n\n\n \n Caluña, G.; Guachi-Guachi, L.; and Brito, R.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 951-966. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n keywords = {Convolutional Neural Networks (CNNs),Image classification,Leaf diseases classification},\n pages = {951-966},\n websites = {http://link.springer.com/10.1007/978-3-030-58799-4_68},\n id = {77b6d4b0-2a38-3b85-955c-c6cdd37a0c7f},\n created = {2020-12-30T02:11:40.735Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.735Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Caluna2020},\n source_type = {incollection},\n private_publication = {false},\n abstract = {For agricultural productivity, one of the major concerns is the early detection of diseases for their crops. Recently, some researchers have begun to explore Convolutional Neural Networks (CNNs) in agricultural field for leaves diseases identification. A CNN is a category of deep artificial neural networks that has demonstrated great success in computer vision applications, such as video and image analysis. However, their drawbacks are the demand of huge quantity of data with a wide range of conditions, as well as a carefully fine-tuning to work properly. This work explores and compares the most outstanding five CNNs architectures to determine their ability to correctly classify a leaf image as healthy and unhealthy. Experimental tests are performed referring to an unbalanced and small dataset composed by healthy and diseased leaves. In order to achieve a high accuracy on the explored CNN models, a fine-tuning of their hyperparameters is performed. Furthermore, some variations are done on the raw dataset to increase the quality and variety of the leaves images. Preliminary results provide a point-of-view for selecting CNNs architectures for leaves diseases identification based on accuracy, precision, recall and F1 metrics. The comparison demonstrates that without considerably lengthening the training, ZFNet achieves a high accuracy and increases it by 10% after 50 K iterations being a suitable CNN model for identification of diseased leaves using datasets with a small variation, number of classes and dataset sizes.},\n bibtype = {inbook},\n author = {Caluña, Giovanny and Guachi-Guachi, Lorena and Brito, Ramiro},\n doi = {10.1007/978-3-030-58799-4_68},\n chapter = {Convolutional Neural Networks for Automatic Classification of Diseased Leaves: The Impact of Dataset Size and Fine-Tuning},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n For agricultural productivity, one of the major concerns is the early detection of diseases in crops. Recently, some researchers have begun to explore Convolutional Neural Networks (CNNs) in the agricultural field for leaf disease identification. A CNN is a category of deep artificial neural networks that has demonstrated great success in computer vision applications, such as video and image analysis. However, their drawbacks are the demand for a huge quantity of data covering a wide range of conditions, as well as careful fine-tuning to work properly. This work explores and compares five of the most outstanding CNN architectures to determine their ability to correctly classify a leaf image as healthy or unhealthy. Experimental tests are performed on an unbalanced and small dataset composed of healthy and diseased leaves. In order to achieve high accuracy with the explored CNN models, fine-tuning of their hyperparameters is performed. Furthermore, some variations are applied to the raw dataset to increase the quality and variety of the leaf images. Preliminary results provide a point of view for selecting CNN architectures for leaf disease identification based on accuracy, precision, recall and F1 metrics. The comparison demonstrates that, without considerably lengthening the training, ZFNet achieves high accuracy and increases it by 10% after 50 K iterations, making it a suitable CNN model for the identification of diseased leaves using datasets with small variation, few classes and small dataset sizes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning Style Identification by CHAEA Junior Questionnaire and Artificial Neural Network Method: A Case Study.\n \n \n \n \n\n\n \n Torres-Molina, R.; Guachi-Guachi, L.; Guachi, R.; Stefania, P.; and Ortega-Zamorano, F.\n\n\n \n\n\n\n Advances in Intelligent Systems and Computing, pages 326-336. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"AdvancesWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2020},\n keywords = {Artificial Neural Network,Automatic recognition,Learning Style},\n pages = {326-336},\n websites = {http://link.springer.com/10.1007/978-3-030-32033-1_30},\n id = {182219c3-d2d7-3880-834c-caf10fc18d97},\n created = {2020-12-30T02:11:40.739Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.739Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Torres-Molina2020},\n source_type = {incollection},\n private_publication = {false},\n abstract = {By the lack of personalization in education, students obtain low performance in different subjects in school, particularly in mathematics. Therefore, learning style identification is a crucial tool to improve academic performance. Although traditional methods such questionnaires have been extensively used to the learning styles detection in youths and adults by its high precision, it produces boredom in children and does not allow to adjust learning automatically to student characteristics and preferences over time. In this paper, two methods for learning style recognition: CHAEA-Junior questionnaire (static method) and Artificial Neural Networks (automatic method) are explored. The data for the second technique used answers from the survey and the percentage scores from mathematical mini-games (Competitor, Dreamer, Logician, Strategist) based on Kolb's learning theory. To the validity between both methods, it was conducted a pilot study with primary level students in Ecuador. The experimental tests show that Artificial Neural Networks are a suitable alternative to accurate models for automatic learning recognition to provide personalized learning to Ecuadorian students, which achieved close detection results concerning CHAEA-Junior questionnaire results.},\n bibtype = {inbook},\n author = {Torres-Molina, Richard and Guachi-Guachi, Lorena and Guachi, Robinson and Stefania, Perri and Ortega-Zamorano, Francisco},\n doi = {10.1007/978-3-030-32033-1_30},\n chapter = {Learning Style Identification by CHAEA Junior Questionnaire and Artificial Neural Network Method: A Case Study},\n title = {Advances in Intelligent Systems and Computing}\n}
\n
\n\n\n
\n Due to the lack of personalization in education, students obtain low performance in different school subjects, particularly in mathematics. Therefore, learning style identification is a crucial tool to improve academic performance. Although traditional methods such as questionnaires have been extensively used to detect learning styles in youths and adults because of their high precision, they produce boredom in children and do not allow learning to be adjusted automatically to student characteristics and preferences over time. In this paper, two methods for learning style recognition are explored: the CHAEA-Junior questionnaire (static method) and Artificial Neural Networks (automatic method). The data for the second technique consisted of answers from the survey and the percentage scores from mathematical mini-games (Competitor, Dreamer, Logician, Strategist) based on Kolb's learning theory. To validate the agreement between both methods, a pilot study was conducted with primary-level students in Ecuador. The experimental tests show that Artificial Neural Networks are a suitable alternative for accurate automatic learning style recognition to provide personalized learning to Ecuadorian students, achieving detection results close to those of the CHAEA-Junior questionnaire.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Comparative study of distance measures for the fuzzy C-means and K-means non-supervised methods applied to image segmentation.\n \n \n \n\n\n \n Vélez-Falconí, M.; Marín, J.; Jiménez, S.; and Guachi-Guachi, L.\n\n\n \n\n\n\n In CEUR Workshop Proceedings, 2020. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{\n title = {Comparative study of distance measures for the fuzzy C-means and K-means non-supervised methods applied to image segmentation},\n type = {inproceedings},\n year = {2020},\n keywords = {Clustering,Image segmentation,Non-supervised algorithms},\n id = {a70ad28b-bae3-30a4-a44d-9fe0e835abac},\n created = {2022-01-25T23:49:35.121Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2022-01-25T23:49:35.121Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n private_publication = {false},\n abstract = {Recent studies have revealed that the performance of the FCM and K-means is completely related to the distance measures. However, the literature does not provide evidence that the distance used for data-clustering is useful for image segmentation. Therefore, a comparative study of the performance of different distance measures applied to image segmentation, using the mentioned clustering methods is proposed in this work. The selection of the distance measures was based on a literature study of their benefits. As a consequence, the selected distances to be tested are Euclidean, Manhattan, Canberra, and Spearman. Since our principal goal is to compare the effectiveness of the distance, the experiment had been evaluated according to two centroids selected by the user. According to primary results, the best-rated distance employed for image segmentation is the Canberra distance.},\n bibtype = {inproceedings},\n author = {Vélez-Falconí, Martín and Marín, Josué and Jiménez, Selena and Guachi-Guachi, Lorena},\n booktitle = {CEUR Workshop Proceedings}\n}
\n
\n\n\n
\n Recent studies have revealed that the performance of FCM and K-means is closely related to the distance measure used. However, the literature does not provide evidence that the distances used for data clustering are also useful for image segmentation. Therefore, a comparative study of the performance of different distance measures applied to image segmentation, using the mentioned clustering methods, is proposed in this work. The selection of the distance measures was based on a literature study of their benefits. As a consequence, the distances selected for testing are Euclidean, Manhattan, Canberra, and Spearman. Since our principal goal is to compare the effectiveness of the distances, the experiments were evaluated with two centroids selected by the user. According to the preliminary results, the best-rated distance for image segmentation is the Canberra distance.\n
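A minimal sketch of the kind of experiment described above (assigning pixel intensities to two user-selected centroids under interchangeable distance measures) is given below using SciPy's distance routines ('euclidean', 'cityblock' for Manhattan, and 'canberra'); a rank-based Spearman distance is not built into cdist and is omitted here. The synthetic image and centroid values are placeholders.

import numpy as np
from scipy.spatial.distance import cdist

def kmeans_segment(image, centroids, metric="canberra", iters=10):
    # K-means-style segmentation of a grayscale image: pixel intensities are
    # assigned to centroids using the chosen distance metric.
    pixels = image.reshape(-1, 1).astype(float)
    centroids = np.asarray(centroids, dtype=float).reshape(-1, 1)
    for _ in range(iters):
        d = cdist(pixels, centroids, metric=metric)   # distance pixel -> centroid
        labels = d.argmin(axis=1)
        for k in range(len(centroids)):
            if np.any(labels == k):                   # update centroid as mean of its pixels
                centroids[k] = pixels[labels == k].mean()
    return labels.reshape(image.shape)

# Example with a tiny synthetic image and two user-selected centroids (50 and 200).
img = np.array([[10, 20, 200], [30, 220, 240], [15, 25, 210]], dtype=float)
print(kmeans_segment(img, centroids=[50, 200], metric="canberra"))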
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2019\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Automatic Microstructural Classification with Convolutional Neural Network.\n \n \n \n \n\n\n \n Lorena, G.; Robinson, G.; Stefania, P.; Pasquale, C.; Fabiano, B.; and Franco, M.\n\n\n \n\n\n\n Advances in Intelligent Systems and Computing, pages 170-181. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"AdvancesWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2019},\n keywords = {CNN,Image processing,Microstructure characterization},\n pages = {170-181},\n websites = {http://link.springer.com/10.1007/978-3-030-02828-2_13},\n id = {c9432205-5ed1-3c7d-963b-03b5a43ef939},\n created = {2020-12-30T02:11:40.368Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.368Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Lorena2019},\n source_type = {incollection},\n private_publication = {false},\n abstract = {Microstructural characterization allows knowing the components of a microstructure in order to determine the influence on mechanical properties, such as the maximum load that a body can support before breaking out. In almost all real solutions, microstructures are characterized by human experts, and its automatic identification is still a challenge. In fact, a microstructure typically is a combination of different constituents, also called phases, which produce complex substructures that store information related to origin and formation mode of a material defining all its physical and chemical properties. Convolutional neural networks (CNNs) are a category of deep artificial neural networks that show great success in computer vision applications, such as image and video recognition. In this work we explore and compare four outstanding CNNs architectures with increasing depth to analyze their capability of classifying correctly microstructural images into seven classes. Experiments are done referring to ultrahigh carbon steel microstructural images. As the main result, this paper provides a point-of-view to choose CNN architectures for microstructural image identification considering accuracy, training time, and the number of multiply and accumulate operations performed by convolutional layers. The comparison demonstrates that the addition of two convolutional layers in the LeNet network leads to a higher accuracy without considerably lengthening the training.},\n bibtype = {inbook},\n author = {Lorena, Guachi and Robinson, Guachi and Stefania, Perri and Pasquale, Corsonello and Fabiano, Bini and Franco, Marinozzi},\n doi = {10.1007/978-3-030-02828-2_13},\n chapter = {Automatic Microstructural Classification with Convolutional Neural Network},\n title = {Advances in Intelligent Systems and Computing}\n}
\n
\n\n\n
\n Microstructural characterization allows the components of a microstructure to be identified in order to determine their influence on mechanical properties, such as the maximum load that a body can support before breaking. In almost all real settings, microstructures are characterized by human experts, and their automatic identification is still a challenge. In fact, a microstructure is typically a combination of different constituents, also called phases, which produce complex substructures that store information related to the origin and formation mode of a material, defining all its physical and chemical properties. Convolutional neural networks (CNNs) are a category of deep artificial neural networks that show great success in computer vision applications, such as image and video recognition. In this work we explore and compare four outstanding CNN architectures of increasing depth to analyze their capability of correctly classifying microstructural images into seven classes. Experiments are carried out on ultrahigh carbon steel microstructural images. As the main result, this paper provides a point of view for choosing CNN architectures for microstructural image identification considering accuracy, training time, and the number of multiply-accumulate operations performed by the convolutional layers. The comparison demonstrates that the addition of two convolutional layers to the LeNet network leads to higher accuracy without considerably lengthening the training.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Brain Tumor Classification Using Principal Component Analysis and Kernel Support Vector Machine.\n \n \n \n \n\n\n \n Torres-Molina, R.; Bustamante-Orellana, C.; Riofrío-Valdivieso, A.; Quinga-Socasi, F.; Guachi, R.; and Guachi-Guachi, L.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 89-96. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2019},\n keywords = {Brain tumor classification,Image processing,KSVM},\n pages = {89-96},\n websites = {http://link.springer.com/10.1007/978-3-030-33617-2_10},\n id = {a793b3e7-6dbc-37a9-913f-1056004fcb02},\n created = {2020-12-30T02:11:40.381Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.381Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Torres-Molina2019a},\n source_type = {incollection},\n private_publication = {false},\n abstract = {Early diagnosis improves cancer outcomes by giving care at the most initial possible stage and is, therefore, an important health strategy in all settings. Gliomas, meningiomas, and pituitary tumors are among the most common brain tumors in adults. This paper classifies these three types of brain tumors from patients; using a Kernel Support Vector Machine (KSVM) classifier. The images are pre-processed, and its dimensionality is reduced before entering the classifier, and the difference in accuracy produced by using or not pre-processing techniques is compared, as well as, the use of three different kernels, namely linear, quadratic, and Gaussian Radial Basis (GRB) for the classifier. The experimental results showed that the proposed approach with pre-processed MRI images by using GRB kernel achieves better performance than quadratic and linear kernels in terms of accuracy, precision, and specificity.},\n bibtype = {inbook},\n author = {Torres-Molina, Richard and Bustamante-Orellana, Carlos and Riofrío-Valdivieso, Andrés and Quinga-Socasi, Francisco and Guachi, Robinson and Guachi-Guachi, Lorena},\n doi = {10.1007/978-3-030-33617-2_10},\n chapter = {Brain Tumor Classification Using Principal Component Analysis and Kernel Support Vector Machine},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n Early diagnosis improves cancer outcomes by providing care at the earliest possible stage and is, therefore, an important health strategy in all settings. Gliomas, meningiomas, and pituitary tumors are among the most common brain tumors in adults. This paper classifies these three types of brain tumors from patient images using a Kernel Support Vector Machine (KSVM) classifier. The images are pre-processed and their dimensionality is reduced before entering the classifier, and the difference in accuracy produced by using or not using pre-processing techniques is compared, as well as the use of three different kernels for the classifier, namely linear, quadratic, and Gaussian Radial Basis (GRB). The experimental results showed that the proposed approach with pre-processed MRI images and the GRB kernel achieves better performance than the quadratic and linear kernels in terms of accuracy, precision, and specificity.\n
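The pipeline sketched below (dimensionality reduction with PCA followed by a kernel SVM with the Gaussian RBF kernel) mirrors the general approach described in this abstract, using scikit-learn; the random data, component count and train/test split are stand-ins, not the study's settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X would hold flattened, pre-processed MRI slices and y the tumor class
# (glioma / meningioma / pituitary); random data is used here as a stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4096))      # 120 images of 64x64 pixels, flattened
y = rng.integers(0, 3, size=120)

# PCA reduces dimensionality before the kernel SVM; the RBF ("Gaussian Radial
# Basis") kernel corresponds to the best-performing configuration reported.
model = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf", gamma="scale"))
model.fit(X[:100], y[:100])
print(model.score(X[100:], y[100:]))  # accuracy on a held-out split (meaningless for random data)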
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Artificial Neural Networks in Mathematical Mini-Games for Automatic Students' Learning Styles Identification: A First Approach.\n \n \n \n \n\n\n \n Torres-Molina, R.; Banda-Almeida, J.; and Guachi-Guachi, L.\n\n\n \n\n\n\n Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 53-60. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"LectureWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inbook{\n type = {inbook},\n year = {2019},\n keywords = {Artificial Neural Networks,Learning Styles,Video games},\n pages = {53-60},\n websites = {http://link.springer.com/10.1007/978-3-030-33617-2_6},\n id = {e2e0cd90-e498-3651-a7c6-c0f5703d1bda},\n created = {2020-12-30T02:11:40.446Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.446Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Torres-Molina2019},\n source_type = {incollection},\n private_publication = {false},\n abstract = {The lack of customized education results in low performance in different subjects as mathematics. Recognizing and knowing student learning styles will enable educators to create an appropriate learning environment. Questionnaires are traditional methods to identify the learning styles of the students. Nevertheless, they exhibit several limitations such as misunderstanding of the questions and boredom in children. Thus, this work proposes a first automatic approach to detect the learning styles (Activist, Reflector, Theorist, Pragmatist) based on Honey and Mumford theory through the use of Artificial Neural Networks in mathematical Mini-Games. Metrics from the mathematical Mini-Games as score and time were used as input data to then train the Artificial Neural Networks to predict the percentages of learning styles. The data gathered in this work was from a pilot study of Ecuadorian students with ages between 9 and 10 years old. The preliminary results show that the average overall difference between the two techniques (Artificial Neural Networks and CHAEA-Junior) is 4.13%. Finally, we conclude that video games can be fun and suitable tools for an accurate prediction of learning styles.},\n bibtype = {inbook},\n author = {Torres-Molina, Richard and Banda-Almeida, Jorge and Guachi-Guachi, Lorena},\n doi = {10.1007/978-3-030-33617-2_6},\n chapter = {Artificial Neural Networks in Mathematical Mini-Games for Automatic Students' Learning Styles Identification: A First Approach},\n title = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}\n}
\n
\n\n\n
\n The lack of customized education results in low performance in different subjects such as mathematics. Recognizing and knowing student learning styles will enable educators to create an appropriate learning environment. Questionnaires are the traditional method to identify the learning styles of students. Nevertheless, they exhibit several limitations, such as misunderstanding of the questions and boredom in children. Thus, this work proposes a first automatic approach to detect learning styles (Activist, Reflector, Theorist, Pragmatist), based on the Honey and Mumford theory, through the use of Artificial Neural Networks in mathematical Mini-Games. Metrics from the mathematical Mini-Games, such as score and time, were used as input data to train the Artificial Neural Networks to predict the percentages of the learning styles. The data gathered in this work came from a pilot study of Ecuadorian students between 9 and 10 years old. The preliminary results show that the average overall difference between the two techniques (Artificial Neural Networks and CHAEA-Junior) is 4.13%. Finally, we conclude that video games can be fun and suitable tools for an accurate prediction of learning styles.\n
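A minimal sketch of the automatic part, a small neural network mapping mini-game metrics (score and time per game) to the four learning-style percentages, could be built with scikit-learn's MLPRegressor as below; the synthetic data, network size and feature layout are assumptions for illustration only.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row of X: [score, time] from four mini-games (8 features); each row of Y:
# percentages for the four Honey & Mumford styles. Values are synthetic placeholders.
rng = np.random.default_rng(1)
X = rng.uniform(0, 100, size=(40, 8))
Y = rng.dirichlet(np.ones(4), size=40) * 100       # each row sums to 100%

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
mlp.fit(X, Y)
pred = mlp.predict(X[:1])
print(np.round(pred, 1))   # predicted [Activist, Reflector, Theorist, Pragmatist] percentages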
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multimodal background subtraction for high-performance embedded systems.\n \n \n \n \n\n\n \n Cocorullo, G.; Corsonello, P.; Frustaci, F.; Guachi-Guachi, L.; and Perri, S.\n\n\n \n\n\n\n Journal of Real-Time Image Processing, 16(5): 1407-1423. 10 2019.\n \n\n\n\n
\n\n\n\n \n \n \"MultimodalWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Multimodal background subtraction for high-performance embedded systems},\n type = {article},\n year = {2019},\n keywords = {Background subtraction,Image processing,Video systems},\n pages = {1407-1423},\n volume = {16},\n websites = {http://link.springer.com/10.1007/s11554-016-0651-6},\n month = {10},\n day = {8},\n id = {d0bc0013-6af5-37f7-b151-30836c4179e4},\n created = {2020-12-30T02:11:40.793Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:29:03.338Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Cocorullo2019},\n source_type = {article},\n private_publication = {false},\n abstract = {In many computer vision systems, background subtraction algorithms have a crucial importance to extract information about moving objects. Although color features have been extensively used in several background subtraction algorithms, demonstrating high efficiency and performances, in actual applications the background subtraction accuracy is still a challenge due to the dynamic, diverse and complex background types. In this paper, a novel method for the background subtraction is proposed to achieve low computational cost and high accuracy in real-time applications. The proposed approach computes the background model using a limited number of historical frames, thus resulting suitable for a real-time embedded implementation. To compute the background model as proposed here, pixels grayscale information and color invariant H are jointly exploited. Differently from state-of-the-art competitors, the background model is updated by analyzing the percentage changes of current pixels with respect to corresponding pixels within the modeled background and historical frames. The comparison with several traditional and real-time state-of-the-art background subtraction algorithms demonstrates that the proposed approach is able to manage several challenges, such as the presence of dynamic background and the absence of frames free from foreground objects, without undermining the accuracy achieved. Different hardware designs have been implemented, for several images resolutions, within an Avnet ZedBoard containing an xc7z020 Zynq FPGA device. Post-place and route characterization results demonstrate that the proposed approach is suitable for the integration in low-cost high-definition embedded video systems and smart cameras. In fact, the presented system uses 32 MB of external memory, 6 internal Block RAM, less than 16,000 Slices FFs, a little more than 20,000 Slices LUTs and it processes Full HD RGB video sequences with a frame rate of about 74 fps.},\n bibtype = {article},\n author = {Cocorullo, Giuseppe and Corsonello, Pasquale and Frustaci, Fabio and Guachi-Guachi, Lorena-de-los-Angeles and Perri, Stefania},\n doi = {10.1007/s11554-016-0651-6},\n journal = {Journal of Real-Time Image Processing},\n number = {5}\n}
\n
\n\n\n
\n In many computer vision systems, background subtraction algorithms are of crucial importance for extracting information about moving objects. Although color features have been extensively used in several background subtraction algorithms, demonstrating high efficiency and performance, in real applications background subtraction accuracy is still a challenge due to dynamic, diverse and complex background types. In this paper, a novel method for background subtraction is proposed to achieve low computational cost and high accuracy in real-time applications. The proposed approach computes the background model using a limited number of historical frames, thus making it suitable for a real-time embedded implementation. To compute the background model as proposed here, pixel grayscale information and the color invariant H are jointly exploited. Differently from state-of-the-art competitors, the background model is updated by analyzing the percentage changes of current pixels with respect to the corresponding pixels within the modeled background and the historical frames. The comparison with several traditional and real-time state-of-the-art background subtraction algorithms demonstrates that the proposed approach is able to manage several challenges, such as the presence of dynamic background and the absence of frames free from foreground objects, without undermining the accuracy achieved. Different hardware designs have been implemented, for several image resolutions, on an Avnet ZedBoard containing an xc7z020 Zynq FPGA device. Post-place-and-route characterization results demonstrate that the proposed approach is suitable for integration in low-cost high-definition embedded video systems and smart cameras. In fact, the presented system uses 32 MB of external memory, 6 internal Block RAMs, fewer than 16,000 Slice FFs and a little more than 20,000 Slice LUTs, and it processes Full HD RGB video sequences at a frame rate of about 74 fps.\n
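A heavily simplified, grayscale-only sketch of a percentage-change background model over a limited set of historical frames is shown below; the actual method additionally combines the color invariant H, and the history length, threshold and median-based model used here are illustrative assumptions, not the paper's design.

import numpy as np
from collections import deque

class PercentChangeBackground:
    # Simplified model: a pixel is foreground when its relative change with
    # respect to the modeled background exceeds a threshold.
    def __init__(self, history=5, threshold=0.15):
        self.frames = deque(maxlen=history)    # limited number of historical frames
        self.threshold = threshold

    def apply(self, gray_frame):
        gray = gray_frame.astype(np.float32) + 1.0     # avoid division by zero
        if not self.frames:
            self.frames.append(gray)
            return np.zeros(gray.shape, dtype=np.uint8)
        background = np.median(np.stack(self.frames), axis=0)
        change = np.abs(gray - background) / background   # percentage change per pixel
        mask = (change > self.threshold).astype(np.uint8) * 255
        # Only pixels judged as background refresh the model.
        self.frames.append(np.where(mask == 0, gray, background))
        return mask

# Usage: feed consecutive grayscale frames (e.g., from cv2.VideoCapture) to apply().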
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Colorectal Segmentation with Convolutional Neural Network.\n \n \n \n \n\n\n \n Guachi, L.; Guachi, R.; Bini, F.; and Marinozzi, F.\n\n\n \n\n\n\n Computer-Aided Design and Applications, 16(5): 836-845. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"AutomaticWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Automatic Colorectal Segmentation with Convolutional Neural Network},\n type = {article},\n year = {2019},\n keywords = {Colon Segmentation,Convolutional neural network,Tissues segmentation},\n pages = {836-845},\n volume = {16},\n websites = {http://cad-journal.net/files/vol_16/Vol16No5.html},\n id = {2231dcb7-9c12-3367-a9a4-71cf8ebcd9bd},\n created = {2020-12-30T02:11:40.795Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.795Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Guachi2019},\n source_type = {article},\n private_publication = {false},\n abstract = {This paper presents a new method for colon tissues segmentation on Computed Tomography images which takes advantages of using deep and hierarchical learning about colon features through Convolutional Neural Networks (CNN). The proposed method works robustly reducing misclassified colon tissues pixels that are introduced by the presence of noise, artifacts, unclear edges, and other organs or different areas characterized by the same intensity value as the colon. Patch analysis is exploited for allowing the classification of each center pixel as colon tissue or background pixel. Experimental results demonstrate the proposed method achieves a higher effectiveness in terms of sensitivity and specificity with respect to three state-of the art methods.},\n bibtype = {article},\n author = {Guachi, Lorena and Guachi, Robinson and Bini, Fabiano and Marinozzi, Franco},\n doi = {10.14733/cadaps.2019.836-845},\n journal = {Computer-Aided Design and Applications},\n number = {5}\n}
\n
\n\n\n
\n This paper presents a new method for colon tissue segmentation in Computed Tomography images which takes advantage of deep and hierarchical learning of colon features through Convolutional Neural Networks (CNNs). The proposed method works robustly, reducing the misclassified colon tissue pixels that are introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis is exploited to allow the classification of each center pixel as a colon tissue or background pixel. Experimental results demonstrate that the proposed method achieves higher effectiveness in terms of sensitivity and specificity with respect to three state-of-the-art methods.\n
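The patch-based, center-pixel classification scheme described above can be sketched as follows; the patch size, stride and the dummy thresholding classifier are placeholders, and in the actual method the classifier would be the trained CNN.

import numpy as np

def extract_patches(ct_slice, patch_size=32, stride=8):
    # Slide a window over a CT slice and yield (patch, center coordinates);
    # each patch is later classified and its label assigned to the center pixel.
    half = patch_size // 2
    h, w = ct_slice.shape
    for r in range(half, h - half, stride):
        for c in range(half, w - half, stride):
            yield ct_slice[r - half:r + half, c - half:c + half], (r, c)

def segment(ct_slice, classify_patch, patch_size=32, stride=8):
    # classify_patch is any model mapping a patch to 0 (background) or 1 (colon tissue).
    mask = np.zeros(ct_slice.shape, dtype=np.uint8)
    for patch, (r, c) in extract_patches(ct_slice, patch_size, stride):
        mask[r, c] = classify_patch(patch)
    return mask

# Example with a dummy classifier that thresholds mean patch intensity.
dummy = lambda p: int(p.mean() > 100)
print(segment(np.random.randint(0, 255, (128, 128)), dummy).sum())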
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2018\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Geometrical Modelling Effects on FEA of Colorectal Surgery.\n \n \n \n \n\n\n \n Guachi, R.; Bici, M.; Guachi, L.; Campana, F.; Bini, F.; and Marinozzi, F.\n\n\n \n\n\n\n Computer-Aided Design and Applications, 16(4): 778-788. 11 2018.\n \n\n\n\n
\n\n\n\n \n \n \"GeometricalWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Geometrical Modelling Effects on FEA of Colorectal Surgery},\n type = {article},\n year = {2018},\n keywords = {Computer assisted surgical planning,Finite element analysis,Segmentation,Soft tissues simulations},\n pages = {778-788},\n volume = {16},\n websites = {http://cad-journal.net/files/vol_16/Vol16No4.html},\n month = {11},\n id = {19ebfcad-143c-3b6e-8864-0f3068969651},\n created = {2020-12-30T02:11:40.574Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.574Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Guachi2018},\n source_type = {article},\n private_publication = {false},\n abstract = {The research reported in this paper applies an explicit non-linear FEA solver to simulate the interaction between a clamp and a hyper-elastic material that aims to mimic the biological tissue of the colon. More in detail, the paper provides new results as a continuation of a previous works aimed at the evaluation of this solver to manage contact and dynamic loading on complex, multiple shapes. Results concern with the evaluation of the contact force during clamping, thus to the assessment of the force-feedback. The analysis is carried out on two geometries, using the hyper-elastic Mooney-Rivlin model for the mechanical behavior of the soft tissues. A pressure is applied on the colon to simulate the surgical clamp, which goes progressively in contact with tissue surface. To assess FEA criticality, and, then, its feasibility, the stress-strain and the contact force are analysed according to geometrical model and thickness variation, leaving the pressure constant. Doing so, their effect on the force-feedback can be foreseen, understanding their role on the accuracy of the final result.},\n bibtype = {article},\n author = {Guachi, Robinson and Bici, Michele and Guachi, Lorena and Campana, Francesca and Bini, Fabiano and Marinozzi, Franco},\n doi = {10.14733/cadaps.2019.778-788},\n journal = {Computer-Aided Design and Applications},\n number = {4}\n}
\n
\n\n\n
\n The research reported in this paper applies an explicit non-linear FEA solver to simulate the interaction between a clamp and a hyper-elastic material that aims to mimic the biological tissue of the colon. In more detail, the paper provides new results as a continuation of previous works aimed at evaluating this solver's ability to manage contact and dynamic loading on complex, multiple shapes. The results concern the evaluation of the contact force during clamping, and thus the assessment of the force feedback. The analysis is carried out on two geometries, using the hyper-elastic Mooney-Rivlin model for the mechanical behavior of the soft tissues. A pressure is applied to the colon to simulate the surgical clamp, which progressively comes into contact with the tissue surface. To assess FEA criticalities, and hence its feasibility, the stress-strain behavior and the contact force are analysed according to the geometrical model and thickness variation, keeping the pressure constant. In doing so, their effect on the force feedback can be foreseen, clarifying their role in the accuracy of the final result.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Colorectal Segmentation with Convolutional Neural Network.\n \n \n \n \n\n\n \n Guachi, L.; Guachi, R.; Bini, F.; and Marinozzi, F.\n\n\n \n\n\n\n In Proceedings of CAD'18, pages 312-316, 7 2018. CAD Solutions LLC\n \n\n\n\n
\n\n\n\n \n \n \"AutomaticWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{\n title = {Automatic Colorectal Segmentation with Convolutional Neural Network},\n type = {inproceedings},\n year = {2018},\n pages = {312-316},\n websites = {http://www.cad-conference.net/files/CAD18/CAD18-paris.html},\n month = {7},\n publisher = {CAD Solutions LLC},\n id = {2ea3c2bb-9f96-30a1-a42e-57d68114c985},\n created = {2020-12-30T02:11:40.706Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.706Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Guachi2018a},\n source_type = {inproceedings},\n private_publication = {false},\n bibtype = {inproceedings},\n author = {Guachi, Lorena and Guachi, Robinson and Bini, Fabiano and Marinozzi, Franco},\n doi = {10.14733/cadconfP.2018.312-316},\n booktitle = {Proceedings of CAD'18}\n}
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Geometrical Modelling Effects on FEA of Colorectal Surgery.\n \n \n \n \n\n\n \n Guachi, R.; Bici, M.; Guachi, L.; Campana, F.; Bini, F.; and Marinozzi, F.\n\n\n \n\n\n\n In Proceedings of CAD'18, pages 307-311, 7 2018. CAD Solutions LLC\n \n\n\n\n
\n\n\n\n \n \n \"GeometricalWebsite\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{\n title = {Geometrical Modelling Effects on FEA of Colorectal Surgery},\n type = {inproceedings},\n year = {2018},\n pages = {307-311},\n websites = {http://www.cad-conference.net/files/CAD18/CAD18-paris.html},\n month = {7},\n publisher = {CAD Solutions LLC},\n id = {f4fea1ac-b432-3e3f-8405-0f6cb3f08a96},\n created = {2020-12-30T02:11:40.986Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.986Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Guachi2018b},\n source_type = {inproceedings},\n private_publication = {false},\n bibtype = {inproceedings},\n author = {Guachi, Robinson and Bici, Michele and Guachi, Lorena and Campana, Francesca and Bini, Fabiano and Marinozzi, Franco},\n doi = {10.14733/cadconfP.2018.307-311},\n booktitle = {Proceedings of CAD'18}\n}
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2016\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Color Invariant Study for Background Subtraction.\n \n \n \n \n\n\n \n Guachi, L.; Cocorullo, G.; Corsonello, P.; Frustaci, F.; and Perri, S.\n\n\n \n\n\n\n CENICS 2016 : The Ninth International Conference on Advances in Circuits, Electronics and Micro-electronics. 2016.\n \n\n\n\n
\n\n\n\n \n \n \"ColorWebsite\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{\n title = {Color Invariant Study for Background Subtraction},\n type = {article},\n year = {2016},\n keywords = {-image processing,background subtraction,color},\n websites = {https://www.thinkmind.org/articles/cenics_2016_1_10_60015.pdf},\n id = {83ddd601-6ab1-3bd5-88f8-e9262a80f376},\n created = {2020-12-30T02:11:40.451Z},\n file_attached = {false},\n profile_id = {e5f1b339-ec56-313b-b123-fd0a1c527f0d},\n last_modified = {2020-12-30T02:11:40.451Z},\n read = {false},\n starred = {false},\n authored = {true},\n confirmed = {true},\n hidden = {false},\n citation_key = {Guachi2016},\n source_type = {article},\n private_publication = {false},\n abstract = {—Effectiveness detection to extract objects of interest is a fundamental step in many computer vision systems. In real solutions, the accurate Background Subtraction (BS) is a challenge due to diverse and complex background types. Being the color widely used as descriptor to improve accuracy in several BS algorithms, in this paper we analyze four Color Invariants (CIs) based on the Kubelka-Munk theory combined with Gray scale. The capability of several CIs combinations in segmenting foreground is evaluated referring to five video sequences. This experimental study provides a point-of-view to choose the best color combination considering accuracy and the channel numbers which can be applied for image segmentation. The results demonstrate that the combination of the color invariant H with Gray scale achieves higher performance for foreground segmentation for both indoor and outdoor video sequences. Furthermore, it uses the minimum number of color channels.},\n bibtype = {article},\n author = {Guachi, Lorena and Cocorullo, Giuseppe and Corsonello, Pasquale and Frustaci, Fabio and Perri, Stefania},\n journal = {CENICS 2016 : The Ninth International Conference on Advances in Circuits, Electronics and Micro-electronics}\n}
2015 (1)
Embedded surveillance system using background subtraction and Raspberry Pi. Cocorullo, G.; Corsonello, P.; Frustaci, F.; Guachi, L.; and Perri, S. In 2015 AEIT International Annual Conference (AEIT), pages 1-5, October 2015. IEEE.
@inproceedings{Cocorullo2015,
  title     = {Embedded surveillance system using background subtraction and Raspberry Pi},
  author    = {Cocorullo, Giuseppe and Corsonello, Pasquale and Frustaci, Fabio and Guachi, Lorena and Perri, Stefania},
  booktitle = {2015 AEIT International Annual Conference (AEIT)},
  publisher = {IEEE},
  year      = {2015},
  month     = {10},
  pages     = {1-5},
  keywords  = {Background subtraction, Electronics, Embedded systems, Image processing, Raspberry Pi},
  doi       = {10.1109/AEIT.2015.7415219},
  url       = {http://ieeexplore.ieee.org/document/7415219/},
  abstract  = {One of the most challenging problems in computer vision is understanding video sequences so as to automatically detect and recognize moving objects. This work presents the development and inexpensive implementation of an efficient algorithm, based on the background subtraction technique, suitable for low-cost embedded video surveillance systems. The proposed algorithm combines a few historical frames with two channels, based on the color invariant H and the grayscale level, to achieve high performance and good quality even on the Raspberry Pi platform. Experimental results show that the implemented algorithm is robust against the noise typically occurring in both indoor and outdoor environments.}
}
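The abstract describes combining a small number of historical frames with the invariant H and grayscale channels on a Raspberry Pi. As an illustration only, here is a minimal sketch of a low-memory background model of that general kind: it keeps a short per-channel buffer of recent frames and thresholds the deviation from their per-pixel median. The class name, buffer length, and threshold are hypothetical and are not taken from the paper.

# Minimal sketch (assumptions, not the paper's code): a lightweight background
# model that keeps only a few historical frames per channel and flags a pixel
# as foreground when it deviates from the per-pixel median of that buffer.
# The two input channels (invariant H and grayscale) are assumed to be
# computed upstream; here they are just float arrays of identical shape.
from collections import deque
import numpy as np

class FewFrameBackgroundModel:
    def __init__(self, history=4, threshold=0.12):
        self.history = history         # number of historical frames kept (small, for low memory)
        self.threshold = threshold     # per-channel absolute difference threshold (placeholder)
        self.buffer = deque(maxlen=history)

    def update(self, channel):
        """Add the latest frame of one channel and return its foreground mask."""
        if not self.buffer:
            self.buffer.append(channel)
            return np.zeros(channel.shape, dtype=bool)
        background = np.median(np.stack(self.buffer), axis=0)
        mask = np.abs(channel - background) > self.threshold
        self.buffer.append(channel)
        return mask

# Usage idea: run one model per channel and OR the resulting masks.
# model_h, model_gray = FewFrameBackgroundModel(), FewFrameBackgroundModel()
# fg = model_h.update(h_frame) | model_gray.update(gray_frame)

Keeping only a handful of frames per channel bounds both memory and per-frame work, which is the property that matters on a Raspberry-Pi-class device.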
2014 (1)
A novel background subtraction method based on color invariants and grayscale levels. Guachi, L.; Cocorullo, G.; Corsonello, P.; Frustaci, F.; and Perri, S. In 2014 International Carnahan Conference on Security Technology (ICCST), pages 1-5, October 2014. IEEE.
@inproceedings{Guachi2014,
  title     = {A novel background subtraction method based on color invariants and grayscale levels},
  author    = {Guachi, Lorena and Cocorullo, Giuseppe and Corsonello, Pasquale and Frustaci, Fabio and Perri, Stefania},
  booktitle = {2014 International Carnahan Conference on Security Technology (ICCST)},
  publisher = {IEEE},
  year      = {2014},
  month     = {10},
  pages     = {1-5},
  keywords  = {Background subtraction, Video systems, automatic monitoring},
  doi       = {10.1109/CCST.2014.6987024},
  url       = {http://ieeexplore.ieee.org/document/6987024/},
  abstract  = {This paper presents a new background subtraction method that takes advantage of color invariants combined with gray levels. The proposed method works robustly, reducing misclassified foreground objects. Gaussian mixtures are maintained for each pixel over two channels: the color invariants, which are derived from a physical model, and the gray levels, used as an image descriptor. The background models are updated through a random process, chosen because in many practical situations it is not necessary to update every background pixel model for every new frame. The new algorithm has been compared with three state-of-the-art methods. Experimental results demonstrate that the proposed method achieves higher robustness, is less sensitive to noise, and increases the number of pixels correctly classified as foreground in both indoor and outdoor video sequences.}
}
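The abstract describes per-pixel Gaussian mixtures over two channels with a randomized update policy, so that only some background pixel models are refreshed on each frame. The sketch below illustrates just that randomized-update idea under simplifying assumptions: a single Gaussian per pixel instead of a mixture, one channel, and hypothetical parameter values. It is not the paper's algorithm.

# Minimal sketch (a simplification, not the paper's method): a per-pixel
# Gaussian background model with a randomized update policy, showing how
# only a random subset of background pixel models is refreshed per frame.
import numpy as np

class RandomlyUpdatedGaussianModel:
    def __init__(self, shape, k=2.5, alpha=0.05, update_prob=1.0 / 16.0, rng=None):
        self.mean = np.zeros(shape)       # per-pixel background mean
        self.var = np.full(shape, 0.01)   # per-pixel background variance
        self.k = k                        # threshold in standard deviations (placeholder)
        self.alpha = alpha                # learning rate for mean/variance updates (placeholder)
        self.update_prob = update_prob    # chance that a background pixel is updated this frame
        self.rng = rng or np.random.default_rng()

    def apply(self, channel):
        """Return the foreground mask and refresh a random subset of background pixels."""
        deviation = np.abs(channel - self.mean)
        foreground = deviation > self.k * np.sqrt(self.var)
        # Randomly choose which background pixels get their model updated this frame.
        update = (~foreground) & (self.rng.random(channel.shape) < self.update_prob)
        self.mean[update] += self.alpha * (channel[update] - self.mean[update])
        self.var[update] += self.alpha * (deviation[update] ** 2 - self.var[update])
        return foreground

A mixture model would keep several (mean, variance, weight) triples per pixel and match each new value against the most probable component; the sketch deliberately omits that to stay short.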
\n"}; document.write(bibbase_data.data);