Women Engineers to the Power.
Mery, D.; and Kuzmicic, J.
Journal I3, School of Engineering, 2017(9): 2-3. 2017.
@article{Mery2017:I3-Editorial,
  title   = {Women Engineers to the Power},
  author  = {Mery, D. and Kuzmicic, J.},
  journal = {Journal I3, School of Engineering},
  year    = {2017},
  volume  = {2017},
  number  = {9},
  pages   = {2-3},
  url     = {http://dmery.sitios.ing.uc.cl/Prints/Other-Journals/2017-JournalI3-Editorial.pdf}
}

Method for Automatic Surface Inspection using Models-Based 3D Descriptor.
Madrigal, C.; Branch, J.; Restrepo, A.; and Mery, D.
Sensors, 17(10): 2262. 2017.

@article{Mery2017:Sensors,
  title    = {Method for Automatic Surface Inspection using Models-Based 3D Descriptor},
  author   = {Madrigal, C.A. and Branch, J.W. and Restrepo, A. and Mery, D.},
  journal  = {Sensors},
  year     = {2017},
  volume   = {17},
  number   = {10},
  pages    = {2262},
  url      = {http://www.mdpi.com/1424-8220/17/10/2262/pdf},
  doi      = {10.3390/s17102262},
  abstract = {Automatic visual inspection allows identifying surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detecting and recognizing them is a challenge, in particular when the defect generates topological deformations that do not show up as strong contrast in the 2D image. In this paper, we present a method to recognize surface defects in 3D point clouds. First, we propose a novel 3D local descriptor called MPFH (Model Point Feature Histogram) for defect detection. Our descriptor is inspired by earlier ones such as PFH (Point Feature Histogram). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Second, through a classification stage, the points on the surface are labeled as one of five types of primitives and the defect is detected. Third, the connected components of primitives are projected onto a plane, forming a 2D image. Finally, 2D geometric features are extracted and the defects are recognized by a support vector machine. The database used is composed of 3D simulated surfaces, 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results show that the proposed description method is robust to noise and scale, and sufficiently discriminative to detect some surface defects. The performance of the proposed method was evaluated on the task of classifying 3D point clouds into primitives, reporting an accuracy of 95\%, higher than other state-of-the-art descriptors. The rate of recognition of defects was close to 94\%.}
}

Abstract: Automatic visual inspection allows identifying surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detecting and recognizing them is a challenge, in particular when the defect generates topological deformations that do not show up as strong contrast in the 2D image. In this paper, we present a method to recognize surface defects in 3D point clouds. First, we propose a novel 3D local descriptor called MPFH (Model Point Feature Histogram) for defect detection. Our descriptor is inspired by earlier ones such as PFH (Point Feature Histogram). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative difference between models of the same region a histogram is generated representing the underlying surface changes. Second, through a classification stage, the points on the surface are labeled as one of five types of primitives and the defect is detected. Third, the connected components of primitives are projected onto a plane, forming a 2D image. Finally, 2D geometric features are extracted and the defects are recognized by a support vector machine. The database used is composed of 3D simulated surfaces, 3D reconstructions of defects in welding, artificial teeth, indentations in materials, ceramics and 3D models of defects. The quantitative and qualitative results show that the proposed description method is robust to noise and scale, and sufficiently discriminative to detect some surface defects. The performance of the proposed method was evaluated on the task of classifying 3D point clouds into primitives, reporting an accuracy of 95%, higher than other state-of-the-art descriptors. The rate of recognition of defects was close to 94%.
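
As a rough illustration of the descriptor family this paper builds on, here is a minimal, PFH-flavored sketch in Python, not the authors' MPFH implementation: per-point normals are estimated by PCA over the k nearest neighbors, and a histogram of normal angles serves as a toy local shape descriptor. The parameters k and n_bins are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, tree, k=10):
    """Per-point normals from PCA over the k nearest neighbors."""
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        # Normal = direction of least variance (last right-singular vector).
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def local_histogram(points, normals, tree, i, k=10, n_bins=8):
    """Histogram of angles between point i's normal and its neighbors' normals."""
    _, nbrs = tree.query(points[i], k=k)
    cos = np.clip(normals[nbrs] @ normals[i], -1.0, 1.0)
    angles = np.arccos(np.abs(cos))             # sign-invariant normal angles
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi / 2))
    return hist / hist.sum()                     # normalized toy descriptor

pts = np.random.rand(200, 3)                      # toy point cloud
kd = cKDTree(pts)
nrm = estimate_normals(pts, kd)
print(local_histogram(pts, nrm, kd, i=0))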

Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.
Moenne-Loccoz, C.; Vergara, R.; Lopez, V.; Mery, D.; and Cosmelli, D.
Frontiers in Computational Neuroscience, 11(80). 2017.

@article{Mery2017:Frontiers,
  title    = {Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task},
  author   = {Moenne-Loccoz, C. and Vergara, R.C. and Lopez, V. and Mery, D. and Cosmelli, D.},
  journal  = {Frontiers in Computational Neuroscience},
  year     = {2017},
  volume   = {11},
  number   = {80},
  url      = {http://dmery.sitios.ing.uc.cl/Prints/ISI-Journals/2017-FNCOM.pdf},
  abstract = {Our daily interaction with the world is full of situations in which we develop experience through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants who master the task.}
}

Abstract: Our daily interaction with the world is full of situations in which we develop experience through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants who master the task.
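
To make the modeling machinery concrete, here is a minimal numpy sketch of the forward algorithm, which scores a discrete choice sequence under a hidden "strategy" HMM. The 2-state toy parameters below are assumptions for illustration, not the model fitted in the paper.

import numpy as np

def forward_loglik(obs, pi, A, B):
    """log P(obs) under start probs pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); loglik = np.log(c); alpha /= c   # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum(); loglik += np.log(c); alpha /= c
    return loglik

pi = np.array([0.5, 0.5])               # two latent strategies
A  = np.array([[0.9, 0.1],              # strategies are "sticky" over trials
               [0.1, 0.9]])
B  = np.array([[0.8, 0.2],              # strategy 0 prefers choice 0 (e.g. left)
               [0.3, 0.7]])             # strategy 1 prefers choice 1 (e.g. right)
print(forward_loglik([0, 0, 1, 1, 1], pi, A, B))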

Comparing Neural and Attractiveness-based Visual Features for Artwork Recommendation.
Dominguez, V.; Messina, P.; Parra, D.; Mery, D.; Trattner, C.; and Soto, A.
arXiv preprint arXiv:1706.07515v1. 2017.

@article{RecSys2017:Artwork,
  title    = {Comparing Neural and Attractiveness-based Visual Features for Artwork Recommendation},
  author   = {Dominguez, V. and Messina, P. and Parra, D. and Mery, D. and Trattner, C. and Soto, A.},
  journal  = {arXiv preprint arXiv:1706.07515v1},
  year     = {2017},
  url      = {https://arxiv.org/abs/1706.07515},
  abstract = {Advances in image processing and computer vision in recent years have brought about the use of visual features in artwork recommendation. Recent works have shown that visual features obtained from pre-trained deep neural networks (DNNs) perform very well for recommending digital art. Other recent works have shown that explicit visual features (EVF) based on attractiveness can perform well in preference prediction tasks, but no previous work has compared DNN features versus specific attractiveness-based visual features (e.g. brightness, texture) in terms of recommendation performance. In this work, we study and compare the performance of DNN and EVF features for the purpose of physical artwork recommendation using transaction data from UGallery, an online store of physical paintings. In addition, we perform an exploratory analysis to understand whether DNN embedding features have some relation with certain EVF. Our results show that DNN features outperform EVF, that certain EVF features are more suited for physical artwork recommendation and, finally, we show evidence that certain neurons in the DNN might be partially encoding visual features such as brightness, providing an opportunity for explaining recommendations based on visual neural models.}
}

Abstract: Advances in image processing and computer vision in recent years have brought about the use of visual features in artwork recommendation. Recent works have shown that visual features obtained from pre-trained deep neural networks (DNNs) perform very well for recommending digital art. Other recent works have shown that explicit visual features (EVF) based on attractiveness can perform well in preference prediction tasks, but no previous work has compared DNN features versus specific attractiveness-based visual features (e.g. brightness, texture) in terms of recommendation performance. In this work, we study and compare the performance of DNN and EVF features for the purpose of physical artwork recommendation using transaction data from UGallery, an online store of physical paintings. In addition, we perform an exploratory analysis to understand whether DNN embedding features have some relation with certain EVF. Our results show that DNN features outperform EVF, that certain EVF features are more suited for physical artwork recommendation and, finally, we show evidence that certain neurons in the DNN might be partially encoding visual features such as brightness, providing an opportunity for explaining recommendations based on visual neural models.
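
A toy sketch of the two feature families being compared: an explicit attractiveness-based visual feature (brightness) computed directly from pixels, versus cosine similarity between DNN embeddings. The random arrays below stand in for real artwork images and pre-trained network features; nothing here reproduces the paper's pipeline.

import numpy as np

def brightness(rgb):
    """Mean luminance in [0, 1]: one classic attractiveness-based EVF."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float((0.299 * r + 0.587 * g + 0.114 * b).mean())

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

img_a = np.random.rand(64, 64, 3)             # stand-in artwork images
img_b = np.random.rand(64, 64, 3)
emb_a = np.random.rand(2048)                  # stand-in DNN embeddings
emb_b = np.random.rand(2048)                  # (e.g. a CNN's penultimate layer)

print("EVF (brightness) gap:", abs(brightness(img_a) - brightness(img_b)))
print("DNN cosine similarity:", cosine(emb_a, emb_b))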

Learning Face Similarity for Re-identification from Real Surveillance Video: A Deep Metric Solution.
Li, P.; Flynn, P.; Mery, D.; and Prieto, M.
In International Joint Conference on Biometrics (IJCB 2017), 2017.

@inproceedings{Mery2017:IJCB,
  title     = {Learning Face Similarity for Re-identification from Real Surveillance Video: A Deep Metric Solution},
  author    = {Li, P. and Flynn, P. and Mery, D. and Prieto, M.L.},
  booktitle = {International Joint Conference on Biometrics (IJCB 2017)},
  year      = {2017},
  abstract  = {Person re-identification (ReID) is the task of automatically matching persons across surveillance cameras with location or time differences. Nearly all proposed ReID approaches exploit body features. Even if successfully captured in the scene, faces are often assumed to be unhelpful to the ReID process. As cameras and surveillance systems improve, `Facial ReID' approaches deserve attention. The following contributions are made in this work: 1) We describe a high-quality dataset for person re-identification featuring faces. This dataset was collected from a real surveillance network in a municipal rapid transit system, and includes the same people appearing at multiple sites at multiple times wearing different attire. 2) We employ new DNN architectures and patch matching techniques to handle face misalignment in quality regimes where landmarking fails. We further boost the performance by adopting a fully convolutional structure and spatial pyramid pooling (SPP).}
}

Abstract: Person re-identification (ReID) is the task of automatically matching persons across surveillance cameras with location or time differences. Nearly all proposed ReID approaches exploit body features. Even if successfully captured in the scene, faces are often assumed to be unhelpful to the ReID process. As cameras and surveillance systems improve, 'Facial ReID' approaches deserve attention. The following contributions are made in this work: 1) We describe a high-quality dataset for person re-identification featuring faces. This dataset was collected from a real surveillance network in a municipal rapid transit system, and includes the same people appearing at multiple sites at multiple times wearing different attire. 2) We employ new DNN architectures and patch matching techniques to handle face misalignment in quality regimes where landmarking fails. We further boost the performance by adopting a fully convolutional structure and spatial pyramid pooling (SPP).
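
A minimal numpy sketch of spatial pyramid pooling (SPP), the fixed-length pooling scheme the paper adopts; the feature-map size, channel count and pyramid levels below are illustrative assumptions, not the authors' network.

import numpy as np

def spp(fmap, levels=(1, 2, 4)):
    """Max-pool an (H, W, C) feature map over pyramid grids and concatenate."""
    h, w, _ = fmap.shape
    pooled = []
    for n in levels:                           # an n x n grid per pyramid level
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = fmap[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                pooled.append(cell.max(axis=(0, 1)))   # one C-vector per cell
    return np.concatenate(pooled)              # length C * (1 + 4 + 16) here

fmap = np.random.rand(13, 17, 256)              # any spatial size works
print(spp(fmap).shape)                           # (5376,) regardless of H and W

The point of SPP is visible in the last line: the output length depends only on the channel count and the pyramid, so face crops of different sizes map to a fixed-length descriptor.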

Biometría vs. Ficción [Biometrics vs. Fiction].
Mery, D.
In Especial de Videovigilancia y Control de Acceso [Special Issue on Video Surveillance and Access Control], El Mercurio (23/05/17). 2017.

@incollection{Mery2017:ElMercurio,
  title     = {{Biometr\'ia vs. Ficci\'on}},
  author    = {Mery, D.},
  booktitle = {Especial de Videovigilancia y Control de Acceso, El Mercurio (23/05/17)},
  year      = {2017},
  url       = {http://impresa.elmercurio.com/Pages/NewsDetail.aspx?dt=2017-05-23&PaginaId=4&SupplementId=19&BodyId=17}
}

A Logarithmic X-ray Imaging Model for Baggage Inspection: Simulation and Object Detection.
Mery, D.; and Katsaggelos, A.
In 13th IEEE CVPR Workshop on Perception Beyond the Visible Spectrum (PBVS 2017), 2017.

@inproceedings{Mery2017:PBVS,
  title     = {A Logarithmic X-ray Imaging Model for Baggage Inspection: Simulation and Object Detection},
  author    = {Mery, D. and Katsaggelos, A.K.},
  booktitle = {13th IEEE CVPR Workshop on Perception Beyond the Visible Spectrum (PBVS 2017)},
  year      = {2017},
  url       = {http://dmery.sitios.ing.uc.cl/Prints/Conferences/International/2017-PBVS.pdf},
  abstract  = {In recent years, many computer vision algorithms have been developed for X-ray testing tasks. Some of them deal with baggage inspection, in which the aim is to automatically detect target objects. The progress in automated baggage inspection, however, is modest and very limited compared to what is needed, because X-ray screening systems are still operated by human inspectors. In this work, we present an X-ray imaging model that can separate foreground from background in baggage screening. The model can be used in two main tasks: i) simulation of new X-ray images, where simulated images can be used in training programs for human inspectors, or can be used to enhance datasets for computer vision algorithms; ii) detection of (threat) objects, where new algorithms can be employed to perform automated baggage inspection or to aid a user in the inspection task by showing potential threats. In our model, rather than the multiplication of foreground and background that is typically used in X-ray imaging, we propose the addition of logarithmic images. This allows the use of linear strategies to superimpose images of threat objects onto X-ray images, and the use of sparse representations in order to segment target objects. In our experiments, we simulate new X-ray images of handguns, shuriken and razor blades, for which it is impossible to distinguish simulated from real X-ray images. In addition, we show in our experiments the effective detection of shuriken, razor blades and handguns using the proposed algorithm.}
}

Abstract: In recent years, many computer vision algorithms have been developed for X-ray testing tasks. Some of them deal with baggage inspection, in which the aim is to automatically detect target objects. The progress in automated baggage inspection, however, is modest and very limited compared to what is needed, because X-ray screening systems are still operated by human inspectors. In this work, we present an X-ray imaging model that can separate foreground from background in baggage screening. The model can be used in two main tasks: i) simulation of new X-ray images, where simulated images can be used in training programs for human inspectors, or can be used to enhance datasets for computer vision algorithms; ii) detection of (threat) objects, where new algorithms can be employed to perform automated baggage inspection or to aid a user in the inspection task by showing potential threats. In our model, rather than the multiplication of foreground and background that is typically used in X-ray imaging, we propose the addition of logarithmic images. This allows the use of linear strategies to superimpose images of threat objects onto X-ray images, and the use of sparse representations in order to segment target objects. In our experiments, we simulate new X-ray images of handguns, shuriken and razor blades, for which it is impossible to distinguish simulated from real X-ray images. In addition, we show in our experiments the effective detection of shuriken, razor blades and handguns using the proposed algorithm.
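
The core of the model can be demonstrated in a few lines: absorption-only X-ray transmission is multiplicative (I = I_bg * I_fg), so superimposing a threat object onto a bag becomes a simple addition in the log domain (log I = log I_bg + log I_fg). The images below are synthetic stand-ins, not GDXray data.

import numpy as np

bag    = 0.2 + 0.8 * np.random.rand(128, 128)   # background transmission in (0, 1]
threat = np.ones((128, 128))
threat[40:80, 50:90] = 0.5                       # absorbing rectangular "object"

# Multiplicative model in the intensity domain ...
fused_mult = bag * threat

# ... equals addition in the log domain.
fused_log = np.exp(np.log(bag) + np.log(threat))

assert np.allclose(fused_mult, fused_log)
print(fused_mult.min(), fused_mult.max())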

Threat Objects Detection in X-ray Images Using an Active Vision Approach.
Riffo, V.; Flores, S.; and Mery, D.
Journal of Nondestructive Evaluation, 36(3): 44. 2017.

@article{Mery2017:JNDE-Active,
  title    = {Threat Objects Detection in X-ray Images Using an Active Vision Approach},
  author   = {Riffo, Vladimir and Flores, Sebastian and Mery, Domingo},
  journal  = {Journal of Nondestructive Evaluation},
  year     = {2017},
  volume   = {36},
  number   = {3},
  pages    = {44},
  issn     = {1573-4862},
  doi      = {10.1007/s10921-017-0419-3},
  url      = {http://dx.doi.org/10.1007/s10921-017-0419-3},
  abstract = {X-ray testing for baggage inspection has been increasingly used at airports, reducing the risk of terrorist crimes and attacks. Nevertheless, this task is still being carried out by human inspectors and with limited technological support. The technology that is being used is not always effective, as it depends mainly on the position of the object of interest, occlusion, and the accumulated experience of the inspector. To address this problem, we have developed an approach that inspects X-ray images using active vision in order to automatically detect objects that represent a threat. Our method includes three steps: detection of potential threat objects in single views based on the similarity of features and spatial distribution; estimation of the best next view using Q-learning; and elimination of false alarms based on multiple-view constraints. We tested our algorithm on X-ray images that included handguns and razor blades. In the detection of handguns we registered good results for recall and precision (Re = 67\%, Pr = 83\%), along with a high performance in the detection of razor blades (Re = 82\%, Pr = 100\%), taking into consideration 360 inspections in each case. Our results indicate that active non-destructive inspection using X-ray images leads to more effective object detection in complex environments, and helps to offset certain levels of occlusion and the internal disorder of baggage.}
}

Abstract: X-ray testing for baggage inspection has been increasingly used at airports, reducing the risk of terrorist crimes and attacks. Nevertheless, this task is still being carried out by human inspectors and with limited technological support. The technology that is being used is not always effective, as it depends mainly on the position of the object of interest, occlusion, and the accumulated experience of the inspector. To address this problem, we have developed an approach that inspects X-ray images using active vision in order to automatically detect objects that represent a threat. Our method includes three steps: detection of potential threat objects in single views based on the similarity of features and spatial distribution; estimation of the best next view using Q-learning; and elimination of false alarms based on multiple-view constraints. We tested our algorithm on X-ray images that included handguns and razor blades. In the detection of handguns we registered good results for recall and precision (Re = 67%, Pr = 83%), along with a high performance in the detection of razor blades (Re = 82%, Pr = 100%), taking into consideration 360 inspections in each case. Our results indicate that active non-destructive inspection using X-ray images leads to more effective object detection in complex environments, and helps to offset certain levels of occlusion and the internal disorder of baggage.
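
For the best-next-view step, here is a toy tabular Q-learning sketch: states and actions are viewpoint indices and the reward is a stand-in detection score. The state, action and reward definitions are illustrative assumptions, not the paper's formulation.

import numpy as np

rng = np.random.default_rng(0)
n_views = 8
Q = np.zeros((n_views, n_views))              # Q[state, action]
alpha, gamma, eps = 0.1, 0.9, 0.2             # learning rate, discount, exploration

def reward(view):
    # Stand-in: pretend views 3 and 5 give the clearest look at the object.
    return 1.0 if view in (3, 5) else 0.0

for episode in range(500):
    s = int(rng.integers(n_views))
    for _ in range(5):                        # a short inspection sequence
        a = int(rng.integers(n_views)) if rng.random() < eps else int(Q[s].argmax())
        r = reward(a)
        Q[s, a] += alpha * (r + gamma * Q[a].max() - Q[s, a])
        s = a

print("best next view from view 0:", int(Q[0].argmax()))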

Modelo pionero de entrenamiento en trauma vascular impreso en 3D en base a imágenes de pacientes reales: un trabajo interdisciplinario de simulación en educación quirúrgica [A pioneering 3D-printed vascular trauma training model based on images of real patients: an interdisciplinary simulation effort in surgical education].
Achurra, P.; Mondragón, G.; Caro, I.; Figueroa, D.; Marine, L.; Mery, D.; and Martínez, J.
Investigación en Educación Médica, 6(22): 133-134. 2017.

@article{Mery2017:3DModel,
  title     = {Modelo pionero de entrenamiento en trauma vascular impreso en 3D en base a im{\'a}genes de pacientes reales: un trabajo interdisciplinario de simulaci{\'o}n en educaci{\'o}n quir{\'u}rgica},
  author    = {Achurra, Pablo and Mondrag{\'o}n, Germ{\'a}n and Caro, Iv{\'a}n and Figueroa, Daniela and Marine, Leopoldo and Mery, Domingo and Mart{\'i}nez, Jorge},
  journal   = {Investigaci{\'o}n en Educaci{\'o}n M{\'e}dica},
  year      = {2017},
  volume    = {6},
  number    = {22},
  pages     = {133-134},
  publisher = {Elsevier}
}

Face Recognition Using Sparse Fingerprint Classification Algorithm.
Larrain, T.; Bernhard, J.; Mery, D.; and Bowyer, K.
IEEE Transactions on Information Forensics and Security, 12(7): 1646-1657. 2017.

@article{Mery2017:IEEE-TIFS,
  title    = {Face Recognition Using Sparse Fingerprint Classification Algorithm},
  author   = {Larrain, T. and Bernhard, J. and Mery, D. and Bowyer, K.},
  journal  = {IEEE Transactions on Information Forensics and Security},
  year     = {2017},
  volume   = {12},
  number   = {7},
  pages    = {1646-1657},
  url      = {http://ieeexplore.ieee.org/document/7875165/},
  doi      = {10.1109/TIFS.2017.2680403},
  abstract = {Unconstrained face recognition is still an open problem as state-of-the-art algorithms have not yet reached high recognition performance in real-world environments. This paper addresses this problem by proposing a new approach called Sparse Fingerprint Classification Algorithm (SFCA). In the training phase, for each enrolled subject, a grid of patches is extracted from each subject's face images in order to construct representative dictionaries. In the testing phase, a grid is extracted from the query image and every patch is transformed into a binary sparse representation using the dictionary, creating a fingerprint of the face. The binary coefficients vote for their corresponding classes and the maximum-vote class decides the identity of the query image. Experiments were carried out on seven widely-used face databases. The results demonstrate that when the size of the dataset is small or medium (i.e., the number of subjects is not greater than one hundred), SFCA is able to deal with a larger degree of variability in ambient lighting, pose, expression, occlusion, face size, and distance from the camera than other current state-of-the-art algorithms.}
}

Abstract: Unconstrained face recognition is still an open problem as state-of-the-art algorithms have not yet reached high recognition performance in real-world environments. This paper addresses this problem by proposing a new approach called Sparse Fingerprint Classification Algorithm (SFCA). In the training phase, for each enrolled subject, a grid of patches is extracted from each subject's face images in order to construct representative dictionaries. In the testing phase, a grid is extracted from the query image and every patch is transformed into a binary sparse representation using the dictionary, creating a fingerprint of the face. The binary coefficients vote for their corresponding classes and the maximum-vote class decides the identity of the query image. Experiments were carried out on seven widely-used face databases. The results demonstrate that when the size of the dataset is small or medium (i.e., the number of subjects is not greater than one hundred), SFCA is able to deal with a larger degree of variability in ambient lighting, pose, expression, occlusion, face size, and distance from the camera than other current state-of-the-art algorithms.
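
An SFCA-flavored sketch of the voting idea: sparse-code a query patch against a concatenation of per-subject dictionaries and let the nonzero coefficients vote for their classes. Random atoms stand in for the learned patch dictionaries; this is not the released SFCA code.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
d, n_atoms, n_subjects = 64, 32, 3
# One block of unit-norm patch atoms per enrolled subject, concatenated.
blocks = [rng.standard_normal((d, n_atoms)) for _ in range(n_subjects)]
D_all = np.hstack([D / np.linalg.norm(D, axis=0) for D in blocks])
labels = np.repeat(np.arange(n_subjects), n_atoms)     # class of each atom

# Query patch: a noisy atom belonging to subject 2.
query = D_all[:, 2 * n_atoms + 5] + 0.05 * rng.standard_normal(d)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D_all, query)                                   # query ~ D_all @ coef_
votes = np.bincount(labels[omp.coef_ != 0], minlength=n_subjects)
print("votes per subject:", votes, "-> predicted:", int(votes.argmax()))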

Automatic Defect Recognition in X-ray Testing using Computer Vision.
Mery, D.; and Arteta, C.
In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV 2017), 2017.

@inproceedings{Mery2017:WACV,
  title     = {Automatic Defect Recognition in X-ray Testing using Computer Vision},
  author    = {Mery, D. and Arteta, C.},
  booktitle = {2017 IEEE Winter Conference on Applications of Computer Vision (WACV 2017)},
  year      = {2017},
  url       = {http://dmery.sitios.ing.uc.cl/Prints/Conferences/International/2017-WACV.pdf},
  abstract  = {To ensure safety in the construction of important metallic components for roadworthiness, it is necessary to check every component thoroughly using non-destructive testing. In recent decades, X-ray testing has been adopted as the principal non-destructive testing method to identify defects within a component which are undetectable to the naked eye. Nowadays, modern computer vision techniques, such as deep learning and sparse representations, are opening new avenues in automatic object recognition in optical images. These techniques have been broadly used in object and texture recognition by the computer vision community with promising results in optical images. However, a comprehensive evaluation in X-ray testing is required. In this paper, we release a new dataset containing around 47,500 cropped X-ray images of 32 x 32 pixels, with and without defects, in automotive components. Using this dataset, we evaluate and compare 24 computer vision techniques including deep learning, sparse representations, local descriptors and texture features, among others. We show in our experiments that the best performance was achieved by a simple LBP descriptor with a linear SVM classifier, obtaining 97\% precision and 94\% recall. We believe that the methodology presented could be used in similar projects that have to deal with automated detection of defects.}
}

Abstract: To ensure safety in the construction of important metallic components for roadworthiness, it is necessary to check every component thoroughly using non-destructive testing. In recent decades, X-ray testing has been adopted as the principal non-destructive testing method to identify defects within a component which are undetectable to the naked eye. Nowadays, modern computer vision techniques, such as deep learning and sparse representations, are opening new avenues in automatic object recognition in optical images. These techniques have been broadly used in object and texture recognition by the computer vision community with promising results in optical images. However, a comprehensive evaluation in X-ray testing is required. In this paper, we release a new dataset containing around 47,500 cropped X-ray images of 32 x 32 pixels, with and without defects, in automotive components. Using this dataset, we evaluate and compare 24 computer vision techniques including deep learning, sparse representations, local descriptors and texture features, among others. We show in our experiments that the best performance was achieved by a simple LBP descriptor with a linear SVM classifier, obtaining 97% precision and 94% recall. We believe that the methodology presented could be used in similar projects that have to deal with automated detection of defects.
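
The winning pipeline is simple enough to sketch end to end: uniform LBP histograms on 32 x 32 patches fed to a linear SVM. Random patches stand in for the released defect dataset, so the printed accuracy is meaningless; only the pipeline shape is the point.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

P, R = 8, 1                                     # classic LBP neighborhood

def lbp_hist(patch):
    """Normalized histogram of uniform LBP codes (P + 2 possible codes)."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, (200, 32, 32)).astype(np.uint8)  # stand-in patches
labels = rng.integers(0, 2, 200)                # 1 = defect, 0 = no defect
X = np.array([lbp_hist(p) for p in patches])

clf = LinearSVC().fit(X[:150], labels[:150])
print("toy accuracy:", clf.score(X[150:], labels[150:]))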

Modern Computer Vision Techniques for X-ray Testing in Baggage Inspection.
Mery, D.; Svec, E.; Arias, M.; Riffo, V.; Saavedra, J.; and Banerjee, S.
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(4): 682-692. 2017.

@article{Mery2016:IEEE-SMCMa,
  title    = {Modern Computer Vision Techniques for X-ray Testing in Baggage Inspection},
  author   = {Mery, D. and Svec, E. and Arias, M. and Riffo, V. and Saavedra, J.M. and Banerjee, S.},
  journal  = {IEEE Transactions on Systems, Man, and Cybernetics: Systems},
  year     = {2017},
  volume   = {47},
  number   = {4},
  pages    = {682-692},
  doi      = {10.1109/TSMC.2016.2628381},
  url      = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7775025&tag=1},
  abstract = {X-ray screening systems have been used to safeguard environments in which access control is of paramount importance. Security checkpoints have been placed at the entrances to many public places to detect prohibited items such as handguns and explosives. Generally, human operators are in charge of these tasks as automated recognition in baggage inspection is still far from perfect. Research and development on X-ray testing is, however, exploring new approaches based on computer vision that can be used to aid human operators. This paper attempts to make a contribution to the field of object recognition in X-ray testing by evaluating different computer vision strategies that have been proposed in recent years. We tested ten approaches. They are based on bag of words, sparse representations, deep learning and classic pattern recognition schemes, among others. For each method, we i) present a brief explanation, ii) show experimental results on the same database, and iii) provide concluding remarks discussing the pros and cons of each method. In order to make fair comparisons, we define a common experimental protocol based on training, validation and testing data (selected from the public GDXray database). The effectiveness of each method was tested in the recognition of three different threat objects: handguns, shuriken (ninja stars) and razor blades. In our experiments, the highest recognition rate was achieved by methods based on visual vocabularies and deep features, with more than 95\% accuracy. We strongly believe that it is possible to design an automated aid for the human inspection task using these computer vision algorithms.}
}

Abstract: X-ray screening systems have been used to safeguard environments in which access control is of paramount importance. Security checkpoints have been placed at the entrances to many public places to detect prohibited items such as handguns and explosives. Generally, human operators are in charge of these tasks as automated recognition in baggage inspection is still far from perfect. Research and development on X-ray testing is, however, exploring new approaches based on computer vision that can be used to aid human operators. This paper attempts to make a contribution to the field of object recognition in X-ray testing by evaluating different computer vision strategies that have been proposed in recent years. We tested ten approaches. They are based on bag of words, sparse representations, deep learning and classic pattern recognition schemes, among others. For each method, we i) present a brief explanation, ii) show experimental results on the same database, and iii) provide concluding remarks discussing the pros and cons of each method. In order to make fair comparisons, we define a common experimental protocol based on training, validation and testing data (selected from the public GDXray database). The effectiveness of each method was tested in the recognition of three different threat objects: handguns, shuriken (ninja stars) and razor blades. In our experiments, the highest recognition rate was achieved by methods based on visual vocabularies and deep features, with more than 95% accuracy. We strongly believe that it is possible to design an automated aid for the human inspection task using these computer vision algorithms.
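
One of the top-performing families, bag of visual words, fits in a short sketch: cluster local descriptors into a vocabulary, then encode each image as a normalized histogram of visual-word counts. Random descriptors stand in for real X-ray patch features, and the vocabulary size is an illustrative assumption.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_words = 50
train_desc = rng.standard_normal((1000, 64))     # pooled local descriptors
vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(train_desc)

def bow_encode(descriptors):
    """Histogram of visual-word assignments: one fixed-length vector per image."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

image_desc = rng.standard_normal((120, 64))       # descriptors of one image
print(bow_encode(image_desc).shape)                # (50,) feature vector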

Object recognition in X-ray testing using an efficient search algorithm in multiple views.
Mery, D.; Riffo, V.; Zuccar, I.; and Pieringer, C.
Insight - Non-Destructive Testing and Condition Monitoring, 59(2): 85-92. 2017.

@article{Mery2017:Insight,
  title    = {Object recognition in X-ray testing using an efficient search algorithm in multiple views},
  author   = {Mery, D. and Riffo, V. and Zuccar, I. and Pieringer, C.},
  journal  = {Insight - Non-Destructive Testing and Condition Monitoring},
  year     = {2017},
  volume   = {59},
  number   = {2},
  pages    = {85-92},
  url      = {http://dmery.sitios.ing.uc.cl/Prints/ISI-Journals/2017-Insight-MultiX-ray.pdf},
  abstract = {In order to reduce the security risk of a commercial aircraft, passengers are not allowed to take certain items in their carry-on baggage. For this reason, human operators are trained to detect prohibited items using a manually controlled baggage screening process. In this paper, we propose the use of an automated method based on multiple X-ray views to recognize certain regular objects with highly defined shapes and sizes. The method consists of two steps: `monocular analysis', to obtain possible detections in each view of a sequence, and `multiple view analysis', to recognize the objects of interest using matchings in all views. The search for matching candidates is efficiently performed using a lookup table that is computed off-line. In order to illustrate the effectiveness of the proposed method, experimental results on recognizing regular objects (clips, springs and razor blades) in pen cases are shown, achieving high precision and recall ($Pr = 95.7\%$, $Re = 92.5\%$) for 120 objects. We believe that it would be possible to design an automated aid in a target detection task using the proposed algorithm.}
}

Abstract: In order to reduce the security risk of a commercial aircraft, passengers are not allowed to take certain items in their carry-on baggage. For this reason, human operators are trained to detect prohibited items using a manually controlled baggage screening process. In this paper, we propose the use of an automated method based on multiple X-ray views to recognize certain regular objects with highly defined shapes and sizes. The method consists of two steps: 'monocular analysis', to obtain possible detections in each view of a sequence, and 'multiple view analysis', to recognize the objects of interest using matchings in all views. The search for matching candidates is efficiently performed using a lookup table that is computed off-line. In order to illustrate the effectiveness of the proposed method, experimental results on recognizing regular objects (clips, springs and razor blades) in pen cases are shown, achieving high precision and recall (Pr = 95.7%, Re = 92.5%) for 120 objects. We believe that it would be possible to design an automated aid in a target detection task using the proposed algorithm.
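
The multiple-view analysis rests on geometric consistency across views. The paper accelerates this with a precomputed lookup table; the sketch below substitutes the underlying test directly, keeping a detection only if a candidate in another view lies near its epipolar line (x2^T F x1 = 0). The fundamental matrix F and the 5-pixel threshold are made-up examples, not the paper's calibrated geometry.

import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance of x2 from the epipolar line of x1 (homogeneous 2D points)."""
    l = F @ x1                                 # epipolar line in view 2: (a, b, c)
    return abs(l @ x2) / np.hypot(l[0], l[1])

F = np.array([[0.0, -1e-4, 0.01],
              [1e-4, 0.0, -0.02],
              [-0.01, 0.02, 0.0]])             # illustrative F (rank 2 not enforced)
det_view1 = np.array([100.0, 120.0, 1.0])      # candidate detection in view 1
det_view2 = np.array([104.0, 118.0, 1.0])      # candidate detection in view 2

d = epipolar_distance(F, det_view1, det_view2)
print("match" if d < 5.0 else "no match", f"(distance = {d:.2f} px)")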