@Article{SchifferF19,
  author  = {S. Schiffer and A. Ferrein},
  title   = {{ERIKA—Early Robotics Introduction at Kindergarten Age}},
  journal = {Multimodal Technologies and Interaction},
  year    = {2018},
  volume  = {2},
  number  = {4},
  pages   = {64},
}
@inproceedings{HofmannMNSHLNFJ18,
  author    = {Till Hofmann and Victor Matar{\'{e}} and Tobias Neumann and
               Sebastian Sch{\"{o}}nitz and Christoph Henke and
               Nicolas Limpert and Tim Niemueller and Alexander Ferrein and
               Sabina Jeschke and Gerhard Lakemeyer},
  title     = {Enhancing Software and Hardware Reliability for a Successful
               Participation in the RoboCup Logistics League 2017},
  editor    = {Hidehisa Akiyama and Oliver Obst and Claude Sammut and
               Flavio Tonidandel},
  booktitle = {RoboCup 2017: Robot World Cup {XXI}},
  series    = {Lecture Notes in Computer Science},
  volume    = {11175},
  pages     = {486--497},
  publisher = {Springer},
  year      = {2018}
}
@InProceedings{Scholl:Suder:Schiffer:BVM2018:MedicVR,
  title        = "{Direct Volume Rendering in Virtual Reality}",
  author       = "Scholl, Ingrid and Suder, Sebastian and Schiffer, Stefan",
  editor       = "Maier, Andreas and Deserno, Thomas M. and Handels, Heinz and Maier-Hein, Klaus Hermann and Palm, Christoph and Tolxdorff, Thomas",
  booktitle    = "Bildverarbeitung f{\"u}r die Medizin 2018",
  year         = "2018",
  month        = "March",
  day          = "10--12",
  publisher    = "Springer",
  address      = "Berlin, Heidelberg",
  pages        = "297--302",
  isbn         = "978-3-662-56537-7",
  doi          = "10.1007/978-3-662-56537-7_79",
  springerlink = "https://link.springer.com/chapter/10.1007/978-3-662-56537-7_79",
  abstract     = "Direct Volume Rendering (DVR) techniques are used to
                  visualize surfaces from 3D volume data sets, without
                  computing a 3D geometry. Several surfaces can be
                  classified using a transfer function by assigning
                  optical properties like color and opacity (RGB$\alpha$)
                  to the voxel data. Finding a good transfer function in
                  order to separate specific structures from the volume
                  data set is in general a manual and time-consuming
                  procedure, and requires detailed knowledge of the data
                  and the image acquisition technique. In this paper, we
                  present a new Virtual Reality (VR) application based on
                  the HTC Vive headset. One-dimensional transfer functions
                  can be designed in VR while continuously rendering the
                  stereoscopic image pair through massively parallel
                  GPU-based ray casting shader techniques. The usability
                  of the VR application is evaluated.",
}
@inproceedings{Alhwarin0FS18,
  author         = {Faraj Alhwarin and Stefan Schiffer and Alexander Ferrein and Ingrid Scholl},
  editor         = {Sheldon Wiebe and Hugo Gamboa and Ana L. N. Fred and Sergi Berm{\'{u}}dez i Badia},
  title          = {{Optimized KinectFusion Algorithm for 3D Scanning Applications}},
  booktitle      = {Proceedings of the 11th International Joint Conference on Biomedical
                    Engineering Systems and Technologies ({BIOSTEC} 2018)},
  volume         = {2: BIOIMAGING},
  pages          = {50--57},
  publisher      = {SciTePress},
  isbn           = {978-989-758-278-3},
  year           = {2018},
  doi            = {10.5220/0006594700500057},
  url_scitepress = {http://www.scitepress.org/PublicationsDetail.aspx?ID=dZs8lGPb760=&t=1},
  note           = {Best Paper Candidate (Short Listed)},
  abstract       = {KinectFusion is an effective way to reconstruct indoor
                    scenes. It takes a depth image stream and uses the
                    iterative closest point (ICP) method to estimate the
                    camera motion. Then it merges the images in a volume
                    to construct a 3D model. The model accuracy is not
                    satisfactory for certain applications such as scanning
                    a human body to provide information about bone
                    structure health. For one thing, camera noise and
                    noise in the ICP method limit the accuracy; for
                    another, the error in estimating the global camera
                    poses accumulates. In this paper, we present a method
                    to optimize KinectFusion for 3D scanning in the above
                    scenarios. We aim to reduce the noise influence on
                    camera pose tracking. The idea is as follows: in our
                    application scenarios we can always assume that either
                    the camera rotates around the object to be scanned or
                    that the object rotates in front of the camera. In
                    both cases, the relative camera/object pose is located
                    on a 3D circle. Therefore, camera motion can be
                    described as a rotation around a fixed axis passing
                    through a fixed point. Since the axis and the center
                    of rotation are always fixed, the error averaging
                    principle can be utilized to reduce the noise impact
                    and hence to enhance the 3D model accuracy of the
                    scanned object.},
}
@inproceedings{AlhwarinFS18,
  author    = {Faraj Alhwarin and Alexander Ferrein and Ingrid Scholl},
  title     = {{CRVM:} Circular Random Variable-based Matcher - {A} Novel Hashing
               Method for Fast {NN} Search in High-dimensional Spaces},
  booktitle = {Proceedings of the 7th International Conference on Pattern Recognition
               Applications and Methods, {ICPRAM} 2018},
  editor    = {Maria De Marsico and Gabriella Sanniti di Baja and Ana L. N. Fred},
  pages     = {214--221},
  publisher = {SciTePress},
  year      = {2018},
  isbn      = {978-989-758-276-9},
}