@article{lanzi_chatgpt_2023, title = {{ChatGPT} and {Other} {Large} {Language} {Models} as {Evolutionary} {Engines} for {Online} {Interactive} {Collaborative} {Game} {Design}}, url = {https://www.proquest.com/working-papers/chatgpt-other-large-language-models-as/docview/2784138957/se-2}, abstract = {Large language models (LLMs) have taken the scientific world by storm, changing the landscape of natural language processing and human-computer interaction. These powerful tools can answer complex questions and, surprisingly, perform challenging creative tasks (e.g., generate code and applications to solve problems, write stories, pieces of music, etc.). In this paper, we present a collaborative design framework that combines interactive evolution and large language models to simulate the typical human design process. We use the former to exploit users' feedback for selecting the most promising ideas and large language models for a very complex creative task -- the recombination and variation of ideas. In our framework, the process starts with a brief and a set of candidate designs, either generated using a language model or proposed by the users. Next, users collaborate on the design process by providing feedback to an interactive genetic algorithm that selects, recombines, and mutates the most promising designs. We evaluated our framework on three game design tasks with human designers who collaborated remotely.}, language = {English}, journal = {arXiv.org}, author = {Lanzi, Pier Luca and Loiacono, Daniele}, month = feb, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Artificial intelligence, Language, Artificial Intelligence, Machine Learning, Business And Economics--Banking And Finance, Human-Computer Interaction, Natural language processing, Feedback, Task complexity, Collaboration, Design, Genetic algorithms, Neural and Evolutionary Computation}, }
@article{prieto_investigating_2023, title = {Investigating the use of {ChatGPT} for the scheduling of construction projects}, url = {https://www.proquest.com/working-papers/investigating-use-chatgpt-scheduling-construction/docview/2774004436/se-2}, abstract = {Large language models such as ChatGPT have the potential to revolutionize the construction industry by automating repetitive and time-consuming tasks. This paper presents a study in which ChatGPT was used to generate a construction schedule for a simple construction project. The output from ChatGPT was evaluated by a pool of participants that provided feedback regarding their overall interaction experience and the quality of the output. The results show that ChatGPT can generate a coherent schedule that follows a logical approach to fulfill the requirements of the scope indicated. The participants had an overall positive interaction experience and indicated the great potential of such a tool to automate many preliminary and time-consuming tasks. However, the technology still has limitations, and further development is needed before it can be widely adopted in the industry. Overall, this study highlights the potential of using large language models in the construction industry and the need for further research.}, language = {English}, journal = {arXiv.org}, author = {Prieto, Samuel A and Mengiste, Eyob T and García de Soto, Borja}, month = jan, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Automation, Chatbots, Artificial Intelligence, Business And Economics--Banking And Finance, Human-Computer Interaction, Construction industry, Schedules, Scheduling}, }
@article{krugel_moral_2023, title = {The moral authority of {ChatGPT}}, url = {https://www.proquest.com/working-papers/moral-authority-chatgpt/docview/2766880610/se-2}, abstract = {ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it might improve the moral judgment and decisions of users, who often hold contradictory moral beliefs. Unfortunately, ChatGPT turns out highly inconsistent as a moral advisor. Nonetheless, it influences users' moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT threatens to corrupt rather than improves users' judgment. These findings raise the question of how to ensure the responsible use of ChatGPT and similar AI. Transparency is often touted but seems ineffective. We propose training to improve digital literacy.}, language = {English}, journal = {arXiv.org}, author = {Krügel, Sebastian and Ostermaier, Andreas and Uhl, Matthias}, month = jan, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Chatbots, Artificial Intelligence, Machine Learning, Business And Economics--Banking And Finance, Questions, Computers and Society, Human-Computer Interaction}, }
@article{maddigan_chat2vis_2023, title = {{Chat2VIS}: {Generating} {Data} {Visualisations} via {Natural} {Language} using {ChatGPT}, {Codex} and {GPT}-3 {Large} {Language} {Models}}, url = {https://www.proquest.com/working-papers/chat2vis-generating-data-visualisations-via/docview/2774005931/se-2}, abstract = {The field of data visualisation has long aimed to devise solutions for generating visualisations directly from natural language text. Research in Natural Language Interfaces (NLIs) has contributed towards the development of such techniques. However, the implementation of workable NLIs has always been challenging due to the inherent ambiguity of natural language, as well as in consequence of unclear and poorly written user queries which pose problems for existing language models in discerning user intent. Instead of pursuing the usual path of developing new iterations of language models, this study uniquely proposes leveraging the advancements in pre-trained large language models (LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly into code for appropriate visualisations. This paper presents a novel system, Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrates how, with effective prompt engineering, the complex problem of language understanding can be solved more efficiently, resulting in simpler and more accurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMs together with the proposed prompts offer a reliable approach to rendering visualisations from natural language queries, even when queries are highly misspecified and underspecified. This solution also presents a significant reduction in costs for the development of NLI systems, while attaining greater visualisation inference abilities compared to traditional NLP approaches that use hand-crafted grammar rules and tailored models. This study also presents how LLM prompts can be constructed in a way that preserves data security and privacy while being generalisable to different datasets. This work compares the performance of GPT-3, Codex and ChatGPT across a number of case studies and contrasts the performances with prior studies.}, language = {English}, journal = {arXiv.org}, author = {Maddigan, Paula and {Teo Susnjak}}, month = feb, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Language, Chatbots, Business And Economics--Banking And Finance, Human-Computer Interaction, Natural language processing, Natural language, Free form, Priming, Queries, Scientific visualization, Visualization}, }
@misc{chumbalov_fast_2023, title = {Fast {Interactive} {Search} with a {Scale}-{Free} {Comparison} {Oracle}}, url = {http://arxiv.org/abs/2306.01814}, doi = {10.48550/arXiv.2306.01814}, abstract = {A comparison-based search algorithm lets a user find a target item $t$ in a database by answering queries of the form, ``Which of items $i$ and $j$ is closer to $t$?'' Instead of formulating an explicit query (such as one or several keywords), the user navigates towards the target via a sequence of such (typically noisy) queries. We propose a scale-free probabilistic oracle model called $\gamma$-CKL for such similarity triplets $(i,j;t)$, which generalizes the CKL triplet model proposed in the literature. The generalization affords independent control over the discriminating power of the oracle and the dimension of the feature space containing the items. We develop a search algorithm with provably exponential rate of convergence under the $\gamma$-CKL oracle, thanks to a backtracking strategy that deals with the unavoidable errors in updating the belief region around the target. We evaluate the performance of the algorithm both over the posited oracle and over several real-world triplet datasets. We also report on a comprehensive user study, where human subjects navigate a database of face portraits.}, urldate = {2023-06-25}, publisher = {arXiv}, author = {Chumbalov, Daniyar and Klein, Lars and Maystre, Lucas and Grossglauser, Matthias}, month = jun, year = {2023}, note = {arXiv:2306.01814 [cs]}, keywords = {human-computer interaction, information retrieval, machine learning, mentions sympy}, }
@article{braunschweig_five_2023, title = {The {Five} {Barriers} to {Artificial} {Intelligence}}, issn = {0337-307X}, url = {https://www.proquest.com/scholarly-journals/five-barriers-artificial-intelligence/docview/2783968819/se-2?accountid=14542}, abstract = {The launch in late November 2022 of ChatGPT, a conversational agent (chatbot) developed by OpenAI, received a great deal of attention and has prompted a lot of reporting and commentary on the performance of, and advances made by, artificial intelligence (AI). And yet, though progress in this area is indeed impressive, a number of limits remain and the emergence of a totally autonomous, perfect AI still lies in the realms of science-fiction. For example, Bertrand Braunschweig, in charge of the scientific coordination of the French programme 'Confiance.ai', points here to five major 'barriers' that could well hinder its development if insufficient attention is paid to them. He writes of these here: they have to do with trust in AI, its energy consumption, the safety of the systems controlled by AI, human-machine interactions and, lastly, the inhumanity of machines. In closing, he suggests a number of lines of work and research for meeting the challenges posed by the five barriers: improving network architecture, combining digital and symbolic models, increased interdisciplinarity and other measures.}, language = {English}, number = {453}, journal = {Futuribles}, author = {Braunschweig, Bertrand}, month = apr, year = {2023}, note = {Place: Paris Publisher: Futuribles}, keywords = {Artificial intelligence, Attention, Barriers, Safety, Coordination, Energy consumption, Fiction, Human-computer interaction, Social Sciences: Comprehensive Works}, pages = {1}, }
@article{kocaballi_conversational_2023, title = {Conversational {AI}-{Powered} {Design}: {ChatGPT} as {Designer}, {User}, and {Product}}, url = {https://www.proquest.com/working-papers/conversational-ai-powered-design-chatgpt-as/docview/2777167650/se-2}, abstract = {The recent advancements in Large Language Models (LLMs), particularly conversational LLMs like ChatGPT, have prompted changes in a range of fields, including design. This study aims to examine the capabilities of ChatGPT in a human-centered design process. To this end, a hypothetical design project was conducted, where ChatGPT was utilized to generate personas, simulate interviews with fictional users, create new design ideas, simulate usage scenarios and conversations between an imaginary prototype and fictional users, and lastly evaluate user experience. The results show that ChatGPT effectively performed the tasks assigned to it as a designer, user, or product, providing mostly appropriate responses. The study does, however, highlight some drawbacks such as forgotten information, partial responses, and a lack of output diversity. The paper explains the potential benefits and limitations of using conversational LLMs in design, discusses its implications, and suggests directions for future research in this rapidly evolving area.}, language = {English}, journal = {arXiv.org}, author = {Kocaballi, A Baki}, month = feb, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Chatbots, Artificial Intelligence, Business And Economics--Banking And Finance, Human-Computer Interaction, User experience}, }
@article{jalil_chatgpt_2023, title = {{ChatGPT} and {Software} {Testing} {Education}: {Promises} \& {Perils}}, url = {https://www.proquest.com/working-papers/chatgpt-software-testing-education-promises-amp/docview/2774362357/se-2}, abstract = {Over the past decade, predictive language modeling for code has proven to be a valuable tool for enabling new forms of automation for developers. More recently, we have seen the advent of general purpose "large language models", based on neural transformer architectures, that have been trained on massive datasets of human written text spanning code and natural language. However, despite the demonstrated representational power of such models, interacting with them has historically been constrained to specific task settings, limiting their general applicability. Many of these limitations were recently overcome with the introduction of ChatGPT, a language model created by OpenAI and trained to operate as a conversational agent, enabling it to answer questions and respond to a wide variety of commands from end users. The introduction of models, such as ChatGPT, has already spurred fervent discussion from educators, ranging from fear that students could use these AI tools to circumvent learning, to excitement about the new types of learning opportunities that they might unlock. However, given the nascent nature of these tools, we currently lack fundamental knowledge related to how well they perform in different educational settings, and the potential promise (or danger) that they might pose to traditional forms of instruction. As such, in this paper, we examine how well ChatGPT performs when tasked with answering common questions in a popular software testing curriculum. Our findings indicate that ChatGPT can provide correct or partially correct answers in 55.6\% of cases, provide correct or partially correct explanations of answers in 53.0\% of cases, and that prompting the tool in a shared question context leads to a marginally higher rate of correct responses. Based on these findings, we discuss the potential promises and perils related to the use of ChatGPT by students and instructors.}, language = {English}, journal = {arXiv.org}, author = {Jalil, Sajed and {Suzzana Rafi} and LaToza, Thomas D and Moran, Kevin and Lam, Wing}, month = mar, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Chatbots, Learning, Business And Economics--Banking And Finance, Questions, Education, Human-Computer Interaction, Massive data points, Natural language processing, Software Engineering, Software testing, Students}, }
@article{luan_exploring_2023, title = {Exploring the {Cognitive} {Dynamics} of {Artificial} {Intelligence} in the {Post}-{COVID}-19 and {Learning} 3.0 {Era}: {A} {Case} {Study} of {ChatGPT}}, url = {https://www.proquest.com/working-papers/exploring-cognitive-dynamics-artificial/docview/2775126380/se-2}, abstract = {The emergence of artificial intelligence has incited a paradigm shift across the spectrum of human endeavors, with ChatGPT serving as a catalyst for the transformation of various established domains, including but not limited to education, journalism, security, and ethics. In the post-pandemic era, the widespread adoption of remote work has prompted the educational sector to reassess conventional pedagogical methods. This paper is to scrutinize the underlying psychological principles of ChatGPT, delve into the factors that captivate user attention, and implicate its ramifications on the future of learning. The ultimate objective of this study is to instigate a scholarly discourse on the interplay between technological advancements in education and the evolution of human learning patterns, raising the question of whether technology is driving human evolution or vice versa.}, language = {English}, journal = {arXiv.org}, author = {Luan, Lingfei and Lin, Xi and Li, Wenbiao}, month = feb, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Artificial intelligence, Evolution, Chatbots, Learning, Business And Economics--Banking And Finance, Computers and Society, Computation and Language, Education, Human-Computer Interaction}, }
@article{pardos_learning_2023, title = {Learning gain differences between {ChatGPT} and human tutor generated algebra hints}, url = {https://www.proquest.com/working-papers/learning-gain-differences-between-chatgpt-human/docview/2776860659/se-2}, abstract = {Large Language Models (LLMs), such as ChatGPT, are quickly advancing AI to the frontiers of practical consumer use and leading industries to re-evaluate how they allocate resources for content production. Authoring of open educational resources and hint content within adaptive tutoring systems is labor intensive. Should LLMs like ChatGPT produce educational content on par with human-authored content, the implications would be significant for further scaling of computer tutoring system approaches. In this paper, we conduct the first learning gain evaluation of ChatGPT by comparing the efficacy of its hints with hints authored by human tutors with 77 participants across two algebra topic areas, Elementary Algebra and Intermediate Algebra. We find that 70\% of hints produced by ChatGPT passed our manual quality checks and that both human and ChatGPT conditions produced positive learning gains. However, gains were only statistically significant for human tutor created hints. Learning gains from human-created hints were substantially and statistically significantly higher than ChatGPT hints in both topic areas, though ChatGPT participants in the Intermediate Algebra experiment were near ceiling and not even with the control at pre-test. We discuss the limitations of our study and suggest several future directions for the field. Problem and hint content used in the experiment is provided for replicability.}, language = {English}, journal = {arXiv.org}, author = {Pardos, Zachary A and Bhandari, Shreya}, month = feb, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Chatbots, Learning, Business And Economics--Banking And Finance, Computers and Society, Computation and Language, Education, Human-Computer Interaction, Adaptive systems, Algebra, Tutoring}, }
@article{basic_better_2023, title = {Better by you, better than me, chatgpt3 as writing assistance in students essays}, url = {https://www.proquest.com/working-papers/better-you-than-me-chatgpt3-as-writing-assistance/docview/2775126519/se-2}, abstract = {Aim: To compare students' essay writing performance with or without employing ChatGPT-3 as a writing assistant tool. Materials and methods: Eighteen students participated in the study (nine in control and nine in the experimental group that used ChatGPT-3). We scored essay elements with grades (A-D) and corresponding numerical values (4-1). We compared essay scores to students' GPTs, writing time, authenticity, and content similarity. Results: Average grade was C for both groups; for control (2.39, SD=0.71) and for experimental (2.00, SD=0.73). None of the predictors affected essay scores: group (P=0.184), writing duration (P=0.669), module (P=0.388), and GPA (P=0.532). The text unauthenticity was slightly higher in the experimental group (11.87\%, SD=13.45 to 9.96\%, SD=9.81\%), but the similarity among essays was generally low in the overall sample (the Jaccard similarity index ranging from 0 to 0.054). In the experimental group, AI classifier recognized more potential AI-generated texts. Conclusions: This study found no evidence that using GPT as a writing tool improves essay quality since the control group outperformed the experimental group in most parameters.}, language = {English}, journal = {arXiv.org}, author = {Basic, Zeljana and Banovac, Ana and Kruzic, Ivana and Jerkovic, Ivan}, month = feb, year = {2023}, note = {Place: Ithaca Publisher: Cornell University Library, arXiv.org}, keywords = {Writing, Chatbots, Artificial Intelligence, Business And Economics--Banking And Finance, Computers and Society, Human-Computer Interaction, Students, Essays, Similarity}, }
@misc{chulpongsatorn_augmented_2023, title = {Augmented {Math}: {Authoring} {AR}-{Based} {Explorable} {Explanations} by {Augmenting} {Static} {Math} {Textbooks}}, shorttitle = {Augmented {Math}}, url = {http://arxiv.org/abs/2307.16112}, doi = {10.1145/3586183.3606827}, abstract = {We introduce Augmented Math, a machine learning-based approach to authoring AR explorable explanations by augmenting static math textbooks without programming. To augment a static document, our system first extracts mathematical formulas and figures from a given document using optical character recognition (OCR) and computer vision. By binding and manipulating these extracted contents, the user can see the interactive animation overlaid onto the document through mobile AR interfaces. This empowers non-technical users, such as teachers or students, to transform existing math textbooks and handouts into on-demand and personalized explorable explanations. To design our system, we first analyzed existing explorable math explanations to identify common design strategies. Based on the findings, we developed a set of augmentation techniques that can be automatically generated based on the extracted content, which are 1) dynamic values, 2) interactive figures, 3) relationship highlights, 4) concrete examples, and 5) step-by-step hints. To evaluate our system, we conduct two user studies: preliminary user testing and expert interviews. The study results confirm that our system allows more engaging experiences for learning math concepts.}, urldate = {2023-08-03}, author = {Chulpongsatorn, Neil and Lunding, Mille Skovhus and Soni, Nishan and Suzuki, Ryo}, month = jul, year = {2023}, note = {arXiv:2307.16112 [cs]}, keywords = {augmented reality, computer vision, human-computer interaction, mentions sympy}, }
@article{nov_putting_2023, title = {Putting {ChatGPT}'s {Medical} {Advice} to the ({Turing}) {Test}}, url = {https://www.proquest.com/working-papers/putting-chatgpts-medical-advice-turing-test/docview/2768841735/se-2}, doi = {10.1101/2023.01.23.23284735}, abstract = {Importance: Chatbots could play a role in answering patient questions, but patients' ability to distinguish between provider and chatbot responses, and patients' trust in chatbots' functions are not well established. Objective: To assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-provider communication. Design: Survey in January 2023 Participants: A US representative sample of 430 study participants aged 18 and above was recruited on Prolific, a crowdsourcing platform for academic studies. 426 participants filled out the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. 53.2\% of respondents analyzed were women; their average age was 47.1. Exposure(s): Ten representative non-administrative patient-provider interactions were extracted from the EHR. Patients' questions were placed in ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient's question was followed by a provider- or ChatGPT-generated response. Participants were informed that five responses were provider-generated and five were chatbot-generated. Participants were asked, and incentivized financially, to correctly identify the response source. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale of 1-5. Main Outcome(s) and Measure(s): Main outcome: Proportion of responses correctly classified as provider- vs chatbot-generated. Secondary outcomes: Average and standard deviation of responses to trust questions. Results: The correct classification of responses ranged between 49.0\% to 85.7\% for different questions. On average, chatbot responses were correctly identified 65.5\% of the time, and provider responses were correctly distinguished 65.1\% of the time. On average, responses toward patients' trust in chatbots' functions were weakly positive (mean Likert score: 3.4), with lower trust as the health-related complexity of the task in questions increased. Conclusions and Relevance: ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in healthcare. Keywords: AI in Medicine; ChatGPT; Generative AI; Healthcare AI; Turing Test}, language = {English}, journal = {MedRxiv}, author = {Nov, Oded and Singh, Nina and Mann, Devin M}, month = jan, year = {2023}, note = {Place: Cold Spring Harbor Publisher: Cold Spring Harbor Laboratory Press}, keywords = {Chatbots, Health care, Medical Sciences, Patients, Business And Economics--Banking And Finance, Questions, Human-Computer Interaction, Surveys, Computation, Logic}, }
@inproceedings{sowinska_foot-based_2023, address = {New York, NY, USA}, series = {{CHI} {PLAY} {Companion} '23}, title = {Foot-{Based} {Game} {Controller} to {Improve} {Interaction} between {Participants}}, isbn = {9798400700293}, url = {https://doi.org/10.1145/3573382.3616098}, doi = {10.1145/3573382.3616098}, abstract = {Trying to improve the international experience of studying abroad, we focused on a solution that would foster social interactions among students. We created a prototype of a game controller employing interactive floor and vertical display, which would allow players to use their natural body movements and interact with each other during the game. We have designed a high-fidelity prototype of a foot controller for a two-person game and have made a preliminary evaluation of the system.}, booktitle = {Companion {Proceedings} of the {Annual} {Symposium} on {Computer}-{Human} {Interaction} in {Play}}, publisher = {Association for Computing Machinery}, author = {Sowińska, Oliwia and Kubczak, Anna and Dominiak, Julia and Walczak, Natalia and Babout, Laurent}, year = {2023}, note = {event-place: Stratford, ON, Canada}, keywords = {Foot-based interaction, Game controller, Human-computer interaction, Interactive surface, Social interaction, Velostat}, pages = {196--201}, }
@misc{bae_computational_2023, title = {A {Computational} {Design} {Process} to {Fabricate} {Sensing} {Network} {Physicalizations}}, url = {http://arxiv.org/abs/2308.04714}, doi = {10.48550/arXiv.2308.04714}, abstract = {Interaction is critical for data analysis and sensemaking. However, designing interactive physicalizations is challenging as it requires cross-disciplinary knowledge in visualization, fabrication, and electronics. Interactive physicalizations are typically produced in an unstructured manner, resulting in unique solutions for a specific dataset, problem, or interaction that cannot be easily extended or adapted to new scenarios or future physicalizations. To mitigate these challenges, we introduce a computational design pipeline to 3D print network physicalizations with integrated sensing capabilities. Networks are ubiquitous, yet their complex geometry also requires significant engineering considerations to provide intuitive, effective interactions for exploration. Using our pipeline, designers can readily produce network physicalizations supporting selection-the most critical atomic operation for interaction-by touch through capacitive sensing and computational inference. Our computational design pipeline introduces a new design paradigm by concurrently considering the form and interactivity of a physicalization into one cohesive fabrication workflow. We evaluate our approach using (i) computational evaluations, (ii) three usage scenarios focusing on general visualization tasks, and (iii) expert interviews. The design paradigm introduced by our pipeline can lower barriers to physicalization research, creation, and adoption.}, urldate = {2023-08-12}, publisher = {arXiv}, author = {Bae, S. Sandra and Fujiwara, Takanori and Ynnerman, Anders and Do, Ellen Yi-Luen and Rivera, Michael L. and Szafir, Danielle Albers}, month = aug, year = {2023}, note = {arXiv:2308.04714 [cs]}, keywords = {3D printing, design automation, human-computer interaction, mentions sympy, physicalization, tangible interfaces}, }
@article{veitch_systematic_2022, title = {A systematic review of human-{AI} interaction in autonomous ship systems}, volume = {152}, issn = {0925-7535}, url = {https://www.sciencedirect.com/science/article/pii/S0925753522001175}, doi = {10.1016/j.ssci.2022.105778}, abstract = {Automation is increasing in shipping. Advancements in Artificial Intelligence (AI) applications like collision avoidance and computer vision have the potential to augment or take over the roles of ship navigators. However, implementation of AI technologies may also jeopardize safety if done in a way that reduces human control. In this systematic review, we included 42 studies about human supervision and control of autonomous ships. We addressed three research questions (a) how is human control currently being adopted in autonomous ship systems? (b) what methods, approaches, and theories are being used to address safety concerns and design challenges? and (c) what research gaps, regulatory obstacles, and technical shortcomings represent the most significant barriers to their implementation? We found that (1) human operators have an active role in ensuring autonomous ship safety above and beyond a backup role, (2) System-Theoretic Process Analysis and Bayesian Networks are the most common risk assessment tools in risk-based design, and (3) the new role of shore control center operators will require new competencies and training. The field of autonomous ship research is growing quickly. New risks are emerging from increasing interaction with AI systems in safety–critical systems, underscoring new research questions. Effective human-AI interaction design is predicated on increased cross-disciplinary efforts, requiring reconciling productivity with safety (resilience), technical limitations with human abilities and expectations (interaction design), and machine task autonomy with human supervisory control (safety management).}, language = {en}, urldate = {2022-12-08}, journal = {Safety Science}, author = {Veitch, Erik and Alsos, Ole Andreas}, month = aug, year = {2022}, keywords = {Artificial Intelligence, Automation, Bayesian Networks, Human-Computer Interaction, Interaction Design, Marine Navigation, Maritime Autonomous Surface Ships, Resilience Engineering, STPA, Safety, Safety management, Work}, pages = {105778}, }
@inproceedings{10.1145/3502178.3529111, author = {Jeanneret Medina, Maximiliano and Lalanne, Denis and Baudet, C\'{e}dric}, title = {Human-Computer Interaction in Artificial Intelligence for Blind and Vision Impairment: An Interpretative Literature Review Based on Bibliometrics: L’interaction humain-machine en intelligence artificielle pour les aveugles et d\'{e}ficients visuels : Une revue de litt\'{e}rature interpr\'{e}tative fond\'{e}e sur la bibliom\'{e}trie}, year = {2022}, isbn = {9781450391986}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3502178.3529111}, doi = {10.1145/3502178.3529111}, abstract = {The rise of artificial intelligence and particularly machine learning conduct to an emerging landscape of intelligent interactive systems. Such technologies help clinicians to detect diseases from medical imaging, and allow to describe the visual world to people with visual impairment. However, this new technological landscape comes with a set of HCI challenges. To better understand the importance of HCI in AI, we focused on blind and vision impairment as a representative application domain. Using bibliometric techniques, we retained 187 scientific publications organized in three clusters. Our findings show that HCI is absent in research related to medical computer systems but has moderate importance when the aim is to assist BVI in their daily life.}, booktitle = {Adjunct Proceedings of the 33rd Conference on l'Interaction Humain-Machine}, articleno = {3}, numpages = {6}, keywords = {Artificial Intelligence, Bibliom\'{e}trie, Bibliometrics, Blind, D\'{e}ficience Visuelle, Human-Computer Interaction, Intelligence Artificielle, Interaction-Humain Machine, Review, Revue, Visual Impairment}, location = {Namur, Belgium}, series = {IHM '22 Adjunct} }
@inproceedings{uea81726, address = {CZE}, author = {Salvador Medina and Sarah Taylor and Mark Tiede and Alexander Hauptmann and Iain Matthews}, title = {Importance of parasagittal sensor information in tongue motion capture through a diphonic analysis}, year = {2021}, booktitle = {Interspeech 2021}, doi = {10.21437/Interspeech.2021-1732}, pages = {3340--3344}, abstract = {Our study examines the information obtained by adding two parasagittal sensors to the standard midsagittal configuration of an Electromagnetic Articulography (EMA) observation of lingual articulation. In this work, we present a large and phonetically balanced corpus obtained from an EMA recording session of a single English native speaker reading 1899 sentences from the Harvard and TIMIT corpora. According to a statistical analysis of the diphones produced during the recording session, the motion captured by the parasagittal sensors has a low correlation to the midsagittal sensors in the mediolateral direction. We perform a geometric analysis of the lateral tongue by the measure of its width and using a proxy of the tongue's curvature that is computed using the Menger curvature. To provide a better understanding of the tongue sensor motion we present dynamic visualizations of all diphones. Finally, we present a summary of the velocity information computed from the tongue sensor information.}, url = {https://ueaeprints.uea.ac.uk/id/eprint/81726/}, keywords = {tongue,parasagittal,electromagnetic articulography,articulatory analysis,ema,software,signal processing,language and linguistics,human-computer interaction,modelling and simulation} }
@article{vinanzi_collaborative_2021, title = {The collaborative mind: intention reading and trust in human-robot interaction}, volume = {24}, issn = {2589-0042}, shorttitle = {The collaborative mind}, url = {https://www.sciencedirect.com/science/article/pii/S2589004221000985}, doi = {10.1016/j.isci.2021.102130}, abstract = {Robots are likely to become important social actors in our future and so require more human-like ways of assisting us. We state that collaboration between humans and robots is fostered by two cognitive skills: intention reading and trust. An agent possessing these abilities would be able to infer the non-verbal intentions of others and to evaluate how likely they are to achieve their goals, jointly understanding what kind and which degree of collaboration they require. For this reason, we propose a developmental artificial cognitive architecture that integrates unsupervised machine learning and probabilistic models to imbue a humanoid robot with intention reading and trusting capabilities. Our experimental results show that the synergistic implementation of these cognitive skills enable the robot to cooperate in a meaningful way, with the intention reading model allowing a correct goal prediction and with the trust component enhancing the likelihood of a positive outcome for the task.}, language = {en}, number = {2}, urldate = {2023-03-08}, journal = {iScience}, author = {Vinanzi, Samuele and Cangelosi, Angelo and Goerick, Christian}, month = feb, year = {2021}, keywords = {Artificial Intelligence, Human-Centered Computing, Human-Computer Interaction}, pages = {102130}, }
@article{uea81506, volume = {2021}, number = {10}, journal = {Electronic Imaging}, month = {January}, title = {Virtual adversarial training in feature space to improve unsupervised video domain adaptation}, doi = {10.2352/ISSN.2470-1173.2021.10.IPAS-258}, year = {2021}, pages = {258-1--258-6}, issn = {2470-1173}, abstract = {Virtual Adversarial Training has recently seen a lot of success in semi-supervised learning, as well as unsupervised Domain Adaptation. However, so far it has been used on input samples in the pixel space, whereas we propose to apply it directly to feature vectors. We also discuss the unstable behaviour of entropy minimization and Decision-Boundary Iterative Refinement Training With a Teacher in Domain Adaptation, and suggest substitutes that achieve similar behaviour. By adding the aforementioned techniques to the state of the art model TA3N, we either maintain competitive results or outperform prior art in multiple unsupervised video Domain Adaptation tasks.}, author = {Gorpincenko, Artjoms and French, Geoffrey and Mackiewicz, Michal}, keywords = {computer graphics and computer-aided design,computer science applications,human-computer interaction,software,electrical and electronic engineering,atomic and molecular physics, and optics}, url = {https://ueaeprints.uea.ac.uk/id/eprint/81506/} }
@inproceedings{heljakka_playing_2020, address = {New York, NY, USA}, series = {{CHI} {PLAY} '20}, title = {Playing with the {Opposite} of {Uncanny}: {Empathic} {Responses} to {Learning} with a {Companion}-{Technology} {Robot} {Dog} vs. {Real} {Dog}}, isbn = {978-1-4503-7587-0}, url = {https://doi.org/10.1145/3383668.3419900}, doi = {10.1145/3383668.3419900}, abstract = {Social robots are becoming increasingly common in the contexts of education and healthcare. This paper reports on the findings of the first stage of an exploratory study conducted with (n=16) Finnish preschoolers aged 5-7 years. The multidisciplinary study intertwining the areas of early education pedagogics, smart toys and interactive technologies, employed both a commercial robot dog and a real dog to study the potential of these artificial and living entities to support and facilitate social-emotional learning (SEL) through a guided playful learning approach. We performed a research intervention including facilitation, observation and video- recordings of three play sessions organized in March-May 2020. The preliminary findings indicate how guided playing with the robot dog supported SEL through conversation about human relationships, while interaction with the real dog facilitated empathic responses through spontaneous reactions on the animal's behavior. The contribution of our research is an understanding of that a robotic dog more than a living dog may assist in simulating human interaction more than human- animal interaction and is in this way suitable to support playful learning of social-emotional competencies.}, booktitle = {Extended {Abstracts} of the 2020 {Annual} {Symposium} on {Computer}-{Human} {Interaction} in {Play}}, publisher = {Association for Computing Machinery}, author = {Heljakka, Katriina Irja and Ihamäki, Pirita Johanna and Lamminen, Anu Inkeri}, year = {2020}, note = {event-place: Virtual Event, Canada}, keywords = {child-robot interaction, emotional intelligence, human-animal interaction, human-computer interaction, playful learning, robot toys, social robotics}, pages = {262--266}, }
@inproceedings{uea75123, year = {2020}, doi = {10.1145/3388176.3388183}, month = {April}, author = {Benjamin Strickson and Beatriz De La Iglesia}, title = {Legal Judgement Prediction for UK Courts}, pages = {204--209}, abstract = {Legal Judgement Prediction (LJP) is the task of automatically predicting the outcome of a court case given only the case document. During the last five years researchers have successfully attempted this task for the supreme courts of three jurisdictions: the European Union, France, and China. Motivation includes the many real world applications including: a prediction system that can be used at the judgement drafting stage, and the identification of the most important words and phrases within a judgement. The aim of our research was to build, for the first time, an LJP model for UK court cases. This required the creation of a labelled data set of UK court judgements and the subsequent application of machine learning models. We evaluated different feature representations and different algorithms. Our best performing model achieved: 69.05\% accuracy and 69.02 F1 score. We demonstrate that LJP is a promising area of further research for UK courts by achieving high model performance and the ability to easily extract useful features.}, url = {https://ueaeprints.uea.ac.uk/id/eprint/75123/}, keywords = {legal judgement prediction,feature extraction,legal calculus,human-computer interaction,computer networks and communications,computer vision and pattern recognition,software} }
@article{schulz_computational_2020, title = {Computational {Psychiatry} for {Computers}}, volume = {23}, issn = {2589-0042}, url = {https://www.sciencedirect.com/science/article/pii/S258900422030969X}, doi = {10.1016/j.isci.2020.101772}, abstract = {Computational psychiatry is a nascent field that attempts to use multi-level analyses of the underlying computational problems that we face in navigating a complex, uncertain and changing world to illuminate mental dysfunction and disease. Two particular foci of the field are the costs and benefits of environmental adaptivity and the danger and necessity of heuristics. Here, we examine the extent to which these foci and others can be used to study the actual and potential flaws of the artificial computational devices that we are increasingly inventing and empowering to navigate this very same environment on our behalf.}, language = {en}, number = {12}, urldate = {2023-03-08}, journal = {iScience}, author = {Schulz, Eric and Dayan, Peter}, month = dec, year = {2020}, keywords = {Computer Science, Human-Computer Interaction, Psychology}, pages = {101772}, }
@inproceedings{uea73655, address = {USA}, month = {October}, author = {Graham Finlayson and Yuteng Zhu}, title = {An improved optimization method for finding a color filter to make a camera more colorimetric}, year = {2020}, booktitle = {Electronic Imaging 2020}, doi = {10.2352/ISSN.2470-1173.2020.15.COLOR-163}, pages = {163-1--163-6}, abstract = {Recently, an iterative optimization method was proposed that determines the spectral transmittance of a color filter which, when placed in front of a camera, makes the camera more colorimetric [1]. However, the performance of this method depends strongly on the filter (guess) that initializes the optimization. In this paper, we develop a simple extension to the optimization where we systematically sample the set of possible initial filters and for each initialization solve for the best refinement. Experiments demonstrate that improving the initialization step can result in the effective ``camera+filter'' imaging system being much more colorimetric. Moreover, the filters we design are smoother than previously reported (which makes them easier to manufacture).}, url = {https://ueaeprints.uea.ac.uk/id/eprint/73655/}, keywords = {filter design,optimisation,colorimetric,sampling method,software,atomic and molecular physics, and optics,human-computer interaction,electrical and electronic engineering,computer science applications,computer graphics and computer-aided design} }
@inproceedings{inal_perspectives_2020, address = {New York, NY, USA}, series = {{NordiCHI} '20}, title = {Perspectives and {Practices} of {Digital} {Accessibility}: {A} {Survey} of {User} {Experience} {Professionals} in {Nordic} {Countries}}, isbn = {978-1-4503-7579-5}, shorttitle = {Perspectives and {Practices} of {Digital} {Accessibility}}, url = {https://doi.org/10.1145/3419249.3420119}, doi = {10.1145/3419249.3420119}, abstract = {User experience (UX) professionals are key actors in promoting inclusion in the digital society. They are responsible for ensuring that web pages and digital services are in line with regulatory frameworks and that digital accessibility for all is incorporated into their designs. Still, there are few dedicated professionals that specialize only in accessibility. In this paper, we explore how UX professionals in Nordic countries view and practice digital accessibility. We collected data from 167 UX professionals in Denmark, Finland, Norway, and Sweden using an online survey. Our results show that, generally, the UX professionals consider digital accessibility to be important and their organizations include accessibility in their projects. However, they spend limited work time on accessibility issues and have limited knowledge about accessibility guidelines and standards. Their main challenges in creating accessible systems are related to time constraints, lack of training, and cost.}, urldate = {2021-04-13}, booktitle = {Proceedings of the 11th {Nordic} {Conference} on {Human}-{Computer} {Interaction}: {Shaping} {Experiences}, {Shaping} {Society}}, publisher = {Association for Computing Machinery}, author = {Inal, Yavuz and Guribye, Frode and Rajanen, Dorina and Rajanen, Mikko and Rost, Mattias}, month = oct, year = {2020}, keywords = {Accessibility Evaluation, Digital Accessibility, Human-Computer Interaction, UX, User Experience, User Experience Professionals, Web Accessibility}, pages = {1--11}, }
@article{Kwon2020, abstract = {Background: Diet-tracking mobile apps have gained increased interest from both academic and clinical fields. However, quantity-focused diet tracking (eg, calorie counting) can be time-consuming and tedious, leading to unsustained adoption. Diet quality—focusing on high-quality dietary patterns rather than quantifying diet into calories—has shown effectiveness in improving heart disease risk. The Healthy Heart Score (HHS) predicts 20-year cardiovascular risks based on the consumption of foods from quality-focused food categories, rather than detailed serving sizes. No studies have examined how mobile health (mHealth) apps focusing on diet quality can bring promising results in health outcomes and ease of adoption. Objective: This study aims to design a mobile app to support the HHS-informed quality-focused dietary approach by enabling users to log simplified diet quality and view its real-time impact on future heart disease risks. Users were asked to log food categories that are the main predictors of the HHS. We measured the app’s feasibility and efficacy in improving individuals’ clinical and behavioral factors that affect future heart disease risks and app use. Methods: We recruited 38 participants who were overweight or obese with high heart disease risk and who used the app for 5 weeks and measured weight, blood sugar, blood pressure, HHS, and diet score (DS)—the measurement for diet quality—at baseline and week 5 of the intervention. Results: Most participants (30/38, 79%) used the app every week and showed significant improvements in DS (baseline: mean 1.31, SD 1.14; week 5: mean 2.36, SD 2.48; 2-tailed t test t29=−2.85; P=.008) and HHS (baseline: mean 22.94, SD 18.86; week 4: mean 22.15, SD 18.58; t29=2.41; P=.02) at week 5, although only 10 participants (10/38, 26%) checked their HHS risk scores more than once. Other outcomes, including weight, blood sugar, and blood pressure, did not show significant changes. Conclusions: Our study showed that our logging tool significantly improved dietary choices. Participants were not interested in seeing the HHS and perceived logging diet categories irrelevant to improving the HHS as important. We discuss the complexities of addressing health risks and quantity- versus quality-based health monitoring and incorporating secondary behavior change goals that matter to users when designing mHealth apps.}, author = {B C Kwon and C VanDam and S E Chiuve and H W Choi and P Entler and P.-N. Tan and J Huh-Yoo}, doi = {10.2196/21733}, issn = {22915222}, issue = {12}, journal = {JMIR mHealth and uHealth}, keywords = {CVD,Diet monitoring,Diet tracking,Food tracking,Health risk communication,Heart disease risk,Human-computer interaction,MHealth,Mobile phone,User study}, title = {Improving heart disease risk through quality-focused diet logging: Pre-post study of a diet quality tracking app}, volume = {8}, year = {2020}, }
@article{schneider_designing_2019, title = {Designing for empowerment – {An} investigation and critical reflection}, volume = {61}, issn = {2196-7032}, url = {https://www.degruyter.com/document/doi/10.1515/itit-2018-0036/html}, doi = {10.1515/itit-2018-0036}, abstract = {Technology bears the potential to empower people – to help them tackle challenges they would otherwise give up on or not even try, to make experiences possible that they did not have access to before. One type of such technologies – the application area of the thesis presented here – is health and wellbeing technology (HWT), such as digital health records, physical activity trackers, or digital fitness coach applications. Researchers and companies alike often claim that HWTs empower people to live healthier and happier lives. However, there is reason to challenge and critically reflect on these claims and underlying assumptions as more and more researchers are finding that technologies described as empowering turn out to be “disempowering”. This critical reflection is the starting point of the thesis presented here: Can HWTs really empower people in their everyday lives? If so, how can we design for empowerment? In my cumulative dissertation, I combine studies on existing HWTs, such as patient-controlled electronic health records and personalized mobile fitness coaches with the development of novel prototypes such as transparent digital fitness coaches that communicate their rationale to the user. By reflecting on these case studies, I come to revisit the sometimes washed-out meaning of “empowerment” in “empowering technologies”; I introduce a framework to establish conceptual clarity; and I suggest three principles to design for empowerment based on my own work and the Capability Approach by Sen and Nussbaum that aim to inform and inspire research on HWTs and beyond.}, language = {en}, number = {1}, urldate = {2022-03-16}, journal = {it - Information Technology}, author = {Schneider, Hanna}, month = feb, year = {2019}, note = {Publisher: De Gruyter Oldenbourg}, keywords = {Empowerment, health and wellbeing technology, human-computer interaction}, pages = {59--65}, }
@inproceedings{rutta_comic-based_2019, address = {New York}, title = {Comic-based {Digital} {Storytelling} for {Self}-expression: an {Exploratory} {Case}-{Study} with {Migrants}}, isbn = {978-1-4503-7162-9}, shorttitle = {Comic-based {Digital} {Storytelling} for {Self}-expression}, url = {https://dl.acm.org/doi/pdf/10.1145/3328320.3328400}, doi = {10.1145/3328320.3328400}, abstract = {In this paper, we report on an exploratory case study investigation of a digital storytelling intervention conducted with young adult male migrants in a reception center in Italy. The investigation explores how comic-based storytelling supported by a digital tool, named Communics, can facilitate migrants in producing narratives for self-expression and support them in reflecting on real-life examples of discrimination. In a first phase, focus groups with NGO operators were organized for negotiating the goals and the boundaries of the intervention. Then, semi-structured interviews with migrants were conducted to define both graphical and textual content on which to base the storytelling activity. In a second phase, we conducted a pilot study with young adult male migrants aimed at investigating the storytelling experience as a reflective process. The empirical findings provide evidence that digital storytelling based on comics can facilitate the narration practice and indications for future development of Communics. We also describe the main challenges experienced in undertaking fieldwork and in involving the migrant community.}, language = {English}, booktitle = {9th {International} {Conference} on {Communities} \& {Technologies} ({C}\&{T})}, publisher = {Association for Computing Machinery}, author = {Rutta, Carolina Beniamina and Schiavo, Gianluca and Zancanaro, Massimo}, editor = {Cech, F. and Tellioglu, H.}, year = {2019}, note = {WOS:000482174900003}, keywords = {Case study, Counterstory, Digital Storytelling, Human-computer interaction, Migrants, Self-expression}, pages = {9--13}, }
@book{pattanaik_framework_2019, title = {Framework for Peer-to-Peer Data Sharing over Web Browsers}, year = {2019}, series = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}, volume = {11814 LNCS}, keywords = {Data ownership,Decentralization,Human-computer interaction,Peer-to-peer,Security,Social web,Web apps,WebRTC}, abstract = {The Web was originally designed to be a decentralized environment where everybody could share a common information space to communicate and share information. However, over the last decade, the Web has become increasingly centralized. This has led to serious concerns about data ownership and misuse of personal data. While there are several approaches to solve these problems, none of them provides a simple and extendable solution. To this end, in this paper, we present an application-independent, browser-based framework for sharing data between applications over peer-to-peer networks. The framework aims to empower end-users with complete data ownership, by allowing them to store shareable web content locally, and by enabling content sharing without the risk of data theft or monitoring. We present the functional requirements, implementation details, security aspects, and limitations of the proposed framework. And finally, discuss the challenges that we encountered while designing the framework; especially, why it is difficult to create a server-less application for the Web.}, author = {Pattanaik, V. and Sharvadze, I. and Draheim, D.} }
@inproceedings{jonell_crowdsourcing_2019, title = {Crowdsourcing a Self-Evolving Dialog Graph}, year = {2019}, keywords = {crowdsourcing,datasets,dialog systems,human-computer interaction}, url = {https://doi.org/10.1145/3342775.3342790}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, series = {CUI '19}, author = {Jonell, Patrik and Fallgren, Per and Doğan, Fethiye Irmak and Lopes, José and Wennberg, Ulme and Skantze, Gabriel}, doi = {10.1145/3342775.3342790}, booktitle = {Proceedings of the 1st International Conference on Conversational User Interfaces} }
@article{crompton_post_2016, title = {To post, or not to post, that is the question: {Teacher} candidates' social networking decisions and professional development needs}, volume = {24}, doi = {10.1016/j.chb.2013.01.049}, number = {3}, journal = {Journal of Technology and Teacher Education}, author = {Crompton, Helen and Rippard, Kelly and Sommerfeldt, Jody}, year = {2016}, keywords = {Arts and Humanities (miscellaneous), Facebook, General Psychology, Human-Computer Interaction, Instagram, Snapchat, Twitter, dpd, preservice teachers, privacy, social media, teacher professional development}, pages = {257--279} }
@Article{Leftheriotis_2016, author = {Leftheriotis, Ioannis and Chorianopoulos, Konstantinos and Jaccheri, Letizia}, title = {{Design and implement chords and personal windows for multi-user collaboration on a large multi-touch vertical display}}, journal = {Human-centric Computing and Information Sciences}, year = {2016}, volume = {6}, number = {1}, pages = {14}, month = {dec}, abstract = {Co-located collaboration on large vertical screens has become technically feasible, but users are faced with increased effort, or have to wear intrusive personal identifiers. Previous research on co-located collaboration has assumed that all users perform exactly the same task (e.g., moving and resizing photos), or that they negotiate individual actions in turns. However, there is limited user interface software that supports simultaneous performance of individual actions during shared tasks (Fig. 1a). As a remedy, we have introduced multi-touch chords (Fig. 1b) and personal action windows (Fig. 1c) for co-located collaboration on a large multi-touch vertical display. Instead of selecting an item in a fixed menu by reaching for it, users work simultaneously on shared tasks by means of personal action windows, which are triggered by multi-touch chords performed anywhere on the display. In order to evaluate the proposed technique with users, we introduced an experimental task, which stands for the group dynamics that emerge during shared tasks on a large display. A grounded theory analysis of users' behaviour provided insights into established co-located collaboration topics, such as conflict resolution strategies and space negotiation. The main contribution of this work is the design and implementation of a novel seamless identification and interaction technique that supports diverse multi-touch interactions by multiple users: multi-touch chord interaction along with personal action windows.}, doi = {10.1186/s13673-016-0070-5}, url_Paper={Leftheriotis_2016.pdf}, issn = {2192-1962}, keywords = {Chords,Collaboration,Multi-touch,Multi-user,Personal windows,collaboration,human-computer interaction,large screen,software technology,surface,ubiquitous computing}, mendeley-tags = {collaboration,human-computer interaction,software technology,surface,ubiquitous computing}, publisher = {Springer Berlin Heidelberg}, url = {http://hcis-journal.springeropen.com/articles/10.1186/s13673-016-0070-5}, }
@article{Pereira2015a, title = {A User-Developed 3-D Hand Gesture Set for Human–Computer Interaction}, author = {Pereira, Anna and Wachs, Juan P. and Park, Kunwoo and Rempel, David}, journal = {Human Factors: The Journal of the Human Factors and Ergonomics Society}, year = {2015}, month = jun, volume = {57}, number = {4}, pages = {607--621}, doi = {10.1177/0018720814559307}, url = {http://journals.sagepub.com/doi/10.1177/0018720814559307}, keywords = {HCI, fatigue, gesture, human-computer interaction, usability}, abstract = {Objective: The purpose of this study was to develop a lexicon for 3-D hand gestures for common human-computer interaction (HCI) tasks by considering usability and effort ratings. Background: Recent technologies create an opportunity for developing a free-form 3-D hand gesture lexicon for HCI. Method: Subjects (N = 30) with prior experience using 2-D gestures on touch screens performed 3-D gestures of their choice for 34 common HCI tasks and rated their gestures on preference, match, ease, and effort. Videos of the 1,300 generated gestures were analyzed for gesture popularity, order, and response times. Gesture hand postures were rated by the authors on biomechanical risk and fatigue. Results: A final task gesture set is proposed based primarily on subjective ratings and hand posture risk. The different dimensions used for evaluating task gestures were not highly correlated and, therefore, measured different properties of the task-gesture match. Application: A method is proposed for generating a user-developed 3-D gesture lexicon for common HCIs that involves subjective ratings and a posture risk rating for minimizing arm and hand fatigue.}, }
@inproceedings{Bozkurt2015a, author = {Bozkurt, Elif and Khaki, Hossein and Keçeci, Sinan and Türker, B. Berker and Yemez, Yücel and Erzin, Engin}, booktitle = {SIU: Sinyal İşleme ve İletişim Uygulamaları Kurultayı}, isbn = {9781479948741}, keywords = {affective state tracking, gesticulation, human-computer interaction, speech, virtual character animation}, language = {Turkish}, title = {{JESTKOD Veritabanı: İkili İletişim Analizi}}, year = {2015} }
@article{Gomez2015, title = {User study and integration of assistive technologies for people with cognitive disabilities in their daily life activities}, author = {Gomez, J. and Montoro, G.}, journal = {Journal of Ambient Intelligence and Smart Environments}, year = {2015}, volume = {7}, number = {3}, pages = {389--390}, keywords = {Assistive technologies, cognitive disabilities, human-computer interaction, mobile computing}, abstract = {The present article summarizes the doctoral dissertation of Javier Gomez.}, }
@article{sheldon_understanding_2015, title = {Understanding students' reasons and gender differences in adding faculty as {Facebook} friends}, volume = {53}, issn = {0747-5632}, doi = {10.1016/j.chb.2015.06.043}, journal = {Computers in Human Behavior}, author = {Sheldon, Pavica}, year = {2015}, keywords = {Arts and Humanities (miscellaneous), General Psychology, Human-Computer Interaction, social media, student-faculty interaction, student-teacher relationship}, pages = {58--62} }
@article{scherer_revisiting_2015, title = {Revisiting teachers' computer self-efficacy: {A} differentiated view on gender differences}, volume = {53}, issn = {0747-5632}, doi = {10.1016/j.chb.2015.06.038}, journal = {Computers in Human Behavior}, author = {Scherer, Ronny and Siddiq, Fazilat}, year = {2015}, keywords = {Competencia Digital, Human-Computer Interaction, Psychology(all), autopercepción, competencia digital docente, digital literacy, self-assessment, self-efficacy, self-perception}, pages = {48--57} }
@InProceedings{vns-mlgd-2015, Title = {The Application of Machine Learning to Problems in Graph Drawing -- A Literature Review}, Author = {Raissa dos Santos Vieira and Hugo Alexandre Dantas do Nascimento and Wanderson Barcelos da Silva}, Booktitle = {Proc. of The Seventh International Conference on Information, Process, and Knowledge Management (eKNOW 2015)}, Year = {2015}, Pages = {112--118}, Publisher = {IARIA}, Series = {eKNOW 2015}, Abstract = {Graph drawing, as a research field, is concerned with the visualization of information modeled in the form of graphs. The present paper is a literature review that identifies the state-of-the-art in applying machine learning techniques to problems in graph drawing. We focused on machine learning strategies that build up and represent knowledge about how to draw a graph. Surprisingly, only a few pieces of research can be found about this subject. We classified them in two main groups: the ones that extract knowledge from the user by human-computer interaction and those that are not based on data directly gathered from users. The study of these methods shows that there is still much to research and to develop regarding the application of machine learning to graph drawing. We suggest directions for future research on this area.}, Keywords = {Graph Drawing, Human-Computer Interaction, Machine Learning}, Owner = {hugo}, Qualis = {B4}, Timestamp = {2015.11.15}, Url = {http://www.thinkmind.org/index.php?view=instance&instance=eKNOW+2015 http://www.thinkmind.org/index.php?view=article&articleid=eknow_2015_5_30_60124} }
@article{Pimentel2015, title = {A new tool for the automatic detection of muscular voluntary contractions in the analysis of electromyographic signals}, author = {Pimentel, Angela and Gomes, Ricardo and Olstad, Bjørn Harald and Gamboa, Hugo}, journal = {Interacting with Computers}, year = {2015}, volume = {27}, number = {5}, pages = {492--499}, keywords = {Activation Detection, Electromyography, Human-computer interaction, Interactive Tool, Muscular voluntary contraction, Signal Processing}, abstract = {Electromyographic (EMG) signals play a key role in many clinical and biomedical applications. They can be used for identifying patients with muscular disabilities, assessing lower-back pain, kinesiology and motor control. There are three common applications of the EMG signal: (1) to determine the activation timing of the muscle; (2) to estimate the force produced by the muscle and (3) to analyze muscular fatigue through analysis of the frequency spectrum of the signal. We have developed an EMG tool that was incorporated in an existing web-based biosignal acquisition and processing framework. This tool can be used on a post-processing environment and provides not only frequency and time parameters, but also an automatic detection of starting and ending times for muscular voluntary contractions using a threshold-based algorithm with the inclusion of the Teager-Kaiser energy operator. The algorithm for the muscular voluntary contraction detection can also be reported after a real-time acquisition, in order to discard possible outliers and simultaneously compare activation times in different muscles. This tool covers all known applications and allows a careful and detailed analysis of the EMG signal for both clinicians and researchers. The detection algorithm works without user interference and is also user-independent. It manages to detect muscular activations in an interactive process. The user simply has to select the signal's time interval as input, and the outcomes are provided afterwards.}, }
@article{mao_social_2014, title = {Social media for learning: {A} mixed methods study on high school students' technology affordances and perspectives}, volume = {33}, issn = {0747-5632}, doi = {10.1016/j.chb.2014.01.002}, journal = {Computers in Human Behavior}, author = {Mao, Jin}, year = {2014}, keywords = {Arts and Humanities (miscellaneous), General Psychology, Human-Computer Interaction, affordances, benefits, social media}, pages = {213--223} }
@inproceedings{parker2014data, title={Data visualisation trends in mobile augmented reality applications}, author={Parker, Callum and Tomitsch, Martin}, booktitle={Proceedings of the 7th International Symposium on Visual Information Communication and Interaction}, pages={228}, year={2014}, organization={ACM} }
@book{laurel_computers_2014, address = {Upper Saddle River, NJ}, edition = {Second edition}, title = {Computers as theatre}, isbn = {978-0-321-91862-8}, publisher = {Addison-Wesley}, author = {Laurel, Brenda}, year = {2014}, keywords = {Human-computer interaction, User interfaces (Computer systems), engagement} }
@Article{Avlonitis_2014, author = {Avlonitis, Markos and Chorianopoulos, Konstantinos}, title = {{Video Pulses: User-based modeling of interesting video segments}}, journal = {Advances in Multimedia}, year = {2014}, pages = {1--9}, abstract = {We present a user-based method that detects regions of interest within a video, in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the Web.}, doi = {10.1155/2014/712589}, url_Paper={Avlonitis_2014.pdf}, keywords = {analytics,human-computer interaction,implicit,information retrieval,interaction,multimedia,region of interest,semantics,signal processing,time-series,user modeling,video,video lecture}, mendeley-tags = {analytics,human-computer interaction,information retrieval,multimedia,semantics,signal processing,time-series,video lecture}, url = {http://www.hindawi.com/journals/am/2014/712589/}, }
@inproceedings{carter_paradigms_2014, address = {New York, NY, USA}, series = {{CHI} {PLAY} '14}, title = {Paradigms of games research in {HCI}: a review of 10 years of research at {CHI}}, isbn = {978-1-4503-3014-5}, shorttitle = {Paradigms of games research in {HCI}}, url = {https://doi.org/10.1145/2658537.2658708}, doi = {10.1145/2658537.2658708}, abstract = {In this paper we argue that games and play research in the field of Human-Computer Interaction can usefully be understood as existing within 4 distinct research paradigms. We provide our rationale for developing these paradigms and discuss their significance in the context of the inaugural CHI Play conference.}, urldate = {2023-03-01}, booktitle = {Proceedings of the first {ACM} {SIGCHI} annual symposium on {Computer}-human interaction in play}, publisher = {Association for Computing Machinery}, author = {Carter, Marcus and Downs, John and Nansen, Bjorn and Harrop, Mitchell and Gibbs, Martin}, month = oct, year = {2014}, keywords = {game studies, human-computer interaction, paradigms}, pages = {27--36}, }
@Article{Gkonela_2014, author = {Gkonela, Chrysoula and Chorianopoulos, Konstantinos}, title = {{VideoSkip: event detection in social web videos with an implicit user heuristic}}, journal = {Multimedia Tools and Applications}, year = {2014}, volume = {69}, number = {2}, pages = {383--396}, month = {feb}, abstract = {In this paper, we present a user-based event detection method for social web videos. Previous research in event detection has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the contents of a video. There are few user-centric approaches that have considered either search keywords, or external data such as comments, tags, and annotations. Moreover, some of the user-centric approaches imposed an extra effort to the users in order to capture required information. In this research, we are describing a method for the analysis of implicit users' interactions with a web video player, such as pause, play, and thirty-seconds skip or rewind. The results of our experiments indicated that even the simple user heuristic of local maxima might effectively detect the same video-events, as indicated manually. Notably, the proposed technique was more accurate in the detection of events that have a short duration, because those events motivated increased user interaction in video hot-spots. The findings of this research provide evidence that we might be able to infer semantics about a piece of unstructured data just from the way people actually use it.}, doi = {10.1007/s11042-012-1016-1}, url_Paper={Gkonela_2014.pdf}, issn = {1380-7501}, keywords = {Event detection,Experiment,Semantics,User-based,Video,Video lectures,Web,analytics,human-computer interaction,information retrieval,interaction,multimedia,social media,software technology,time-series}, mendeley-tags = {Video lectures,analytics,human-computer interaction,information retrieval,interaction,multimedia,social media,software technology,time-series}, publisher = {Springer Netherlands}, url = {http://www.springerlink.com/content/c1m1565463117216/}, }
@InProceedings{Armeni_2013, author = {Armeni, Iro and Chorianopoulos, Konstantinos}, title = {{Pedestrian navigation and shortest path: Preference versus distance}}, booktitle = {Workshop Proceedings of the 9th International Conference on Intelligent Environments IE'13, July 16-19, 2013, Athens, Greece}, year = {2013}, pages = {647--652}, publisher = {IOS}, abstract = {Contemporary digital maps provide an option for pedestrian navigation, but they do not account for subjective preferences in the calculation of the shortest path, which is usually provided in terms of absolute distance. For this purpose, we performed a controlled experiment with local pedestrians, who were asked to navigate from point A to point B in a fast manner. The pedestrians' routes were recorded by means of a GPS device and then plotted on a map for comparison with suggested itinerary from a digital map. We found that the preferred shortest path is significantly different to the suggested one. Notably, the preferred paths were slightly longer than the suggested, but there was no effect in the trip duration because there were fewer obstacles, such as cars. Since many pedestrians employ GPS enabled devices, the findings of this research inform the development of mobile applications and the design of new subjective map layers for city dwellers.}, doi = {10.3233/978-1-61499-286-8-647}, url_Paper={Armeni_2013.pdf}, keywords = {cartography,citizen science,collective,community,experiment,gps,grass roots,human-computer interaction,map,multimedia,participatory,pedestrian,preference,route,shortest path,trajectory,ubiquitous computing,well-being}, mendeley-tags = {cartography,citizen science,community,grass roots,human-computer interaction,multimedia,participatory,route,trajectory,ubiquitous computing,well-being}, url = {http://ebooks.iospress.nl/publication/33920}, }
@InCollection{Chorianopoulos_2013a, author = {Chorianopoulos, Konstantinos and Shamma, David Ayman and Kennedy, Lyndon}, title = {{Social Video Retrieval: Research Methods in Controlling, Sharing, and Editing of Web Video}}, booktitle = {Social Media Retrieval}, publisher = {Springer}, year = {2013}, editor = {Ramzan, Naeem and van Zwol, Roelof and Lee, Jong-Seok and Cl{\"{u}}ver, Kai and Hua, Xian-Sheng}, pages = {3--22}, abstract = {Content-based video retrieval has been a very efficient technique with new video content, but it has not regarded the increasingly dynamic interactions between users and content. We present a comprehensive survey on user-based techniques and instrumentation for social video retrieval researchers. Community-based approaches suggest there is much to learn about an unstructured video just by analyzing the dynamics of how it is being used. In particular, we explore three pillars of online user activity with video content: 1) Seeking patterns within a video is linked to interesting video segments, 2) Sharing patterns between users indicate that there is a correlation between social activity and popularity of a video, and 3) Editing of live events is automated through the synchronization of audio across multiple viewpoints of the same event. Moreover, we present three complementary research methods in social video retrieval: Experimental replication of user activity data and signal analysis, data mining and prediction on natural user activity data, and hybrid techniques that combine robust content-based approaches with crowd sourcing of user generated content. Finally, we suggest further research directions in the combination of richer user- and content-modeling, because it provides an attractive solution to the personalization, navigation, and social consumption of videos.}, url_Paper={Chorianopoulos_2013a.pdf}, isbn = {978-1-4471-4555-4}, keywords = {human-computer interaction,information retrieval,media technology,methodology,multimedia,survey,video}, mendeley-tags = {human-computer interaction,information retrieval,media technology,methodology,multimedia,survey,video}, url = {http://link.springer.com/chapter/10.1007/978-1-4471-4555-4{\_}1}, doi = {10.1007/978-1-4471-4555-4_1}, }
@article{lantz-andersson_crossing_2013, title = {Crossing boundaries in {Facebook}: {Students}' framing of language learning activities as extended spaces}, volume = {8}, issn = {1556-1607}, doi = {10.1007/s11412-013-9177-0}, number = {3}, journal = {International Journal of Computer-Supported Collaborative Learning}, author = {Lantz-Andersson, Annika and Vigmo, Sylvi and Bowen, Rhonwen}, year = {2013}, keywords = {Education, Human-Computer Interaction, social media}, pages = {293--312} }
@article{junco_inequalities_2013, title = {Inequalities in {Facebook} use}, volume = {29}, issn = {0747-5632}, doi = {10.1016/j.chb.2013.05.005}, number = {6}, journal = {Computers in Human Behavior}, author = {Junco, Reynol}, year = {2013}, keywords = {Arts and Humanities (miscellaneous), Facebook, General Psychology, Human-Computer Interaction, social media}, pages = {2328--2336} }
@book{brabham_crowdsourcing_2013, address = {Cambridge, MA}, series = {The {MIT} {Press} essential knowledge series}, title = {Crowdsourcing}, isbn = {978-0-262-51847-5}, publisher = {The MIT Press}, author = {Brabham, Daren C.}, year = {2013}, keywords = {Crowdsourcing, Human-computer interaction}, }
@inproceedings{Kraemer2013, title = {Domain-specific languages for agile urban policy modelling}, author = {Krämer, M. and Ludlow, D. and Khan, Z.}, booktitle = {Proceedings - 27th European Conference on Modelling and Simulation, ECMS 2013}, year = {2013}, keywords = {Domain-specific languages, Human-computer interaction, Smart cities, Urban planning, Urban policy modelling}, abstract = {In this paper we present a new approach of performing urban policy modelling and making with the help of ICT enabled tools. We present a complete policy cycle that includes creating policy plans, securing stakeholders and public engagement, implementation, monitoring, and evaluating a particular policy model. ICT enabled tools can be deployed at various stages in this cycle, but they require an intuitive interface which can be supported by domain-specific languages (DSLs) as the means to express policy modelling aspects such as computational processes and computer-readable policy rules in the words of the domain expert. In order to evaluate the use of such languages, we present a real-world scenario from the urbanAPI project. We describe how DSLs for this scenario would look like. Finally, we discuss strengths and limitations of our approach as well as lessons learnt.}, }
@article{tess_role_2013, title = {The role of social media in higher education classes (real and virtual) -- {A} literature review}, volume = {29}, issn = {0747-5632}, doi = {10.1016/j.chb.2012.12.032}, number = {5}, journal = {Computers in Human Behavior}, author = {Tess, Paul A.}, year = {2013}, keywords = {Human-Computer Interaction, Psychology(all), facebook, social media, twitter}, pages = {A60--A68} }
@inproceedings{Lai2012, title = {A gesture-driven computer interface using Kinect}, author = {Lai, Kam and Konrad, Janusz and Ishwar, Prakash}, booktitle = {2012 IEEE Southwest Symposium on Image Analysis and Interpretation}, year = {2012}, pages = {185--188}, url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6202484}, keywords = {Cameras, Covariance matrix, Euclidean distance metric, Gesture recognition, Human action recognition, Human-computer interaction, Humans, Joints, Kinect SDK, Kinect camera, Real time systems, Vectors, automatic human action recognition, close-range gesture recognition, covariance analysis, feature covariances, feature vectors, gesture-driven computer interface, hand gesture recognition, human computer interaction, infrared imaging, log-Euclidean metric, nearest-neighbor classification, pattern classification, skeleton model, storage temporal, temporal misalignment, video signal processing}, abstract = {Automatic recognition of human actions from video has been studied for many years. Although still very difficult in uncontrolled scenarios, it has been successful in more restricted settings (e.g., fixed viewpoint, no occlusions) with recognition rates approaching 100%. However, the best-performing methods are complex and computationally-demanding and thus not well-suited for real-time deployments. This paper proposes to leverage the Kinect camera for close-range gesture recognition using two methods. Both methods use feature vectors that are derived from the skeleton model provided by the Kinect SDK in real-time. Although both methods perform nearest-neighbor classification, one method does this in the space of features using the Euclidean distance metric, while the other method does this in the space of feature covariances using a log-Euclidean metric. Both methods recognize 8 hand gestures in real time achieving correct-classification rates of over 99% on a dataset of 20 subjects but the method based on Euclidean distance requires feature-vector collections to be of the same size, is sensitive to temporal misalignment, and has higher computation and storage requirements.}, }
@book{cunningham_experimental_2012, address = {Boca Raton, FL}, series = {An {A} {K} {Peters} book}, title = {Experimental design: from user studies to psychophysics}, isbn = {978-1-56881-468-1}, shorttitle = {Experimental design}, publisher = {CRC Press}, author = {Cunningham, Douglas W. and Wallraven, Christian}, year = {2012}, keywords = {Computer science, Experimental design, Experiments, Human-computer interaction, Psychophysics} }
@article{ramirez_landmarke_2012, title = {Landmarke: {An} ad hoc deployable ubicomp infrastructure to support indoor navigation of firefighters}, volume = {16}, issn = {16174909}, doi = {10.1007/s00779-011-0462-5}, abstract = {Indoor navigation plays a central role for the safety of firefighters. The circumstances in which a firefighting intervention occurs represent a rather complex challenge for the design of supporting technology. In this paper, we present the results of our work designing an ad hoc ubicomp infrastructure to support navigation of firefighters working in structure fires inside the zone of danger. We take a wider approach, complementing the technical questions with the development of effective navigation practices based on technology available today. We provide an overview of the complete design process, from the theoretical and empirical underpinnings to the construction and evaluation of three iterations of the platform. We report the results of our evaluation and the implications and tensions uncovered in this process, and we discuss the challenges and implications of it for the design of ubicomp for firefighters.}, number = {8}, journal = {Personal and Ubiquitous Computing}, author = {Ramirez, Leonardo and Dyrks, Tobias and Gerwinski, Jan and Betz, Matthias and Scholz, Markus and Wulf, Volker}, year = {2012}, keywords = {Ad hoc deployment, Firefighting, Human-computer interaction, Indoor navigation, Landmarke, Mobile ad hoc network, Navigation, Orientation, Sensor networks, Ubiquitous computing, Wearable computing}, pages = {1025--1038}, }
@book{jacko_human-computer_2012, address = {Boca Raton, FL}, edition = {3rd ed}, series = {Human factors and ergonomics}, title = {The human-computer interaction handbook: fundamentals, evolving technologies, and emerging applications}, isbn = {978-1-4398-2943-1}, shorttitle = {The human-computer interaction handbook}, publisher = {CRC Press}, editor = {Jacko, Julie A.}, year = {2012}, keywords = {Human-computer interaction} }
@article{2012-12-TarVerHam, Author = {Targett, Sean and Verlysdonk, Victoria and Hamilton, Howard J. and Hepting, Daryl H.}, Title = {A Study of User Interface Modifications in World of Warcraft}, Journal = {Game Studies: the international journal of computer game research}, Volume = {12}, Number = {2}, Month = {December}, Year = {2012}, Url = {http://gamestudies.org/1202/articles/ui_mod_in_wow}, Keywords = {World of Warcraft, massively multiplayer online game, statistical survey, user interface, human-computer interaction, user interface modification, user interface add-ons, modding, UI mods}, Abstract = {The World of Warcraft (WoW) (Blizzard Entertainment, 2004) massively multiplayer online role playing game (MMORPG) provides users with extensive control over its user interface (UI), which has inspired the emergence of a large community devoted to developing UI modifications (UI modding). Through investigation of the members of the community of those who design and use UI modifications for WoW, we gather information that may aid in the creation of communities dedicated to modifying the interfaces of other software packages. The goal of this paper is to study the effect that user created interfaces have had on WoW and its community of users. To achieve this goal, we issued an online survey to WoW players that investigated four aspects of the community: (R1) the backgrounds of its members, (R2) their attitudes towards modifications and the community itself, (R3) their use of UI modifications, (R4) the characteristics and motivations of users who create and share modifications. The survey results represented numerous unique viewpoints and shed light on the varied nature of the UIM community of those who design and use WoW modifications. The results suggest that the interface of a videogame is best developed in concert with its players via UI modifications because the users of the system may be the best equipped to design or customize the interface to meet their needs. Since every user may have unique ideas about the perfect interface for a software package, perhaps the only way one could ever satisfy all users is to give them the ability to create their own.}}
@Article{Song_2012_16042, author = {Song, Y. and Demirdjian, D. and Davis, R.}, journal = {ACM Transactions on Interactive Intelligent Systems}, number = {1}, pages = {5}, publisher = {ACM}, title = {Continuous body and hand gesture recognition for natural human-computer interaction}, volume = {2}, year = {2012} }
@book{alwi_investigating_2011, address = {New York}, title = {Investigating an {Online} {Museum}'s {Information} {System} {Instructional} {Design} for {Effective} {Human}-{Computer} {Interaction}}, isbn = {978-1-4419-7611-6}, abstract = {Information and communications technology (ICT) tools have completely altered the way museum curators design many of their exhibits. The literature reveals many interesting studies, which explain the unique nature and characteristics of the Web-based environment, to provide many educational advantages. As a consequence, online learning is now an important agenda for many museums. They have become learning institutions in their own right as they enhance their exhibits to leverage the opportunities offered by ICT tools; thereby providing a wider (cognitive) thinking space for their online visitors. Although the role of museums in supporting the formal education of the general population is usually associated with visits to a physical museum, the online museum environment is now playing an important part in providing more information to people, as well as further enriching their life-long learning experiences. Nevertheless not enough is known about the educational effectiveness of online-museum exhibits. This paper describes a doctoral project, underway in Australia that examines the human-computer interaction (HCI) which occurs when people access online museum exhibits.}, language = {English}, publisher = {Springer}, author = {Alwi, Asmidah and McKay, Elspeth}, editor = {Ifenthaler, D. and Isaias, P. and Spector, J. M. and Kinshuk and Sampson, D. G.}, year = {2011}, note = {WOS:000395616500002}, keywords = {Cognitive preferences, Human-computer interaction, Instructional architecture, Instructional design, Online museums, Web-based learning, individual-differences} }
@article{lee_usability_2011, title = {Usability {Design} and {Psychological} {Ownership} of a {Virtual} {World}}, volume = {28}, issn = {0742-1222}, url = {https://doi.org/10.2753/MIS0742-1222280308}, doi = {10.2753/MIS0742-1222280308}, abstract = {Virtual worlds, immersive three-dimensional virtual spaces where users interact with projected identities of other users (avatars) and objects, are becoming increasingly popular and continue to grow as highly interactive, collaborative, and commercial cyberspaces. However, extant research in this context has not paid much attention to usability design of a virtual world and corresponding effects on users' psychological desire to own and control the space and objects within it and subsequent behavior intention. In this study, we apply concepts of Web site usability and psychological ownership to develop a model that illustrates the relationships between seven usability factors (legibility, firmness, coherence, variety, mystery, classic, and expressive visual aesthetics), four antecedents of psychological ownership (cognitive appraisals, perceived control, affective appraisals, and self-investment), psychological ownership, and use intention. A cross-sectional study with 239 Second Life users was conducted. The results demonstrate that designing a usable virtual world that induces strong psychological ownership is crucial to attract users to spend more time, participate in more activities, and revisit the virtual world. This is an important finding for forward-looking e-business managers looking to invest their limited resources in designing a usable virtual world. In addition, by using our model and corresponding survey items, designers can benchmark and evaluate the usability of their current virtual worlds, compare the results to the designs of competitors, and upgrade the offerings of virtual worlds, as needed, by allocating available resources to the most influential design factors to suit their specific needs.}, number = {3}, urldate = {2022-08-05}, journal = {Journal of Management Information Systems}, author = {Lee, Younghwa and Chen, Andrew N. K.}, month = dec, year = {2011}, note = {Publisher: Routledge \_eprint: https://doi.org/10.2753/MIS0742-1222280308}, keywords = {architectural quality model, human-computer interaction, landscape preference model, psychological ownership, usability, virtual worlds}, pages = {269--308}, }
@InProceedings{Du_2011b, author = {Du, Honglu and Inkpen, Kori and Tang, John and Roseway, Asta and Hoff, Aaron and Johns, Paul and Czerwinski, Mary and Meyers, Brian and Chorianopoulos, Konstantinos and Gross, Tom and Lungstrang, Peter}, title = {{VideoPal: System Description}}, booktitle = {Adjunct Proceedings of CSCW 2011}, year = {2011}, pages = {1--2}, abstract = {In this paper we provide a description of VideoPal, an asynchronous video-mediated communication tool.}, url_Paper={Du_2011b.pdf}, keywords = {Asynchronous CMC,VideoPal,collaboration,human-computer interaction,media technology,software technology}, mendeley-tags = {collaboration,human-computer interaction,media technology,software technology}, }
@InProceedings{Magielse2011-P_IE, Title = {An Interdisciplinary Approach to Designing an Adaptive Lighting Environment}, Author = {Remco Magielse and Sunder Rao and Paola Jaramillo and Philip Ross and Tanir Ozcelebi and Oliver Amft}, Booktitle = {IE 2011: Proceedings of the 7th International Conference on Intelligent Environments}, Year = {2011}, Abstract = {In this paper an interdisciplinary study towards the development of adaptive lighting environments is presented, involving experts from the domains of human-system interaction, activity and context recognition, and system architecture design. The goal of this work is to explore the design options and technical challenges for adaptive lighting environments from an interdisciplinary perspective, in the domain of 'office environments'. Two significant contributions are made with regard to the implementation of adaptive lighting in office environments and an evaluation method that is in line with the interdisciplinary approach. A qualitative study was performed with experts from three different disciplines to extrapolate joint research areas for further investigation. Our main findings are described with regard to the three involved fields.}, Doi = {10.1109/IE.2011.28}, File = {Magielse2011-P_IE.pdf:Magielse2011-P_IE.pdf:PDF;http\://vimeo.com/26960033:http\://vimeo.com/26960033:URL}, Keywords = {Ambient intelligence, Context Awareness, Human-Computer Interaction, Ubiquitous Computing}, Owner = {oamft}, Timestamp = {2011/05/15} }
@Article{Amft2011-J_IEEEPervComput, Title = {Smart Energy Systems}, Author = {Oliver Amft and Richard Medland and Marcus Foth and Petromil Petkov and Joana Abreu and Francisco Câmara Pereira and Philip Johnson and Robert Brewer and James Pierce and Eric Paulos}, Journal = {IEEE Pervasive Computing}, Year = {2011}, Month = {January--March}, Note = {Works-in-progress}, Number = {1}, Pages = {63--65}, Volume = {10}, Abstract = {Six works in progress look at various projects to differentiate energy-use patterns and otherwise refine options for managing energy systems intelligently and autonomously. This department is part of a special issue on smart energy systems.}, Doi = {10.1109/MPRV.2011.10}, File = {Amft2011-J_IEEEPervComput.pdf:Amft2011-J_IEEEPervComput.pdf:PDF}, Keywords = {human-computer interaction, environmental system engineering}, Owner = {oamft}, Timestamp = {2011/02/21} }
@inproceedings{Sarvadevabhatla:2011:AFE:2070481.2070488, author = {Sarvadevabhatla, Ravi Kiran and Benovoy, Mitchel and Musallam, Sam and Ng-Thow-Hing, Victor}, title = {Adaptive facial expression recognition using inter-modal top-down context}, booktitle = {Proceedings of the 13th international conference on multimodal interfaces}, series = {ICMI '11}, year = {2011}, isbn = {978-1-4503-0641-6}, location = {Alicante, Spain}, pages = {27--34}, numpages = {8}, doi = {10.1145/2070481.2070488}, acmid = {2070488}, publisher = {ACM}, address = {New York, NY, USA}, keywords = {context, facial expression recognition, human-computer interaction, mask, multi-modal}, url = {http://npl.mcgill.ca/Papers/Adaptive Facial Expression Recognition Using Inter-modal top-down context.pdf}, abstract = {The role of context in recognizing a person's affect is being increasingly studied. In particular, context arising from the presence of multi-modal information such as faces, speech and head pose has been used in recent studies to recognize facial expressions. In most approaches, the modalities are independently considered and the effect of one modality on the other, which we call inter-modal influence (e.g. speech or head pose modifying the facial appearance) is not modeled. In this paper, we describe a system that utilizes context from the presence of such inter-modal influences to recognize facial expressions. To do so, we use 2-D contextual masks which are activated within the facial expression recognition pipeline depending on the prevailing context. We also describe a framework called the Context Engine. The Context Engine offers a scalable mechanism for extending the current system to address additional modes of context that may arise during human-machine interactions. Results on standard data sets demonstrate the utility of modeling inter-modal contextual effects in recognizing facial expressions.}, }
@InProceedings{Du_2011, author = {Du, Honglu and Inkpen, Kori and Tang, John and Roseway, Asta and Hoff, Aaron and Johns, Paul and Czerwinski, Mary and Meyers, Brian and Chorianopoulos, Konstantinos and Gross, Tom and Lungstrang, Peter}, title = {{VideoPal : An Asynchronous Video Based Communication System to Connect Children from US and Greece}}, booktitle = {Adjunct Proceedings of CSCW 2011}, year = {2011}, abstract = {In this paper we describe VideoPal, a novel video based asynchronous communication system. VideoPal is currently being used by approximately 30 4th and 5th grade students from the US and Greece to explore the opportunities and challenges of video-mediated asynchronous communication in supporting traditional Pen Pal activities.}, url_Paper={Du_2011.pdf}, keywords = {Children,Education,Pen Pal,Video,collaboration,computer education,human-computer interaction,media technology,software technology,synchronous CMC}, mendeley-tags = {collaboration,computer education,human-computer interaction,media technology,software technology}, }
@article{lampe_student_2011, title = {Student use of {Facebook} for organizing collaborative classroom activities}, volume = {6}, issn = {1556-1607}, doi = {10.1007/s11412-011-9115-y}, number = {3}, journal = {International Journal of Computer-Supported Collaborative Learning}, author = {Lampe, Cliff and Wohn, Donghee Yvette and Vitak, Jessica and Ellison, Nicole B. and Wash, Rick}, year = {2011}, keywords = {Education, Human-Computer Interaction, social media}, pages = {329--347} }
@article{hollender_integrating_2010, title = {Integrating cognitive load theory and concepts of human-computer interaction}, volume = {26}, url = {http://www.sciencedirect.com/science/article/B6VDC-50DNKS7-1/2/634a2f318d8f956aefcb5870780cc12e}, number = {6}, journal = {Computers in Human Behavior}, author = {Hollender, Nina and Hofmann, Cristian and Deneke, Michael and Schmitz, Bernhard}, month = nov, year = {2010}, keywords = {Cognitive load theory, Computer assisted instruction, Human-computer interaction, Learning}, pages = {1278--1288} }
@article{Kranz2010, title = {Embedded Interaction: Interacting with the Internet of Things}, author = {Kranz, M. and Holleis, P. and Schmidt, A.}, journal = {IEEE Internet Computing}, year = {2010}, volume = {14}, number = {2}, pages = {46--53}, keywords = {HCI, Internet, Internet of Things, digital functionality, embedded interaction, human computer interaction, human-computer interaction, information interfaces and representation, information technology and systems, interaction styles, interactive systems, mobile applications, pervasive computing, user interfaces, user/machine systems}, abstract = {The Internet of Things assumes that objects have digital functionality and can be identified and tracked automatically. The main goal of embedded interaction is to look at new opportunities that arise for interactive systems and the immediate value users gain. The authors developed various prototypes to explore novel ways for human-computer interaction (HCI), enabled by the Internet of Things and related technologies. Based on these experiences, they derive a set of guidelines for embedding interfaces into people's daily lives.}, }
@article{kwon_empirical_2010, title = {An empirical study of the factors affecting social network service use}, volume = {26}, issn = {0747-5632}, doi = {10.1016/j.chb.2009.04.011}, number = {2}, journal = {Computers in Human Behavior}, author = {Kwon, Ohbyung and Wen, Yixing}, year = {2010}, keywords = {Arts and Humanities (miscellaneous), General Psychology, Human-Computer Interaction, digital identity, social media}, pages = {254--263} }
@InProceedings{Chorianopoulos_2010d, author = {Chorianopoulos, Konstantinos and Fernandez, Francisco Javier Buron and Salcines, Enrique Garcia and de Castro Lozano, Carlos}, title = {Delegating the visual interface between a tablet and a TV}, booktitle = {Proceedings of the International Conference on Advanced Visual Interfaces - AVI '10}, year = {2010}, pages = {418}, address = {New York, New York, USA}, month = {may}, publisher = {ACM Press}, abstract = {The introduction and wide adoption of small and powerful mobile computers, such as smart phones and tablets, has raised the opportunity of employing them into multi-device scenarios and blending the distinction between input and output devices. In particular, the partnership between a personal device and a shared one provides two possible output screens. Then, one significant research issue is to balance the visual interface between two devices with advanced output abilities. Do the devices compete or cooperate for the attention and the benefit of the user? Most notably, how multi-device interaction is appreciated in multi-user scenarios? Previous research has raised and considered the above research issues and questions for dual screen set-ups in the work environment. In our research, we are exploring multi-device user interface configurations in the context of a leisure environment and for entertainment applications. Our objective is to provide interaction possibilities that are more than the sum of the parts.}, doi = {10.1145/1842993.1843096}, url_Paper = {Chorianopoulos_2010d.pdf}, isbn = {9781450300766}, keywords = {TV,design,evaluation,human-computer interaction,interaction,multimedia,tablet,ubiquitous computing}, mendeley-tags = {human-computer interaction,multimedia,ubiquitous computing}, url = {http://portal.acm.org/citation.cfm?id=1842993.1843096} }
@article{kirschner_facebook_2010, title = {Facebook® and academic performance}, volume = {26}, issn = {0747-5632}, url = {http://www.sciencedirect.com/science/article/B6VDC-4YW37BR-2/2/89a3658ec53ae04052a9e04790a8fc6d}, doi = {10.1016/j.chb.2010.03.024}, number = {6}, journal = {Computers in Human Behavior}, author = {Kirschner, Paul A. and Karpinski, Aryn C.}, year = {2010}, note = {Online Interactivity: Role of Technology in Behavior Change}, keywords = {Academic performance}, pages = {1237--1245}, }
@article{Girardin2010, title = {The co-evolution of taxi drivers and their in-car navigation systems}, author = {Girardin, Fabien and Blat, Josep}, journal = {Pervasive and Mobile Computing}, year = {2010}, month = aug, volume = {6}, number = {4}, pages = {424--434}, url = {http://linkinghub.elsevier.com/retrieve/pii/S1574119210000350}, keywords = {co-evolution, human-computer interaction, qualitative field study, satellite navigation systems}, }
@article{dohn_web_2009, title = {Web 2.0: {Inherent} tensions and evident challenges for education}, volume = {4}, issn = {1556-1607}, doi = {10.1007/s11412-009-9066-8}, number = {3}, journal = {International Journal of Computer-Supported Collaborative Learning}, author = {Dohn, Nina Bonderup}, year = {2009}, keywords = {Education, Human-Computer Interaction, benefits, social media, web 2.0}, pages = {343--363} }
@article{yetim_deliberation_2009, title = {A deliberation theory-based approach to the management of usability guidelines}, volume = {12}, issn = {15214672}, abstract = {Designing interaction entails addressing multiple issues and challenges, ranging from the technical and economic to the legal and ethical. Usability guidelines recommend or prescribe courses of action and thus play a significant role in designing usable systems. This paper argues that approaches to guidelines need to support processes of deliberation and tradeoff and suggests a deliberation theory-informed model for the organization of guidelines. The model integrates concepts from Habermas' discourse theory and Toulmin's model of argumentation to categorize and represent guidelines. In addition, the paper presents two explorative studies conducted to understand the representational fit of the suggested categories to the domain of guidelines. The studies specifically consider the characteristics of coverage and encodability and also explore difficult cases. Finally, a brief summary of the usability evaluation results of the prototype that instantiated the proposed model is provided. This paper contributes to research and praxis by providing a theory-based model and a prototype for the management of guidelines.}, journal = {Informing Science}, author = {Yetim, Fahri}, year = {2009}, keywords = {Deliberation, Discourse theory, Human factors in information systems, Human-computer interaction, Reflective design, Tool, Usability categories, Usability guidelines}, pages = {73--104}, }
@article{toit2008, author = {Herzberg, Amir and Jbara, Ahmad}, title = {Security and Identification Indicators for Browsers Against Spoofing and Phishing Attacks}, journal = {ACM Trans. Internet Technol.}, issue_date = {September 2008}, volume = {8}, number = {4}, month = oct, year = {2008}, issn = {1533-5399}, pages = {16:1--16:36}, articleno = {16}, numpages = {36}, doi = {10.1145/1391949.1391950}, acmid = {1391950}, publisher = {ACM}, address = {New York, NY, USA}, keywords = {Human-computer interaction, Web spoofing, phishing, secure usability}, }
@inproceedings{pierce_energy_2008, address = {New York, {NY}, {USA}}, series = {{OZCHI} '08}, title = {Energy Aware Dwelling: A Critical Survey of Interaction Design for Eco-visualizations}, isbn = {0-9803063-4-5}, shorttitle = {Energy Aware Dwelling}, url = {http://doi.acm.org/10.1145/1517744.1517746}, doi = {10.1145/1517744.1517746}, abstract = {Eco-visualizations ({EVs}) are any kind of interactive device targeted at revealing energy use in order to promote sustainable behaviours or foster positive attitudes towards sustainable practices. There are some interesting, informative, highly creative, and delightful {EVs} now available. This paper provides a critical survey of several noteworthy {EVs} and classifies them in terms of scale and contexts of use. The paper attempts to provide a foundation for practitioners to design new {EVs} in more varied scales and contexts and for researchers to continue to refine understandings of how effective {EVs} can be and how {EVs} can be made to be more effective. The paper describes (i) feedback types and use-contexts for classifying {EVs} and (ii) strategies for designing effective {EVs}.}, urldate = {2014-10-06}, booktitle = {Proceedings of the 20th Australasian Conference on Computer-Human Interaction: Designing for Habitus and Habitat}, publisher = {{ACM}}, author = {Pierce, James and Odom, William and Blevis, Eli}, year = {2008}, keywords = {Energy conservation, Feedback, human-computer interaction, interaction design, sustainability}, pages = {1--8} }
@book{lin_haptic_2008, address = {Wellesley, Mass}, title = {Haptic rendering: foundations, algorithms, and applications}, isbn = {978-1-56881-332-5}, shorttitle = {Haptic rendering}, publisher = {A.K. Peters}, editor = {Lin, Ming C. and Otaduy, Miguel A.}, year = {2008}, keywords = {Computer algorithms, Human-computer interaction, Touch} }
@InProceedings{Kim2008_emg, Title = {EMG-based Hand Gesture Recognition for Realtime Biosignal Interfacing}, Author = {Kim, J. and Mastnik, S. and Andr{\'e}, E.}, Booktitle = {Proceedings of the International Conference on Intelligent User Interfaces}, Year = {2008}, Pages = {30--39}, Abstract = {In this paper the development of an electromyogram (EMG) based interface for hand gesture recognition is presented. To recognize control signs in the gestures, we used a single channel EMG sensor positioned on the inside of the forearm. In addition to common statistical features such as variance, mean value, and standard deviation, we also calculated features from the time and frequency domain including Fourier variance, region length, zerocrosses, occurrences, etc. For realizing real-time classification assuring acceptable recognition accuracy, we combined two simple linear classifiers (k-NN and Bayes) in decision level fusion. Overall, a recognition accuracy of 94% was achieved by using the combined classifier with a selected feature set. The performance of the interfacing system was evaluated through 40 test sessions with 30 subjects using an RC Car. Instead of using a remote control unit, the car was controlled by four different gestures performed with one hand. In addition, we conducted a study to investigate the controllability and ease of use of the interface and the employed gestures.}, Acmid = {1378778}, Doi = {10.1145/1378773.1378778}, ISBN = {978-1-59593-987-6}, Keywords = {biosignal analysis, electromyogram, gesture recognition, human-computer interaction, neural interfacing}, Location = {Gran Canaria, Spain}, Numpages = {10}, Owner = {jf2lin}, Review = {Uses a single channel EMG to recognize 4 hand gestures to control an RC car. Motion is fist, fist with wrist left, fist with wrist right, and fist with wrist down. - threshold on rms to determine the start and end fo a signal (segmentation) - for actual signal, used max, min, mean value, variance, signal length, rms. for freq signals, used fundamental freq, fourier varience, region length (partial length of the spectrum containing greater magnitude than mean value of total fourier coeff), percentage to max value, zero crossing - used knn and bayes classifier, and has voting schemes btwn them - calibrate with 10-20 samples of each gesture for subject. above 90% acc in classification rate. 30 subjects.}, Timestamp = {2015.04.03} }
@inproceedings{pellan_scalable_2008, address = {New York, New York, USA}, title = {Scalable multimedia documents for digital radio}, isbn = {978-1-60558-081-4}, url = {http://dl.acm.org/citation.cfm?id=1410140.1410186}, doi = {10.1145/1410140.1410186}, booktitle = {Proceeding of the eighth {ACM} symposium on {Document} engineering - {DocEng} '08}, publisher = {ACM Press}, author = {Pellan, Benoit and Concolato, Cyril}, month = sep, year = {2008}, keywords = {Human-computer interaction, Interacción hombre-computadora}, pages = {221} }
@article{stern_designing_2008, title = {Designing Hand Gesture Vocabularies for Natural Interaction by Combining Psycho-Physiological and Recognition Factors}, volume = {2}, doi = {10.1142/S1793351X08000385}, abstract = {A need exists for intuitive hand gesture machine interaction in which the machine not only recognizes gestures, but also the human feels comfortable and natural in their execution. The gesture vocabulary design problem is rigorously formulated as a multi-objective optimization problem. Psycho-physiological measures (intuitiveness, comfort) and gesture recognition accuracy are taken as the multi-objective factors. The hand gestures are static and recognized by a vision based fuzzy c-means classifier. A meta-heuristic approach decomposes the problem into two sub-problems: finding the subsets of gestures that meet a minimal accuracy requirement, and matching gestures to commands to maximize the human factors objective. The result is a set of Pareto optimal solutions in which no objective may be increased without a concomitant decrease in another. Several solutions from the Pareto set are selected by the user using prioritized objectives. Software programs are developed to automate the collection of intuitive and stress indices. The method is tested for a simulated car-maze navigation task. Validation tests were conducted to substantiate the claim that solutions that maximize intuitiveness, comfort, and recognition accuracy performance measures can be used as proxies for the minimization task time objective. Learning and memorability were also tested.}, number = {1}, journal = {International Journal of Semantic Computing}, author = {Stern, Helman I. and Wachs, Juan P. and Edan, Yael}, year = {2008}, keywords = {Hand gesture vocabulary design, comfort, gesture interfaces, hand gesture recognition, human-computer interaction, intuitiveness, learning, memory, multiobjective optimization, psycho-physiological, semantic behavior}, pages = {137--160} }
@phdthesis{reis_centrando_2007, type = {Master's thesis}, title = {Centrando a arquitetura de informação no usuário}, url = {http://www.teses.usp.br/teses/disponiveis/27/27151/tde-23042007-141926/}, abstract = {This work analyzes website information architecture design methodologies from the perspective of the User-Centered Design approaches of Information Science and Human-Computer Interaction. The methodology adopted was a literature review, used to formulate a reference framework for analyzing information architecture design methodologies, and two field studies. The first study was quantitative, based on an online questionnaire, and aimed to profile the information architects active on Brazilian mailing lists. The second study was qualitative and followed the Sense-making approach, aiming to identify the difficulties, techniques, and methodologies encountered in website information architecture projects. The literature review yielded a reference framework composed of five phases (Research, Conception, Specification, Implementation, and Evaluation). The principles of the User-Centered Design approaches are applied in the first two phases: the Information Science approach in the first and the Human-Computer Interaction approach in the second. The first field study portrayed a young professional, living in large metropolitan centers, with a background predominantly in the humanities, who developed their knowledge of Information Architecture in a self-taught manner. Almost half of them do not follow any methodology in their projects and, among those who do, most use a methodology of their own. The second study showed that experienced information architects adopt a methodology in their projects and devote more attention to the first three phases of the reference framework (Research, Conception, and Specification). The methodologies observed in practice do not follow the User-Centered Design approach of Information Science, since user research is rarely conducted. The Human-Computer Interaction approach is seldom followed because clients are unaware of the importance of usability testing and because architects have not mastered the testing techniques best suited to Information Architecture. Regarding the difficulties faced in projects, three focal points were identified: the client, the Information Architecture work itself, and the technological context in which the website is embedded, the first being the most frequently cited. It is concluded that information architecture design methodologies need to evolve both in adopting User-Centered Design approaches, so that they can produce websites that fully satisfy users' needs, and in how results are evaluated, in order to verify whether project objectives have been fully achieved.}, language = {pt-br}, urldate = {2014-10-31}, school = {Universidade de São Paulo}, author = {Reis, Guilhermo Almeida dos}, month = mar, year = {2007}, keywords = {Arquitetura de informação, Design Centrado no Usuário, Human-Computer Interaction, Information Architecture, Interação homem-computador, Sense-making, User Center Design, Web Design, Websites - Design, Websites - Usabilidade, Websites - Usability}, }
@book{haller_emerging_2007, address = {Hershey}, title = {Emerging technologies of augmented reality: interfaces and design}, isbn = {1-59904-066-2}, shorttitle = {Emerging technologies of augmented reality}, publisher = {Idea Group Pub}, editor = {Haller, Michael and Billinghurst, Mark and Thomas, Bruce}, year = {2007}, keywords = {Human-computer interaction, User interfaces (Computer systems), Virtual reality} }
@InProceedings{Willis_2007, author = {Willis, Katharine S. and Chorianopoulos, Konstantinos and Struppek, Mirjam and Roussos, George}, title = {{Shared encounters workshop}}, booktitle = {CHI '07 extended abstracts on Human factors in computing systems - CHI '07}, year = {2007}, series = {CHI '07}, pages = {2881--2884}, address = {New York, New York, USA}, publisher = {ACM Press}, abstract = {Our everyday lives are characterised by encounters, some are fleeting and ephemeral and others are more enduring and meaningful exchanges. Shared encounters are the glue of social networks and have a socializing effect in terms of mutual understanding, empathy, respect and thus tolerance towards others. The quality and characteristics of such encounters are affected by the setting, or situation in which they occur. In a world shaped by communication technologies, non-place-based networks often coexist alongside to the traditional local face-to-face social networks. As these multiple and distinct on and off-line communities tend to carry out their activities in more and more distinct and sophisticated spaces, a lack of coherency and fragmentation emerges in the sense of a shared space of community. Open public space with its streets, parks and squares plays an important role in providing space for shared encounters among and between these coexisting networks. Mobile and ubiquitous technologies enable social encounters located in public space, albeit not confined to fixed settings, whilst also offering sharing of experiences from non-place based networks. We will look at how to create or support the conditions for meaningful and persisting shared encounters. In particular we propose to explore how technologies can be appropriated for shared interactions that can occur spontaneously and playfully and in doing so re-inhabit and connect place-based social networks.}, doi = {10.1145/1240866.1241101}, url_Paper={Willis_2007.pdf}, isbn = {9781595936424}, keywords = {community,encounter,human-computer interaction,interaction,mobile and ubiquitous technologies,shared,situated,space,ubiquitous computing}, mendeley-tags = {community,human-computer interaction,ubiquitous computing}, url = {http://doi.acm.org/10.1145/1240866.1241101}, }
@inproceedings{buxton_interaction_2005, address = {New York, NY, USA}, series = {{CHI} {EA} '05}, title = {Interaction at {Lincoln} {Laboratory} in the 1960's: {Looking} {Forward} – {Looking} {Back}}, isbn = {1-59593-002-7}, shorttitle = {Interaction at {Lincoln} {Laboratory} in the 1960's}, url = {http://doi.acm.org/10.1145/1056808.1056864}, doi = {10.1145/1056808.1056864}, urldate = {2015-12-26}, booktitle = {{CHI} '05 {Extended} {Abstracts} on {Human} {Factors} in {Computing} {Systems}}, publisher = {ACM}, author = {Buxton, William and Baecker, Ron and Clark, Wesley and Richardson, Fontaine and Sutherland, Ivan and Sutherland, W.R. "Bert" and Henderson, Austin}, year = {2005}, keywords = {Human-computer interaction, Lincoln laboratory, TX-2, computer graphics history, interaction history}, pages = {1162--1167} }
@article{sharma_speech-gesture_2003, title = {Speech-gesture driven multimodal interfaces for crisis management}, volume = {91}, url = {http://www.geovista.psu.edu/publications/maceachren/Sharma_IEEE_03.pdf}, abstract = {Emergency response requires strategic assessment of risks, decisions, and communications that are time critical while requiring teams of individuals to have fast access to large volumes of complex information and technologies that enable tightly coordinated work. The access to this information by crisis management teams in emergency operations centers can be facilitated through various human-computer interfaces. Unfortunately, these interfaces are hard to use, require extensive training, and often impede rather than support teamwork. Dialogue-enabled devices, based on natural, multimodal interfaces, have the potential of making a variety of information technology tools accessible during crisis management. This paper establishes the importance of multimodal interfaces in various aspects of crisis management and explores many issues in realizing successful speech-gesture driven, dialogue-enabled interfaces for crisis management. This paper is organized in five parts. The first part discusses the needs of crisis management that can be potentially met by the development of appropriate interfaces. The second part discusses the issues related to the design and development of multimodal interfaces in the context of crisis management. The third part discusses the state of the art in both the theories and practices involving these human-computer interfaces. In particular it describes the evolution and implementation details of two representative systems, Crisis Management (XISM) and Dialog Assisted Visual Environment for Geoinformation (DAVE_G). The fourth part speculates on the short-term and long-term research directions that will help addressing the outstanding challenges in interfaces that support dialogue and collaboration. Finally, the fifth part concludes the paper.}, journal = {Proceedings of the IEEE}, author = {Sharma, R. and Yeasin, M. and Krahnstoever, N. and Rauschert, I. and Cai, G. and Brewer, I. and MacEachren, Alan M. and Sengupta, K.}, year = {2003}, keywords = {crisis management, design, dialogue design, gesture recognition, GIS, human motion, human-computer interaction, human-computer interaction (HCI), models, multimodal fusion, multimodal interface, propagation, recognition, speech recognition, usability study, tracking, user-interface, vision} }
@InProceedings{Chorianopoulos_2003b, author = {Chorianopoulos, Konstantinos and Spinellis, Diomidis}, title = {{Usability design for the home media station}}, booktitle = {Proceedings of the 10th HCI International Conference}, year = {2003}, pages = {439--443}, abstract = {A different usability design approach is needed for the emerging class of home infotainment appliances, collectively referred to as the home media station (HMS). Mass-media theory, consumer electronics engineering, content creation and content distribution ...}, url_Paper={Chorianopoulos_2003b.pdf}, keywords = {human-computer interaction,multimedia}, mendeley-tags = {human-computer interaction,multimedia}, url = {http://www.dmst.aueb.gr/dds/pubs/conf/2003-HCI-HMS/html/CS03.pdf}, }
@book{sherman_understanding_2003, address = {Amsterdam ; Boston}, series = {Morgan {Kaufmann} series in computer graphics and geometric modeling}, title = {Understanding virtual reality: interface, application, and design}, isbn = {1-55860-353-0}, shorttitle = {Understanding virtual reality}, publisher = {Morgan Kaufmann Publishers}, author = {Sherman, William R. and Craig, Alan B.}, year = {2003}, keywords = {Human-computer interaction, Virtual reality} }
@InProceedings{Chorianopoulos_2003d, author = {Chorianopoulos, Konstantinos and Lekakos, George and Spinellis, Diomidis}, title = {{The Virtual Channel Model for Personalized Television}}, booktitle = {Proceedings of the 1st European Conference on Interactive TV (EuroITV 2003)}, year = {2003}, pages = {9}, abstract = {This research is based on the realization that the desktop computing paradigm is not appropriate for television, because it is adapted to fundamentally different user aspirations and activities. Instead, the virtual channel is proposed as a model that explains the proper design of user access to personalized television programming. The virtual channel is a model that aids the organization and dynamic presentation of television programming from a combination of live broadcasts, prerecorded content and Internet resources at each set-top box. In this paper, we describe two applications that have been used to validate the virtual channel model. We have employed the properties of the virtual channel model into the design of personalized television advertising and interactive music video clip programming. Finally, we describe an ActiveX control that implements a core set of the virtual channels features.}, url_Paper={Chorianopoulos_2003d.pdf}, keywords = {TV,advertising,alternative computing,broadcast,design model,human-computer interaction,interactive,interactive television,media technology,multimedia,music video clips,need,personalization,usability,user model}, mendeley-tags = {TV,advertising,broadcast,human-computer interaction,interactive,media technology,multimedia}, url = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.2.5918{\&}rep=rep1{\&}type=pdf}, }
@inproceedings{Zaphiris2001g, address = {Dallas, TX, USA}, title = {Measuring Internet Audiences and Usability of an Online Course}, abstract = {In this paper, we attempt to outline a variety of methods used to construct a richer conception of the audience of a modern Greek online course. First an analysis of the design methodology employed in this specific case study is provided and then examples of how valuable usability information can be extracted from the log files are presented. Conclusions, related to the analysis of the log files, about the usability of the course are also provided.}, booktitle = {Proceedings of IIE Annual Conference}, author = {Zaphiris, Panayiotis and Zacharia, Giorgos}, year = {2001}, keywords = {distance learning, human-computer interaction, online communities, server logs analysis, web usability}, pages = {20--23} }
@article{Nakakoji2000, title = {Computational Support for Collective Creativity}, volume = {13}, url = {http://linkinghub.elsevier.com/retrieve/pii/S0950705100000691}, abstract = {The goal of our research is to develop computer systems that support designers' collective creativity; such systems support individual creative aspects in design through the use of representations created by others in the community. We have developed two systems, IAM-eMMa and EVIDII, that both aim at supporting designers in finding visual images that would be useful for their creative design task. IAM-eMMa uses knowledge-based rules, which are constructed by other designers, to retrieve images related to a design task, and infers the underlying "rationale" when a designer chooses one of the images. EVIDII allows designers to associate affective words and images, and then shows several visual representations of the relationships among designers, images and words. By observing designers interacting with the two systems, we have identified that systems for supporting collective creativity need to be based on design knowledge that: (1) is contextualized; (2) is respectable and trustful; and (3) enables "appropriation" of a design task.}, number = {7-8}, journal = {Knowledge-Based Systems}, author = {Nakakoji, K. and Yamamoto, Y. and Ohira, M.}, month = dec, year = {2000}, keywords = {computer support for collective creativity, human-computer interaction, knowledge-based approaches, visual images in creative insight, Creativity, Creativity support tools}, pages = {451--458} }
@inproceedings{Billsus:99, address = {Seattle}, title = {A {Personal} {News} {Agent} that {Talks}, {Learns} and {Explains}}, url = {http://wwwis.win.tue.nl/asum99/billsus.html}, abstract = {Research on intelligent information agents has recently attracted much attention. As the amount of information available online grows with astonishing speed, people feel overwhelmed navigating through today's information and media landscape. Information overload is no longer just a popular buzzword, but a daily reality for most of us. This leads to a clear demand for automated methods, commonly referred to as intelligent information agents, that locate and retrieve information with respect to users' individual preferences. As intelligent information agents aim to automatically adapt to individual users, the development of appropriate user modeling techniques is of central importance. Algorithms for intelligent information agents typically draw on work from the Information Retrieval (IR) and machine learning communities. Both communities have previously explored the potential of established algorithms for user modeling purposes (Belkin et al. 1997; Webb 1998). However, work in this field is still in its infancy and we see \emph{User Modeling for Intelligent Information Access} as an important area for future research.}, booktitle = {Autonomous {Agents} 99}, author = {Billsus, Daniel and Pazzani, Michael}, year = {1999}, keywords = {Information Agents, human-computer interaction, machine learning, user modeling}, }
@book{hartmanMakeWearableElectronics2014, address = {Sebastopol, CA}, title = {Make: Wearable Electronics}, edition = {First edition}, isbn = {978-1-4493-3651-6}, shorttitle = {Make}, abstract = {Make: Wearable Electronics is intended for those with an interest in physical computing who are looking to create interfaces or systems that live on the body. Perfect for makers new to wearable tech, this book introduces you to the tools, materials, and techniques for creating interactive electronic circuits and embedding them in clothing and other things you can wear. Each chapter features experiments to get you comfortable with the technology and then invites you to build upon that knowledge with your own projects. Fully illustrated with step-by-step instructions and images of amazing creations made by artists and professional designers, this book offers a concrete understanding of electronic circuits and how you can use them to bring your wearable projects from concept to prototype.}, pagetotal = {257}, publisher = {Maker Media}, year = {2014}, keywords = {Design and construction, Human-computer interaction, Wearable computers, Wearable technology}, author = {Hartman, Kate and Jepson, Brian and Dvorak, Emma and Demarest, Rebecca}, note = {OCLC: ocn890200431} }
@techreport{pembe_evaluation_nodate, title = {An evaluation of structure-preserving and query-biased summaries in web search tasks}, abstract = {Automatic summarization has started to receive increasing attention in recent years due to the increased amount of information available in electronic form. Especially, summarization techniques can be very useful in improving the effectiveness of information retrieval on the World Wide Web. However, currently available major search engines such as Google show only a limited capability for summarization. We believe that text summarization with more sophisticated techniques can significantly improve the search experience of users. As a novel approach, we propose a query-biased summarization system which incorporates the structure of documents into the summaries. The system also makes use of natural language processing techniques in the summarization process. The effectiveness of the proposed system has been tested on a task-based evaluation.}, author = {Pembe, F. Canan and Güngör, Tunga}, keywords = {automatic summarization, human-computer interaction, information retrieval, natural language processing} }
@article{hald_motivating_????, title = {Motivating {Users} to {Move} between {Interactive} {Public} {Displays} using {Game} {Progression}}, author = {Hald, Kasper}, keywords = {documentation, embodied agents, human-computer interaction, interactive public displays, methodology, motivation} }