“How much change do you get from 40$?” – Analyzing and addressing failed questions on social Q&A
Related papers
Social Q&A and virtual reference—comparing apples and oranges with the help of experts and users
Journal of the American Society for …, 2012
Online question-answering (Q&A) services are becoming increasingly popular among information seekers. We divide them into two categories, social Q&A (SQA) and virtual reference (VR), and examine how experts (librarians) and end users (students) evaluate information within both categories. To accomplish this, we first performed an extensive literature review and compiled a list of the aspects found to contribute to a "good" answer. These aspects were divided among three high-level concepts: relevance, quality, and satisfaction. We then interviewed both experts and users, asking them first to reflect on their online Q&A experiences and then to comment on our list of aspects. These interviews uncovered two main disparities. One disparity lay between users' expectations of these services and how information was actually delivered through them; the other lay between the perceptions of users and experts with regard to the aforementioned three characteristics of relevance, quality, and satisfaction. Using qualitative analyses of both the interviews and relevant literature, we suggest ways to create better hybrid solutions for online Q&A and to bridge the gap between experts' and users' understandings of relevance, quality, and satisfaction, as well as the perceived importance of each in contributing to a good answer.
Reference & User Services Quarterly
To continually demonstrate high-quality service, librarians must meet the needs of current and potential users. One way that librarians have met the needs of users is by expanding their service offerings online via virtual reference services (VRS). This expansion is particularly critical in the current time of COVID-19. To provide high-quality VRS, librarians can learn from social question-answering (SQA) sites, whose popularity reflects changing user expectations, motivations, use, and assessment of information. Informed by interviews with 51 users and potential users of both platforms, this research examines how strengths from SQA can be leveraged in VRS and what can be learned from SQA practices to reach potential library users. This study represents one of the few comparisons between VRS and SQA that exist in the literature.
Developing a typology of online Q&A models and recommending the right model for each question type
Proceedings of the American Society …, 2012
Although online Q&A services have increased in popularity, the field lacks a comprehensive typology to classify different kinds of services into model types. This poster categorizes online Q&A services into four model types: community-based, collaborative, expert-based, and social. Drawing such a distinction between online Q&A models provides an overview of how these model types differ from each other and suggests implications for mitigating the weaknesses and bolstering the strengths of each model based on the types of questions addressed within it. To demonstrate differences among these models, an appropriate service was selected for each. Then, 500 questions were collected and analyzed for each of these services to classify question types into four categories: information-seeking, advice-seeking, opinion-seeking, and non-information-seeking. The findings suggest that information-seeking questions appear to be more suitable for either a collaborative or an expert-based Q&A environment, while opinion-seeking questions are more common in community-based Q&A. Social Q&A, on the other hand, provides an active forum for seeking either personal advice or non-information related to self-expression or self-promotion.
To Ask or Not to Ask, That is The Question: Investigating Methods and Motivations for Online Q&A
ils.unc.edu
The popularity of studies conducted on online Q&A services has grown over the past few years. Here, online Q&A services are defined as those facilitating an asker-answerer(s) relationship via the Internet, either asynchronously or synchronously. Online Q&A services have traditionally been divided into two separate services, social Q&A (SQA) and virtual reference (VR), and studied separately, with little integration of mixed methods (e.g., SQA tends to be analyzed quantitatively, whereas VR tends to be analyzed qualitatively) and little comparison of SQA and VR services to determine whether gaps in one may be used to improve the other. This poster attempts to ameliorate this shortcoming by investigating both together using an online survey of 120 users, with a specific focus on motivations for using both services, to determine (1) who uses online Q&A services and (2) the relationship between use of these services and general web searching behavior.
Question types in social Q&A sites
Social question and answer (Q&A) Web sites field a remarkable variety of questions: while one user seeks highly technical information, another looks to start a social exchange. Prior work in the field has adopted informal taxonomies of question types as a mechanism for interpreting user behavior and community outcomes. In this work, we contribute a formal taxonomy of question types to deepen our understanding of the nature and intent of questions that are asked online. Our taxonomy is grounded in Aristotelian rhetorical theory, and complemented by contributions of leading twentieth century rhetorical theorists. This taxonomy offers a way to differentiate between similar-sounding questions, while remaining flexible enough to encompass the wide range of questions asked online. To ground the taxonomy in reality, we code questions drawn from three popular social Q&A sites, and report on the distributions of several objective and subjective measures.
Research Trends of the Credibility of Information in Social Q&A
Journal of the Korean Society for Information Management, 2012
Social Q&A sites such as Yahoo! Answers and Naver Knowledge-iN have become a viable method for information seeking and sharing on the Web. Considering their immense popularity and growing concerns about their validity as information sources, questions about the credibility of the information provided on social Q&A sites are timely. Therefore, this paper summarizes recent research on credibility in the social Q&A context, identifies research gaps, and presents a research agenda to advance this newly developing area.
Proceedings of the American Society for Information Science and Technology, 2009
Today's digital reference service environment faces many opportunities as well as a number of threats. This panel presents three different approaches, methods of data collection, and ways to analyze and examine quality issues in virtual reference (VR) as well as other reference platforms. One threat concerns the sustainability of VR quality in this time of budgetary constraints. Radford and Connaway will present "Thriving on Theory: A New Model for Synchronous Reference Encounters," describing a new research-based model that delineates quality criteria from user and provider perspectives. This model can be applied to virtual as well as traditional reference interactions. Pomerantz, Shah, Oh, and Oh's paper, "The Same, Yet Different: Comparing Studies of 'Traditional' Digital Reference and Social Q&A," compares traditional and social question-answering services. Social question and answer (Q&A) services have opened up pluralistic approaches that go beyond professional VR, but they also pose threats to quality and professional turf. Gazan examines one of these social Q&A services, Answerbag, in his paper "When Wrong is Right: Intentionally Bad Answers in a Social Q&A Community," which discusses his analysis of instances in which wrong answers are intentionally offered.

Thriving on Theory: A New Model for Synchronous Reference Encounters
By Marie L. Radford & Lynn Silipigni Connaway

This paper reports on the development of a new theoretical model for synchronous reference encounters that encompasses live chat virtual reference (VR) as well as traditional face-to-face (FtF) environments. This innovative model focuses attention on the dual dimensions of communication, which are the content (information) and relational (interpersonal) dimensions.
Faster, better, or both? Looking at both sides of online question‐answering coin
Proceedings of the American Society for …, 2011
Online question-answering (Q&A) services are becoming increasingly popular among information seekers. We divide them into two domains, social Q&A (SQA) and virtual reference (VR), and ask what the demands and expectations are for each in satisfying information-seeking needs. Drawing on qualitative analysis of more than 30 interviews with both experts (librarians) and end users (students), we present findings that indicate a mismatch between experts' and end users' understandings of how and when each service should be used. More importantly, we show how SQA and VR differ in their functionalities and offerings, commenting on their pros and cons and on the ways in which one could create better hybrid solutions for online Q&A services.
Journal of the Korean Society for Library and Information Science, 2015
The efficiency and quality of social reference sites are being challenged because a large proportion of questions remain unanswered or unsatisfactorily answered for long periods. The main goal of this study is to investigate important factors that affect the performance of question triage to relevant answerers in the context of social reference. To achieve this goal, expert-finding techniques were used to construct an automated question-triage approach. Important factors affecting triage decisions in digital reference were first examined and then extended to the social reference setting by investigating the factors affecting the performance of automated question triage there. The study was conducted using question-answer pairs collected from Ask MetaFilter. For the evaluation, logistic regression analyses were conducted to examine which factors significantly affect the performance of predicting relevant answerers to questions. The results have important implications for research and practice in automated question triage for social reference, and they offer insights into designing user-participatory digital reference systems.
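Since the abstract describes the approach only at a high level, the following is a minimal sketch of what logistic-regression-based question triage might look like in Python. The features (topic similarity, activity level, past acceptance rate), the synthetic data, and the model setup are illustrative assumptions, not the study's actual feature set or the Ask MetaFilter data.

```python
# A minimal sketch (not the paper's actual pipeline) of logistic-regression-based
# question triage: given features of a (question, candidate answerer) pair,
# predict whether the candidate is a relevant answerer. All feature names and
# data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features for each (question, candidate answerer) pair:
#   topic_sim   - similarity between the question and the answerer's past answers
#   activity    - answerer's recent answering activity (normalized)
#   past_accept - fraction of the answerer's past answers marked "best answer"
topic_sim = rng.uniform(0, 1, n)
activity = rng.uniform(0, 1, n)
past_accept = rng.uniform(0, 1, n)
X = np.column_stack([topic_sim, activity, past_accept])

# Synthetic labels: relevant answerers tend to have high topic similarity
# and a strong acceptance record, with some noise.
logits = 4 * topic_sim + 1.5 * past_accept + 0.5 * activity - 3
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)

# The fitted coefficients indicate which factors drive triage performance,
# mirroring the kind of factor analysis the study reports.
print(dict(zip(["topic_sim", "activity", "past_accept"],
               model.coef_[0].round(2))))
print(classification_report(y_test, model.predict(X_test)))
```

In a real system, the binary classifier's predicted probabilities could be used to rank candidate answerers for each incoming question, routing the question to the highest-scoring users.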