jsPsych: A JavaScript library for creating behavioral experiments in a Web browser
Related papers
Gorilla in our Midst: An online behavioral experiment builder
2018
Behavioural researchers are increasingly conducting their studies online to gain access to large and diverse samples that would be difficult to get in a laboratory environment. However, there are technical access barriers to building experiments online, and web browsers can present problems for consistent timing – an important issue with reaction-time-sensitive measures. For example, to ensure accuracy and test-retest reliability in presentation and response recording, experimenters need a working knowledge of programming languages such as JavaScript. We review some of the previous and current tools for online behavioural research, and how well they address the issues of usability and timing. We then present The Gorilla Experiment Builder (gorilla.sc), a fully tooled experiment authoring and deployment platform, designed to resolve many timing issues and make reliable online experimentation open and accessible to a wider range of technical abilities. In order to demonstrate the plat...
Web-based experiments controlled by JavaScript: An example from probability learning
Behavior Research Methods, Instruments, & Computers, 2002
Birnbaum (2002) compared performance on the classical probability-learning paradigm in experiments conducted in the lab with those conducted via the Web. In Birnbaum's (2002) study, learners predicted which of two abstract events (R1 or R2) would happen next by clicking buttons; they were given feedback as to whether they were right or wrong on each trial.
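The core trial logic of such a two-choice probability-learning task can be sketched in a few lines of JavaScript. This is an illustrative reconstruction, not code from Birnbaum's (2002) study; the function and parameter names are assumptions, and the random source is injectable so the logic can be checked deterministically.

```javascript
// Sketch of one two-choice probability-learning trial: the participant
// predicts 'R1' or 'R2', event R1 occurs with fixed probability pR1,
// and feedback (correct/incorrect) is returned.
// All names here are illustrative, not taken from the original study.
function runTrial(prediction, pR1, rng = Math.random) {
  const outcome = rng() < pR1 ? 'R1' : 'R2';
  return { outcome, correct: prediction === outcome };
}

// Example: with pR1 = 0.75, a draw below 0.75 yields outcome 'R1'.
const trial = runTrial('R1', 0.75, () => 0.5);
// trial.outcome === 'R1', trial.correct === true
```

In a real browser experiment the prediction would come from button clicks and the feedback would be rendered on screen; only the decision-and-feedback logic is shown here.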
Behavior Research Methods
The Web is a prominent platform for behavioral experiments, for many reasons (relative simplicity, ubiquity, and accessibility, among others). Over the last few years, many behavioral and social scientists have conducted Internet-based experiments using standard web technologies, both in native JavaScript and using research-oriented frameworks. At the same time, vendors of widely used web browsers have been working hard to improve the performance of their software. However, the goals of browser vendors do not always coincide with behavioral researchers' needs. Whereas vendors want high-performance browsers to respond almost instantly and to trade off accuracy for speed, researchers have the opposite trade-off goal, wanting their browser-based experiments to exactly match the experimental design and procedure. In this article, we review and test some of the best practices suggested by web-browser vendors, based on the features provided by new web standards, in order to optimize animations for browser-based behavioral experiments with high-resolution timing requirements. Using specialized hardware, we conducted four studies to determine the accuracy and precision of two different methods. The results using CSS animations in web browsers (Method 1) with GPU acceleration turned off showed biases that depend on the combination of browser and operating system. The results of tests on the latest versions of GPU-accelerated web browsers showed no frame loss in CSS animations. The same happened in many, but not all, of the tests conducted using requestAnimationFrame (Method 2) instead of CSS animations. Unbeknownst to many researchers, vendors of web browsers implement complex technologies that result in reduced quality of timing. 
Therefore, behavioral researchers interested in timing-dependent procedures should be cautious when developing browser-based experiments and should test the accuracy and precision of the whole experimental setup (web application, web browser, operating system, and hardware).
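Testing the whole setup, as the authors recommend, can start with a simple in-browser measurement of frame intervals. The sketch below, with illustrative names and a common (but non-standard) dropped-frame heuristic assumed by this example, separates the analysis from the requestAnimationFrame wiring so the analysis can be checked independently.

```javascript
// Pure helper: given a list of frame timestamps (ms), report the mean
// inter-frame interval and how many intervals overran the frame budget.
function summarizeFrameIntervals(timestamps, nominalMs = 1000 / 60) {
  const intervals = [];
  for (let i = 1; i < timestamps.length; i++) {
    intervals.push(timestamps[i] - timestamps[i - 1]);
  }
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  // Count a frame as "dropped" if its interval ran at least 50% over the
  // nominal refresh period (a common heuristic, not a web standard).
  const dropped = intervals.filter((d) => d > nominalMs * 1.5).length;
  return { mean, dropped };
}

// Browser-only wiring: collect n frame timestamps via
// requestAnimationFrame, then summarize them.
function measureFrames(n, done) {
  const stamps = [];
  function tick(now) {
    stamps.push(now);
    if (stamps.length < n) {
      requestAnimationFrame(tick);
    } else {
      done(summarizeFrameIntervals(stamps));
    }
  }
  requestAnimationFrame(tick);
}
```

On a healthy 60 Hz display the mean interval should be close to 16.7 ms with few or no dropped frames; systematic deviations would indicate the kind of browser/OS timing bias the studies above describe.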
DEWEX: A system for designing and conducting Web-based experiments
Behavior Research Methods, 2007
The Development Environment for Web-Based Experiments (DEWEX) is an environment for generating and conducting Web-based experiments either on the Internet or in the laboratory. It has been developed since 2001 within the hypertext research project "User-Oriented Presentation of Information on the Internet," together with the Chemnitz LogAnalyzer. The Chemnitz LogAnalyzer is a free tool for analyzing log files from Web-based experiments (e.g., those conducted with DEWEX). The "heart" of DEWEX is the CGI program nm.cgi, which interprets the folder and document structure of the environment for conducting Web-based experiments. These experiments can be generated with DEWEX even by those with little expertise in programming or in using the Internet and HTML. With the help of DEWEX, the materials for the experiment (i.e., instructions, questionnaires for participants' data, pretests, text or picture materials for different experimental conditions, posttests, and additional questionnaires) can be created, and the order of the presented materials and the assignment of participants to different experimental conditions can be defined. Experimental designs with one or more factors, whether within subjects, between subjects, or mixed, are possible. Those experiments can then be made available on the Web.

DEWEX is a server-based environment for developing Web-based experiments. It provides many features for creating and running complex experimental designs on a local server. It is freeware and allows for both using default features, for which only text input is necessary, and easy configurations that can be set up by the experimenter. The tool also provides log files on the local server that can be interpreted and analyzed very easily. As an illustration of how DEWEX can be used, a recent study is presented that demonstrates the system's most important features.
This study investigated learning from multiple hypertext sources and shows the influences of task, source of information, and hypertext presentation format on the construction of mental representations of a hypertext about a historical event.
Controlled experiments on the web: survey and practical guide
Data Mining and Knowledge Discovery, 2009
The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person's Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.
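The hash-based assignment the survey evaluates can be illustrated with a short sketch. This is not the systems described in the paper; the hash choice (FNV-1a) and all names are assumptions made for the example. Salting the hash with an experiment id keeps assignments independent across concurrent experiments, one of the subtleties the authors note is often underestimated.

```javascript
// Simple 32-bit FNV-1a hash of a string (deterministic across runs).
// Illustrative choice; production systems use stronger hashes.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// Deterministically assign a user to one of numVariants buckets,
// salted by experiment id so experiments don't correlate.
function assignVariant(userId, experimentId, numVariants) {
  return fnv1a(`${experimentId}:${userId}`) % numVariants;
}
```

Because the assignment is a pure function of user and experiment ids, a returning user always sees the same variant without any server-side state.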
psiTurk: An open-source framework for conducting replicable behavioral experiments online
Behavior Research Methods, 2015
Online data collection has begun to revolutionize the behavioral sciences. However, conducting carefully controlled behavioral experiments online introduces a number of new technical and scientific challenges. The project described in this paper, psiTurk, is an open-source platform that helps researchers develop experiment designs that can be conducted over the Internet. The tool primarily interfaces with Amazon's Mechanical Turk, a popular crowd-sourcing labor market. This paper describes the basic architecture of the system and introduces new users to the overall goals. psiTurk aims to reduce the technical hurdles for researchers developing online experiments while improving the transparency and collaborative nature of the behavioral sciences. psiTurk was primarily written by the listed authors at the time the first draft of this paper was constructed (version 2.0.0). However, many people continually contribute to psiTurk's evolving code base and documentation via GitHub. Illustrations by Kylan Larson (http://kylanlarson.com).
Internet experiments: methods, guidelines, metadata
The Internet experiment is now a well-established and widely used method. The present paper describes guidelines for the proper conduct of Internet experiments, e.g. handling of dropout, unobtrusive naming of materials, and pre-testing. Several methods are presented that further increase the quality of Internet experiments and help to avoid frequent errors. These methods include the "seriousness check", "warm-up," "high hurdle," and "multiple site entry" techniques, control of multiple submissions, and control of motivational confounding. Finally, metadata from sites like WEXTOR (http://wextor.org) and the web experiment list (http://wexlist.net/) are reported that show the current state of Internet-based research in terms of the distribution of fields, topics, and research designs used.
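Two of the data-quality techniques described above can be sketched as a simple post-hoc filter. The field names (`seriousness`, `participantId`) and answer codes are assumptions made for this illustration, not the paper's terminology or implementation.

```javascript
// Sketch: apply a "seriousness check" filter and a basic control for
// multiple submissions (keep only the first record per participant id).
// Field names and answer codes are illustrative assumptions.
function applyQualityChecks(records) {
  const seen = new Set();
  const kept = [];
  for (const r of records) {
    if (r.seriousness !== 'serious') continue; // seriousness check
    if (seen.has(r.participantId)) continue;   // multiple-submission control
    seen.add(r.participantId);
    kept.push(r);
  }
  return kept;
}
```

In practice such checks are combined with the other techniques listed (warm-up, high hurdle, multiple site entry) rather than used in isolation.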