Introduction
Amazon’s Mechanical Turk (“MTurk”), an online labour market, has become popular among social scientists as a source of survey and experimental data (Paolacci & Chandler, 2014). A Google Scholar search by Chandler & Shapiro (2016) finds that approximately 15,000 papers containing the phrase “Mechanical Turk” were published between 2006 and 2014, including hundreds of papers published in top-ranked social science journals using data collected from MTurk. More recently, a Google Scholar search of the phrase “Mechanical Turk” with a search range from 2015 to 2019 returned over 32,000 results.[1] MTurk is not unique in its offering; numerous private companies offer researchers pre-screened research participants. However, these services tend to come at a higher cost and give researchers less control over participant screening procedures (Wessling et al., 2017). Furthermore, MTurk provides a large, accessible participant pool that is at least as representative as traditional participant pools (Palan & Schitter, 2018; Paolacci & Chandler, 2014). However, some studies have found differences between MTurk participants and traditional participants (e.g. Brink, Lee, et al., 2019; Goodman et al., 2013).
MTurk has been used across different accounting research fields including, for example, financial accounting studies that examine investors’ reliance on non-financial information disclosures (Dong, 2017) and the impact of corporate social responsibility reports on investors’ judgements (Elliott et al., 2017); management accounting studies investigating the effect of performance reporting frequency on employee performance (Hecht et al., 2019) and motivations for people to report honestly (Murphy et al., 2019); auditing studies focusing on manager responses to internal audit (Brown & Fanning, 2019) and standards of care required by jurors when assessing auditor negligence (Maksymov & Nelson, 2017); and taxation studies addressing decision makers’ willingness to evade taxes (Brink & White, 2015) and the effect of consumer-directed tax credits on motivating purchasing behaviour (Stinson et al., 2018).
Given the growing popularity of MTurk, the first objective of this paper is to provide a timely review of the use of MTurk in high-calibre empirical accounting research. MTurk is constantly evolving as a data collection method (Hunt & Scheetz, 2019), creating a need for regular reviews of this research data source. The second objective of the paper is to examine MTurk’s adoption and suitability for survey-based research in accounting. In general, survey research provides researchers with the ability to tap into relatively complex, multi-faceted phenomena as they occur in their natural setting, while at the same time maintaining the degree of standardisation that is necessary for quantitative analysis and theory testing (Speklé & Widener, 2018). In addition, survey methods are suitable for mapping current practices in the field, which can provide insights regarding interesting research topics that have yet to be studied (Speklé & Widener, 2018). While survey research is considered the most heavily criticised method in the management accounting field (Young, 1996), the key issue has been how surveys are deployed rather than the research method itself (Van der Stede et al., 2005). Nevertheless, survey research will remain a commonly applied research method (Van der Stede, 2014) because even critics of the method recognise the power that collective opinions have on the behaviour and functioning of individuals, organisations, and society (Van der Stede et al., 2005). On this basis, it is useful to examine the opportunities and challenges that MTurk presents for survey researchers and consider the directions, if any, that survey research using MTurk data is likely to take in the future.
The next section of this paper briefly overviews how MTurk works. It is not the primary purpose of this paper to provide detailed technical guidance on how to use MTurk, but useful information for survey researchers is provided throughout. The overview of MTurk is followed by an analysis of the use of MTurk as a data source in leading accounting journals by journal and year of publication, research methods, purpose of the MTurk data (including whether MTurk is used as a main or supplemental data source), and type of research participant.
Findings show that in addition to a noticeable increase in publications using MTurk, experimental research is the dominant method in these publications, with survey research having only a limited presence in four mixed-methodology papers. Furthermore, MTurk is often employed as an additional data source for supplemental empirical tests and for out-of-sample testing of research instruments rather than for main sample testing. The paper also assesses the suitability of MTurk for survey research and discusses operational details relating to validity concerns for survey researchers. In general, we find that the validity concerns for MTurk data are similar to those for more traditional data sources, although there is an increased risk of “survey impostors”, i.e. survey participants pretending to be someone else.
The paper concludes with a discussion on what the future holds for accounting survey research using MTurk. In the long term, with expected improvements in, and expansion of, online labour markets, this method of data collection is likely to become a mainstream tool for survey researchers. However, there are currently limitations around participant screening and the availability of specialist participant pools. Therefore, MTurk is more likely to be used in the short to medium term as a quick, cost-effective tool for out-of-sample testing of surveys (including pre-testing and pilot-testing) where the final data will be collected using more traditional methods. Furthermore, given the current debate in the management literature on the importance of replicability and reproducibility for credibility of research (e.g. Aguinis et al., 2017; Cuervo-Cazurra et al., 2016), it is foreseeable that MTurk will also become popular with survey researchers as an additional data source for supplemental and/or replication testing. While the ability to replicate a study using MTurk data in a relatively short period of time is attractive for researchers to increase the credibility of their research, we caution that it may have unexpected consequences relating to the willingness of researchers to share ideas in early-stage papers at conferences. Ideas could be empirically tested in a short period of time by other researchers using MTurk data, and this could potentially reduce the contribution of the original paper before its publication.
Overview of MTurk
Amazon Mechanical Turk (“MTurk”) is a crowdsourcing marketplace that makes it easier for individuals and businesses to outsource their processes and jobs to a distributed workforce who can perform these tasks virtually (Amazon, n.d.). These processes and jobs are known on MTurk as human intelligence tasks (HITs), which are broadly defined as tasks that are difficult or impossible for computers to perform (Hunt & Scheetz, 2019). Employers (called requesters) recruit employees (called workers) to complete HITs for remuneration (called a reward) (Hunt & Scheetz, 2019). MTurk is open to both companies and individuals to post a diverse variety of tasks for workers to perform, such as verifying search results for companies like Google, analysing the content of print advertisements, transcribing audio, and taking surveys (Hunt & Scheetz, 2019).
MTurk has a vast range of uses but was never designed specifically for academic research. Fortunately, third-party software programs are available that use MTurk to complete HITs but offer greater functionality, particularly for academic researchers. One example of a third-party intermediary useful for academic researchers is TurkPrime, a research tool designed to improve the quality of the crowdsourcing data collection process and optimise MTurk for researchers (Litman et al., 2017). TurkPrime’s core features are currently available at no additional cost to academic researchers (although other additional features attract fees). For the remainder of this document, unless specifically mentioned otherwise, the term MTurk refers to the use of MTurk either on its own or through the TurkPrime platform. The next section examines the use of MTurk in top-ranked accounting publications.
Growth of MTurk in Accounting Research
To address the first objective of this paper and assess the current popularity of MTurk in high-calibre accounting research, we review the accounting journals ranked 4*, 4, and 3 in the Chartered Association of Business Schools 2018 Academic Journal Guide (“The ABS Rankings”),[2] up to the end of 2019 (including “online early”), for the presence of MTurk as a data source (please refer to Appendix A for a listing of journals). We use the advanced search function in Google Scholar to filter search results by (i) the journal title being reviewed and (ii) any one of the keywords “MTurk”, “Turk”, or “Turkprime”. For each journal, all articles appearing in the initial search results are subjected to an initial screening to assess their suitability. Several articles are excluded on the basis that they reference but do not use MTurk, are general methodological papers, or are MTurk meta-studies (although any relevant findings from these papers are discussed elsewhere in the paper). Following this initial screening, all remaining articles are reviewed to determine (i) the research methodology/methodologies, (ii) the purpose of MTurk in the research (including whether MTurk is used as a main or supplemental data source), and (iii) the MTurk participant characteristics.
Table 1 summarises the articles using MTurk data by leading accounting journal and year of publication. The findings show that the frequency of use of MTurk-sourced data has increased rapidly since 2012 (the earliest article in the sample). By 2019, the number of MTurk studies published in that year had risen to 13, with a further 14 articles online early. To place these figures in context, the journal with the most MTurk studies is The Accounting Review, which accounts for 40% of the articles in the sample. The Accounting Review typically has six issues per year containing 14 original research articles per issue. Therefore, the five MTurk articles published in 2019 account for approximately 6% of all articles published in The Accounting Review in 2019.
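As a simple worked check of that figure (assuming the six issues of roughly 14 articles each stated above):

\[
\frac{5}{6 \times 14} = \frac{5}{84} \approx 5.95\% \approx 6\%.
\]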
The volume of publications reveals only part of the story; it is also important to examine the research methods used, and Table 2 summarises these findings. Of the 55 papers, 49 use experiments,[3] either as the sole research method (43 papers) or as part of mixed-methods research (six papers). Four papers use archival methods (Hsieh et al., 2020; Jiang et al., 2016; Jung et al., 2019; Madsen & McMullin, 2019), and two papers use mixed methods: a combination of archival, interview, and survey methods (Cao et al., 2018) and a combination of archival and survey methods (Blankespoor et al., 2017). In total, survey methods appear in just four papers, all involving mixed research methods (Blankespoor et al., 2017; Cao et al., 2018; Carcello et al., 2018; Kadous et al., 2019).
The relatively high volume of experimental papers is not surprising. Paolacci et al. (2010) identified MTurk early on as an increasingly popular source of experimental data for social scientists because the MTurk population is large, readily accessible, and, in relation to the U.S., at least as representative of the U.S. population as more traditional subject pools (e.g., university students). However, the availability of a broad, general population does not necessarily mean that this is the population of interest to accounting researchers. Therefore, we conduct further analysis of the articles using MTurk. Table 3 summarises the type of participants recruited in these studies.
Nearly half of the participant groups fall under the category of “non-specific participant”, where researchers did not require participants to meet any specific technical qualification criteria. This is consistent with the conclusions in other studies: Hunt & Scheetz (2019) believe crowdsourcing platforms are best suited for obtaining average individuals within society; Farrell et al. (2017) conclude that online workers can be suitable proxies in accounting research that investigates the decisions of non-experts; and Buchheit et al. (2019) find that online workers are good research participants when fluid intelligence (defined in their article as general reasoning and problem-solving ability) is needed for reasonably complex experimental tasks in which incoming knowledge is not critical.

However, researchers also raise potentially significant issues with the general MTurk population. Buchheit et al. (2018) observe that several studies show MTurk participants, compared with the general population, are younger, more computer literate, and more likely to be single, but less likely to be homeowners or religiously affiliated. Paolacci & Chandler (2014) also state that workers tend to be younger (about 30 years old), overeducated, underemployed, less religious, and more liberal than the general population. Brink, Lee, et al. (2019) further add that the MTurk population is more willing to justify unethical behaviour, is more trusting of others, places lower importance on hard work, and has lower capitalist values. Goodman et al. (2013) find that MTurk samples differ from traditional samples on several dimensions, such as personality measures and attention span. They also find that MTurk participants are more likely to use the internet than traditional participants, are less extraverted, and have lower self-esteem. Furthermore, in relation to attention span, they find that MTurk participants perform significantly worse when surveys are long (greater than 16 minutes). This contrasts with previous research finding that MTurk participants are as attentive as other participants when surveys are approximately five minutes in duration (Paolacci et al., 2010). In summary, while the above issues may not affect the conclusions drawn in a research project, they are important considerations in the research design phase.[4]
Returning to Table 3, non-professional investors are the second most popular type of research participant used in the MTurk studies examined. However, relatively few publications provide detailed insight into how “non-professional investors” are defined or the screening criteria used to recruit them on MTurk. Tang & Venkataraman (2018, p. 339) is one of the few exceptions:
“To ensure that participants are reasonable proxies for non-professional investors and possess the knowledge required to complete our experimental task, we use two criteria to screen participants. First, participants must have taken at least two courses in accounting or finance to ensure that they understand the financial context of our study. Second, participants should, at a minimum, understand the difference between quarterly earnings guidance and quarterly earnings reports. To ensure that our participants meet this requirement, we screen them by testing their knowledge on whether quarterly earnings reporting, and earnings guidance, are mandatory or voluntary disclosures. Only participants who correctly answer these questions and meet the accounting/finance course requirements proceed to our experiment.”
Finally, the “other” category in Table 3 includes a diverse set of workers including, for example, experienced chess players (Bentley, 2019), those having business experience with internal auditors (Carcello et al., 2018), and those possessing both crowdfunding and video game experience (Madsen & McMullin, 2019).
A final analysis of the 55 papers examines whether researchers use MTurk as a main or supplemental source of data, and Table 4 summarises these findings. In 37 papers (over 65%), MTurk is used as a main data source. In ten papers (over 18%), MTurk is used as a second data source for out-of-sample testing of research instruments. In the remaining eight papers (over 14%), MTurk data is used as a second data source for supplemental empirical tests.
Finally, only two journals in the sample based outside North America (Management Accounting Research and Accounting and Business Research) published MTurk papers in the 2012-2019 period, with one paper each. This suggests that MTurk may be a more acceptable data collection tool for North American journals. Alternatively, it may reflect that experimental research, the methodology used in the majority of MTurk studies reviewed here, is generally more established in North American journals. It may also reflect that the majority of MTurk workers are based in the United States. An analysis of worker demographics over the 2019 calendar year[5] shows that US workers accounted for between 68% and 76% of the MTurk worker population, with India second at between 16% and 19%.
In summary, the use of MTurk in high-quality accounting publications is increasing, particularly for experimental research, with a relatively low presence in survey research. Obviously, the low presence of MTurk data in survey studies published in high-quality accounting journals does not imply that MTurk is unsuitable for survey research. However, the finding raises the question of whether issues exist with MTurk that may hinder its adoption by survey researchers. The next section addresses this question by discussing the usefulness of MTurk for empirical survey research. Specifically, the main operational details of the platform are considered with regards to the key validity concerns of survey researchers, and potential roadblocks are identified for survey researchers using MTurk.
Assessing MTurk as a Data Source for Survey Research
“Mail surveys are seductive in their apparent simplicity—type up some questions, reproduce them, address them to respondents, wait for returns to come in, and then analyze the answers” (Mangione, 1995, p. 2-3, as cited by Van der Stede et al., 2005). Data collection through an online labour market is similarly attractive in terms of speed and ease of access to survey participants: set up a HIT containing questions, make the HIT available to workers, collect responses and pay workers for the HIT, and analyse the results. In practice, research data collection will be more complicated. Most researchers will add screening procedures, each increasing the complexity of the overall process. Farrell et al. (2017) emphasise the need to reduce the risk of “impostors”, i.e. workers pretending to be who they are not, and “scoundrels”, i.e. workers who avoid effort and provide false information. Smith et al. (2016) further summarise that issues relating to sample integrity and data quality are the two main concerns of using online panels (groups of research participants). Furthermore, the authors identify that threats to data quality are created by two distinct but potentially overlapping response styles: “speeders”, where a respondent does not thoroughly read the questions and uses minimal cognitive effort to provide answers that satisfy the question, and “cheaters”, where a respondent intentionally answers survey questions dishonestly and in a fashion that maximises their opportunity for participation and subsequent rewards.
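To make that basic workflow concrete, the sketch below shows the main steps expressed through Amazon's MTurk API using the Python boto3 library: creating a HIT that points to an externally hosted survey, then collecting and approving the submitted assignments. It is a minimal illustration under assumed settings; the title, reward, worker numbers, and survey URL are placeholders, and the screening and validity procedures discussed in the following sections would sit around these calls.

```python
# Minimal sketch of the HIT lifecycle (assumes AWS credentials are configured
# and the survey itself is hosted on an external platform, e.g. Qualtrics).
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# The ExternalQuestion XML points workers to a hypothetical survey URL.
external_question = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

# 1. Set up a HIT containing the survey and make it available to workers.
hit = mturk.create_hit(
    Title="Short accounting survey (approx. 10 minutes)",
    Description="Answer questions about financial reporting judgements.",
    Reward="1.50",                      # reward in USD, passed as a string
    MaxAssignments=100,                 # number of workers sought
    LifetimeInSeconds=7 * 24 * 3600,    # how long the HIT stays visible
    AssignmentDurationInSeconds=3600,   # time allowed per worker
    Question=external_question,
)

# 2. Collect responses and pay workers who submitted the HIT.
assignments = mturk.list_assignments_for_hit(
    HITId=hit["HIT"]["HITId"], AssignmentStatuses=["Submitted"]
)
for a in assignments["Assignments"]:
    mturk.approve_assignment(AssignmentId=a["AssignmentId"])
```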
To respond to these validity threats, and re-narrowing the focus exclusively to survey research, Farrell et al. (2017) recommend that detailed screening of survey participants be carried out ex-ante (before issuing the survey), in-survey (while the survey is in progress), and ex-post (when data collection is complete). We address each procedure in the following sections, discussing issues specific to MTurk in greater detail than issues common to all types of surveys. Appendix C summarises the main steps required to use MTurk for survey research.[6]
Ex-ante Screening
Explicit steps must be taken to ensure that participants have the relevant knowledge or experience to participate in a study (Hunt & Scheetz, 2019). In general, the requirement for extensive screening procedures arises because researchers must rely on workers self-selecting into HITs based on workers’ own assessments of whether they meet the HIT criteria. This can be especially problematic for surveys, where ineligible workers might be enticed by the payments on offer to complete survey HITs. While third-party providers offer ex-ante screening procedures at an additional cost (e.g. Qualtrics Panel, SurveyMonkey Audience, TurkPrime Panel), they do not provide researchers with detailed insight into their screening procedures, which could increase validity concerns (Wessling et al., 2017). Wessling et al. (2017) maintain that while these commercial companies claim confidence in their pre-screening, they offer little external verification, and they encourage researchers who use such services to monitor and validate the quality of the screening.[7] Typically, ex-ante screening involves the inclusion of screening questions either at the beginning of the survey or in a separate survey. For example, Hunt & Scheetz (2019) include eight unpaid screening questions at the beginning of their survey instrument and terminate participation for workers not answering in the specified manner. In their experience, Institutional Review Boards (IRBs)[8] will allow research designs using unpaid screening questions if workers are informed in the HIT instructions that payment depends on successfully answering the screening questions, and that they can return the HIT with no negative impact on their MTurk rating if they do not qualify. Hunt & Scheetz also state that potential worker aversion to unpaid screens has never materially affected either author’s ability to obtain responses.[9]
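Survey platforms usually implement this gating through built-in branch logic, but the underlying rule can be expressed as a simple check, as in the hypothetical sketch below; the screening fields and required answers echo the Tang & Venkataraman (2018) criteria quoted earlier and are illustrative only, not a prescribed screen.

```python
# Illustrative ex-ante screening gate; the answer fields are hypothetical.
def passes_screen(answers: dict) -> bool:
    """Return True only if the worker meets all ex-ante screening criteria."""
    took_enough_courses = answers.get("accounting_finance_courses", 0) >= 2
    knows_guidance_is_voluntary = answers.get("earnings_guidance") == "voluntary"
    knows_reporting_is_mandatory = answers.get("quarterly_reporting") == "mandatory"
    return took_enough_courses and knows_guidance_is_voluntary and knows_reporting_is_mandatory

# Workers failing the screen would be shown an end-of-survey message asking them
# to return the HIT (no payment, no effect on their approval rating).
```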
The two-survey approach asks workers to identify their characteristics in a first survey, when there is no motive to deceive, and then limits the second survey to those workers who have passed the initial screening (Wessling et al., 2017). Buchheit et al. (2018) suggest that if researchers want a particular kind of expertise, they can ask pointed questions that only experts would be able to answer; in this manner, the risk of falsely claimed expertise is mitigated. Buchheit et al. also suggest that the screening questions should offer a wide range of specific response options, of which only some (or one) meet the participation requirements. This reduces demand effects by making the ‘‘right’’ choice less transparent and less subject to guessing (Buchheit et al., 2018). In the second stage, researchers can then use an invitation-only HIT (e.g., through TurkPrime) to target those participants whose answers in the first stage meet the screening criteria (Buchheit et al., 2018). As well as creating a longer ‘‘break’’ between the screening questions and the primary task, this approach lowers the number of questions required in the second stage, thus reducing the time needed to complete the primary instrument and lowering the associated risks of subject distraction or fatigue (Buchheit et al., 2018). However, Wessling et al. (2017) suggest that the screening questions from the first survey be re-asked in the second survey. Their rationale is that it is important to control for possible alternative explanations for inconsistent responses between the two stages, such as test/retest reliability error and a change in status or character between the two surveys.
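One way to operationalise the invitation-only second stage directly on MTurk is with a custom qualification: workers who pass the first-stage screener are granted the qualification, and the main survey HIT is visible only to holders of it. The boto3 sketch below is one possible implementation rather than a prescribed procedure; the qualification name and worker IDs are placeholders, and TurkPrime offers comparable point-and-click functionality.

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Create a custom qualification to mark workers who passed the screener.
qual = mturk.create_qualification_type(
    Name="Passed non-professional investor screener",   # placeholder name
    Description="Granted to workers meeting the first-stage screening criteria.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# Grant the qualification to workers who passed stage one (IDs are placeholders).
for worker_id in ["A1EXAMPLEWORKER", "A2EXAMPLEWORKER"]:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=True,
    )

# Restrict the main survey HIT to qualified workers. A second, "already
# participated" qualification with a DoesNotExist comparator could be added
# the same way to exclude workers from earlier, related studies.
requirements = [{
    "QualificationTypeId": qual_id,
    "Comparator": "EqualTo",
    "IntegerValues": [1],
    "ActionsGuarded": "DiscoverPreviewAndAccept",
}]
# mturk.create_hit(..., QualificationRequirements=requirements)
```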
Palan & Schitter (2018) highlight an additional screening risk whereby a population of professional survey-takers may be evolving on crowdsourcing platforms like MTurk. This could lead to a loss of participant naivety (Palan & Schitter, 2018). The effect of online subjects participating in potentially hundreds of studies has yet to be quantified, but it has the potential to bias results in studies that are susceptible to practice effects (Chandler et al., 2014). Chandler et al. (2014) recommend that researchers concerned about participant naivety should, at a minimum, attempt to uncover whether participants have taken part in similar studies previously. For survey studies specifically, this would involve additional pre-screening questions. Wessling et al. (2017) provide a longer-term recommendation that researchers develop their own ongoing MTurk participant panels, collecting information over time that can be used to classify and build knowledge about respondents.
In-survey Validity Checks
Common MTurk in-survey checks include reverse-coded questions, instructional manipulation checks, and average completion time. None of these checks are unique to MTurk. Reverse-coded questions are common in all types of surveys as a method of detecting acquiescence bias.[10] Instructional Manipulation Checks (IMCs) are also quite common in surveys.[11] However, Hunt & Scheetz (2019) raise another participant naivety issue whereby workers seem to have become aware of these types of checks and now have higher pass rates than traditional study participants. This means researchers should take additional care in interpreting the results of IMCs and make efforts to avoid using more typical forms of IMCs in their survey. However, Peer et al. (2014) conclude that attention-check questions are unnecessary if high-reputation workers are used.
Finally, in relation to average completion time, Elliott et al. (2018) and Brasel et al. (2016) exclude respondents who complete the required task in under a certain amount of time. Some dedicated online survey platforms can capture the time spent on each screen of the survey or even prohibit participants from progressing until a certain amount of time has passed (Hunt & Scheetz, 2019). In addition, Litman et al. (2017) recommend monitoring the HIT dropout rate and bounce rate,[12] as these can be important indicators that something may be wrong with the survey instrument.
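As an illustration of how these in-survey checks translate into screening flags once responses are downloaded, the pandas sketch below flags respondents who fail an instructional manipulation check, give identical extreme answers to a reverse-coded item pair, or finish faster than a chosen threshold. The column names and the 180-second cut-off are hypothetical; appropriate thresholds depend on the instrument and are best set from pilot data.

```python
import pandas as pd

# Hypothetical response file: one row per respondent.
df = pd.read_csv("survey_responses.csv")

# Instructional manipulation check: "select 'Strongly disagree' for this item".
df["fail_imc"] = df["imc_item"] != "Strongly disagree"

# Reverse-coded check: the same extreme rating on an item and its reversed
# twin (e.g. 5 and 5 on a 1-5 scale) suggests acquiescent or inattentive responding.
df["fail_reverse"] = (df["trust_item"] == df["trust_item_reversed"]) & df["trust_item"].isin([1, 5])

# Speeders: completion time below a threshold chosen from pilot testing.
df["speeder"] = df["completion_seconds"] < 180

flagged = df[df[["fail_imc", "fail_reverse", "speeder"]].any(axis=1)]
print(f"{len(flagged)} of {len(df)} respondents flagged for review")
```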
Ex-post Validation Considerations
In general, ex-post data examination for MTurk surveys and traditional surveys is similar and, according to Hair et al. (2017), can be organised into four separate assessments: missing data, suspicious response patterns, outliers, and data distributions.[13]
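A brief pandas sketch of the four assessments, again using hypothetical file and column names: missing-data counts, a simple straight-lining flag for suspicious response patterns, z-score-based outlier detection, and item distribution statistics.

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")                       # hypothetical file
likert_items = [c for c in df.columns if c.startswith("q")]    # hypothetical naming

# 1. Missing data: how much is missing, and for which items.
print(df[likert_items].isna().sum())

# 2. Suspicious response patterns: zero variance across items (straight-lining).
df["straight_liner"] = df[likert_items].std(axis=1) == 0

# 3. Outliers: standardised total scores more than three SDs from the mean.
total = df[likert_items].sum(axis=1)
df["outlier"] = ((total - total.mean()) / total.std()).abs() > 3

# 4. Data distributions: skewness and kurtosis of each item.
print(df[likert_items].agg(["skew", "kurt"]))
```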
However, one ongoing issue relating to online panels is the use of Internet Protocol (IP) addresses as a means of identifying participants (e.g. to check that their location corresponds to the research participant requirements). Dennis et al. (2020) discuss four issues with using IP addresses as a proxy for a person’s identity:
- The dynamic assignment of IP addresses by Internet Service Providers (ISPs) often allows individuals to obtain new IP addresses on demand.
- IP addresses identify machines, not individuals; therefore, an individual can use multiple unique machines to obtain multiple unique IP addresses at the same time.
- Individuals can also use virtual machines on stand-alone servers (e.g., Virtual Private Servers (VPSs) or Virtual Private Networks (VPNs)) to conceal the IP address of the machine they are working on.
- There is no official database that links IP addresses to specific locations.
In summary, the above issues can result in a single worker completing the same HIT multiple times and/or completing a HIT for which they are unqualified, e.g., inappropriately using a VPS in the US to appear to be a US worker. To address these issues, Dennis et al. (2020) recommend that researchers supplement cutting-edge IP screening procedures with an ex-post analysis of open-ended attention-check questions.[14] This recommendation is based on Dennis et al.'s own empirical analyses, in which they find that analysis of open-ended responses is highly effective in uncovering invalid responses.
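The sketch below illustrates the kind of ex-post flags these concerns point to: repeated IP addresses (the same address submitting multiple responses) and open-ended answers that are suspiciously short or duplicated across respondents. The file and column names are hypothetical, and such checks supplement, rather than replace, the IP screening tools Dennis et al. (2020) discuss.

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")   # hypothetical file and column names

# Repeated IP addresses: the same address submitting more than one response.
df["duplicate_ip"] = df.duplicated(subset="ip_address", keep=False)

# Open-ended quality: very short answers, or answers copied verbatim by more
# than one respondent (a common sign of bot or response-farm activity).
text = df["open_response"].fillna("").str.strip()
df["short_answer"] = text.str.split().str.len() < 5
df["copied_answer"] = text.duplicated(keep=False) & (text != "")

suspect = df[df[["duplicate_ip", "short_answer", "copied_answer"]].any(axis=1)]
print(f"{len(suspect)} responses flagged for manual review")
```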
Finally, if a worker passes the initial pre-screening tests and completes the HIT but fails in-survey or post-survey screening, there remains the question of whether they should be paid.[15] While online participants have several motives for participating in studies, incentives are the most commonly cited (followed by curiosity, enjoyment, and participants wanting to have their views heard) (Smith et al., 2016). Buchheit et al. (2018) find no consensus in the literature regarding the compensation of participants who fail screening tests; they observe researchers who provide full payment, partial payment, or no payment to such participants. Brasel et al. (2016), for example, reject payment for participants who complete the study but do not correctly answer at least 90 percent of the comprehension checks included throughout the research instrument. Furthermore, Brink, Eaton, et al. (2019) find that informing participants upfront in the HIT description about the monitoring of responses and application of penalties increases the level of honest reporting in their study.
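For requesters who adopt a rule such as the 90 per cent comprehension-check threshold in Brasel et al. (2016), the payment decision can be scripted once responses are scored. The boto3 sketch below shows one possible approach; the assignment IDs and scores are placeholders, and the choice to reject rather than pay in full or in part is the researcher's own policy, ideally disclosed in the HIT description.

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# scores: assignment ID -> proportion of comprehension checks answered correctly
# (computed ex-post from the survey data; the values below are placeholders).
scores = {"3EXAMPLEASSIGNMENTID1": 0.95, "3EXAMPLEASSIGNMENTID2": 0.60}

for assignment_id, score in scores.items():
    if score >= 0.90:
        mturk.approve_assignment(AssignmentId=assignment_id)
    else:
        mturk.reject_assignment(
            AssignmentId=assignment_id,
            RequesterFeedback="Fewer than 90% of comprehension checks were "
                              "answered correctly, as stated in the HIT description.",
        )
```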
Concluding Thoughts
The use of MTurk in leading accounting journals is gathering pace, with nearly the same number of articles published/accepted in 2019 in ABS 4*/4/3 journals as the total articles published in the preceding seven years. Experimental research is used in all but six of the 55 articles reviewed, and all but two articles are published in North American journals. Given the lack of research using MTurk in journals based outside North America, there is a need for further research to examine its global acceptability as a data collection tool among academic researchers.
Van der Stede et al. (2005) observe that the quality of survey data is as weak as the weakest link in the survey data collection process. Our paper has documented guidance on participant selection and screening issues to mitigate the potential for MTurk to be the weakest link in a study. MTurk’s utility depends on using best practices and carefully considering the issues raised by MTurk’s many evaluators (Buhrmester et al., 2018). Regarding the potential of MTurk data for survey research, one factor that may limit its usefulness in the short term is that MTurk has been most frequently used to date to recruit non-experts. This raises a concern that the ease of accessing certain research participants may drive the type of research questions addressed. For example, in their overview of experimental audit research, Simnett & Trotman (2018) foresee that as audit practitioners become more difficult to access for experiments, audit researchers will move to topics that can use more easily accessible surrogates for auditing (including online participants). The authors see this research as generally being less informative and perceive that it will negatively affect the type of audit research conducted in the future. Also, whether sufficient numbers of more niche “expert” participants are available on the platform is not clear and even if they are, the costs of screening for them may be prohibitive. However, if the use of online labour markets continues to grow, so too will the number and variety of competitor platforms to MTurk. Like any software-adoption decision involving competing products, it may become the norm that researchers carry out their own assessment of the merits of various platforms (against each other and against more traditional sources) to determine the data source that best meets their needs and the resources they have available. Therefore, in the long term, it is still expected that MTurk and other similar platforms are likely to become more mainstream data sources for researchers.
In the short to medium term, it is anticipated that MTurk, as currently available, is more likely to be used as a quick, cost-effective tool for out-of-sample testing of surveys (including pre-testing and pilot-testing) where the final data will be collected using more traditional methods. Given that data collection for an entire study is possible in a matter of hours (Goodman et al., 2013), MTurk might also be suitable as a tool for undergraduate or Masters’ dissertations where project durations are shorter, research objectives are narrower, and contributions more limited. It is also likely that MTurk will become popular as an additional data source for supplemental and/or replication tests. There is a growing debate in the management literature on the importance of reproducibility and replicability for credibility of research (Aguinis et al., 2017; Cuervo-Cazurra et al., 2016). A recent special issue in Strategic Management Journal devoted to replications points to the growing acceptability of replication studies in highly ranked journals (Ethiraj et al., 2016). While the ability to replicate a study in a relatively short period of time using MTurk data is a welcome development for many researchers to increase the credibility of research findings, the potential for another researcher in the area to quickly build upon an early-stage paper increases encroachment risk. This may have implications for researchers’ willingness to share ideas in an early-stage paper at conferences given the possibility that ideas could be replicated or built upon, and data collection completed in a period of weeks, thereby potentially reducing the contribution of the original study before its publication.
Overall, MTurk has much potential for empirical survey accounting research. However, researchers need to proceed with caution and demonstrate rigour in considering additional threats to validity that can arise from selection and screening of participants. This paper has provided an overview of key validity concerns which will be useful to survey researchers in this regard. Undoubtedly, online labour platforms will continue to grow in use by empirical accounting researchers.