Articles

An honest review: Two researchers share their thoughts on running a complex longitudinal study via Prolific

Hannah Lettmann
September 24, 2018

Earlier this month, Katia and Hannah had the opportunity to interview two researchers, Daniel Fellman and Liisa Ritakallio, who ran a large-scale longitudinal online experiment on Prolific.

Daniel, a PhD student and Liisa, a research assistant, are both part of the BrainTrain Research Group at Åbo Akademi University in Turku, Finland. Read on to find out about their research, what they think of Prolific, and how they feel about the Open Science Movement!

Hannah: In Spring 2018, you ran a longitudinal study on Prolific. Can you briefly describe what your study was about?

Daniel: Our research goal was to examine the effect of different strategies when training people’s working memory. For those not familiar with the subject, working memory refers to the ability to temporarily maintain and manipulate incoming information in a readily accessible form, thus serving as the mental workspace for ongoing cognitive activities. The strategies we looked at were both self-generated and externally provided. How, when, and why people use such strategies is an important topic within cognitive psychology, and we do not know much about it (yet).

To answer this research question, we conducted a longitudinal study over 6 weeks, which included a pretest assessment in week 1, an intervention phase during weeks 2-5, and a posttest assessment in week 6. All in all, we had 271 participants who completed the whole study.

Hannah: How did you design your study specifically to be conducted online? What were the challenges?

Daniel: We have run many lab-based cognitive studies before, but we decided to run this one online using our in-house programmable testing platform. It is simply not possible to get such a large sample size this quickly in a lab. Prolific came up as a great option for recruiting participants, because the data was reported to be reliable and there seemed to be no bots. 🙂

Liisa: However, there were some issues related to our longitudinal design. We had to take into account problems with participants’ internet connections and potential issues with their home computers. So we tried to create a smooth study flow for participants, because we could not guide them in person. To ensure this, we ran a pilot study with a small number of participants before the actual data collection started. We also tried to reply to participants’ questions quickly and carefully.

Hannah: What did you find in this study? Where is your research going next?

Daniel: We are currently still processing the data, so there's not much to share yet. However, we are convinced that internet-based cognitive training studies are a feasible setup as they allow for much larger samples (i.e., more statistical power) as compared with lab-based studies. This is why we plan to run more online experiments like this one in the future.

Hannah: Had you used Prolific for your research before? How did you hear about us?

Daniel: We first heard about Prolific from our professor, who did some research on crowdsourcing sites. Another member of our research group had also used Prolific before. And indeed, we discovered some great benefits of the site, such as the user-friendly interface, which makes it easy to manage several experiments. There are also some features that clearly reduce the workload for us researchers, such as the payment procedure and the email system, which allows you to send one message to multiple participants at once. Lastly, despite our somewhat tough inclusion criteria, we had no problems filling the experiments with participants. Typically, it took less than 12 hours for an experiment to fill up.

Katia: Did you encounter any problems during your data collection? How did you solve them?

Liisa: To begin with, we had quite specific prescreening criteria for our study (such as neurological illnesses or neurodevelopmental disorders), which are not available on Prolific. Although the support team can develop them for you, it is optional for participants to respond to these prescreeners. We solved this problem by running our own prescreening study.

Another challenge we ran into: the website offers an auto-approval function, which ensures that participants with completed submissions are paid after 21 days. Because our study was 6 weeks long, we asked Prolific to turn this feature off so we could approve participants only once they had completed all sessions. Unfortunately, this did not seem to work 100% reliably, and a few participants were paid automatically after 21 days. Luckily, the Prolific support team reacted immediately and transferred the money back to our account where possible.

The longitudinal nature of our study created some issues, two of which we will point out here. First, we found it difficult to choose a ‘study duration’ that reflected the actual nature of the longitudinal study. On the one hand, individual sessions are quite short, but on the other hand, the study requires a substantial time commitment due to the multiple sessions. In the end, we set the ‘study duration’ to the total number of working hours of the entire study, in order to make the hourly payment rate clear. However, this meant the study was perceived as a single long session rather than several shorter ones, which may have caused some confusion among our participants. Second, we had a drop-out rate of approximately 17%. Although this drop-out rate wasn’t too high, we wish it had been even lower.

Katia: Let’s zoom out for a moment. Have you heard of the open science movement? If yes, what do you think about it?

Daniel: Yes, we have heard of the open science movement and our whole research group is definitely in favor of it! This is why our study was preregistered. We also plan to upload our data analyses conducted in R to the Open Science Framework.

Katia: What can we do to make your Prolific experience even better?

Liisa: It would be great if the overall technical functioning and compatibility with different browsers and devices were improved. There is also some potential for development related to longitudinal studies specifically, as we mentioned before. But overall, we are quite happy with our first Prolific experience; the platform is one of the first of its kind, and we plan to continue with similar online studies.

Come discuss this blog post with our community!