On August 25, Jennifer Tackett, a psychology professor at Northwestern University, got an odd email in her inbox. It was from PubPeer, an online forum where people share and discuss scientific articles. And it made her a little anxious.
“PubPeer is typically used to point out errors in papers, and no scientist wants to find errors in their published work,” Tackett tells me in an email.
When she clicked through to the PubPeer comment, she found a message from a program called “Statcheck,” writing to inform her that it had been double-checking the math in her paper.
Tackett wasn’t the only person to get such an email. Statcheck had scanned 50,000 papers and posted the results as PubPeer comments. And while Tackett’s paper checked out, other researchers were in for an unpleasant surprise.
Statcheck is a simple program built to catch simple errors. But it provokes bigger questions about how the field of psychology should go about correcting errors in past work. Because psychologists are starting to find a great many.
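The basic idea behind a check like this is straightforward: a paper reports a test statistic, its degrees of freedom, and a p-value, and the p-value can be recomputed from the first two. Here is a rough sketch in Python of what that looks like. Statcheck itself is an R package, and this is not its code; the example sentence and the tolerance below are invented for illustration.

```python
# Illustrative sketch only, not Statcheck's actual code. The idea: parse an
# APA-style result such as "t(28) = 2.20, p = .036", recompute the p-value
# from the test statistic and degrees of freedom, and flag any mismatch.
import re
from scipy import stats

def check_t_result(text: str, tolerance: float = 0.005) -> str:
    # Pull out degrees of freedom, t-value, and the reported p-value.
    match = re.search(
        r"t\((\d+)\)\s*=\s*(-?\d+(?:\.\d+)?),\s*p\s*=\s*(\.?\d+(?:\.\d+)?)", text
    )
    if match is None:
        return "no t-test result found"
    df, t_value, reported_p = int(match[1]), float(match[2]), float(match[3])
    # Two-tailed p-value implied by the reported statistic.
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    verdict = "consistent" if abs(recomputed_p - reported_p) <= tolerance else "inconsistent"
    return f"reported p = {reported_p:.3f}, recomputed p = {recomputed_p:.3f} -> {verdict}"

# Hypothetical sentence from a paper's results section.
print(check_t_result("The effect was significant, t(28) = 2.20, p = .036."))
```

If the recomputed p-value doesn’t line up with the one the authors reported, the result gets flagged for a human to look at.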
Why scientists need robots to check their math
In science, even […]