Over the past year, members of Transparent Statistics in HCI have worked with people across various disciplines of the HCI community and with conference chairs (full contributor list below) to update the CHI guides for a successful submission and for reviewing. These new instructions aim to increase the transparency and openness of several facets of research, including:

- decisions made
- materials created or used
- data collected

Making these facets of research accessible can enable reviewers and readers to better assess HCI research, make it easier for industry to build upon and apply HCI research, and help HCI researchers use past research more efficiently to make new advances.
The current draft of the transparent statistics guidelines includes an FAQ and an exemplar on effect sizes. During a meeting we ran at CHI to collect feedback, one participant strongly objected to our use of the term “effect size” in the guidelines. This prompted me to investigate further, to make sure we hadn’t missed anything in choosing our terminology. Here is what I found: we should report effect sizes. But what are effect sizes?
This post marks the transition of the Transparent Statistics in HCI website to blogdown. This is an exciting transition for several reasons:

- The website is no longer backed by hacked-together scripts I wrote.
- Others can now contribute to the website via GitHub.
- Most excitingly, we can begin blogging about transparent statistics in HCI!

On that last point, we will be soliciting posts from members of the community on topics related to transparent statistical communication in human-computer interaction.