I created a draft submission for internal review board approval of our plan to hand out some questionnaires at the DUG meeting. Hopefully the findings will be published, so it’s important to go about things properly.
Specifically, I’ll be showing alternate video styles (more or fewer effects, such as animations and zooming) and asking for the audience’s preferences. The question becomes: if I’m showing 2 or 3 videos and asking the audience to tell me which style of video they prefer, do I vary the content from video to video? The biologist in me says no, wanting to reduce variables. But then, the audience could say they learned better from the second and third styles, rather than the first, just because that’s the second and third time they’ve seen the information. Or they could prefer the first video simply because, by the later viewings, the information is no longer novel and therefore no longer interesting.
And so, I created a one-question poll that WordPress won’t let me embed. Please click this finely-crafted link to vote in the poll, or post your thoughts below as a comment. Thank you!
I also worked on scripts for the tutorial videos for ONEMercury. It’s funny how carefully you examine details and how deeply you learn something when charged with teaching others. I’ve had several questions about the exact function of some of the tool’s options; these have been passed on to the ONEMercury team.
My goal of very short videos is going to be a challenge – even after editing my scripts for length, some of them have a recorded speaking time of just over a minute. I’d like to keep content to less than a minute, so more cutting is in order. I love to be precise and to provide lots of information, so this is very good practice at being concise!
By next week I’ll have full video examples of the styles I’d like to test at the DUG.
Hi Heather,
It should be interesting to get the chance to see what people think of some of the screencasts you’ve been working on!
It seems like it will be difficult to get the type of data you want unless you have the chance to do the presentation at least twice. To vary the three independent variables you mention here (style, i.e. more or less dynamic; content; and order), you will need more than two groups (ideally, something like a booth where you can assign participants individually to different conditions). If you’d like to discuss ideas related to how to set this up as a research study, please get in touch with me and I’d be happy to chat.
One other thing I wanted to mention is that you might want to consider asking people something other than which style they liked or thought they learned most from. There are a number of studies looking at animated learning materials or data visualizations that find that what people prefer is not always what helps them perform best on objective tasks. Rather than (or in addition to) asking people which style they prefer, you might want to ask a question that requires them to use/remember the information presented during the parts of the screencast that were different in the two styles. Response time is a good thing to measure, too, if you’re doing the survey under conditions (e.g., on a computer as part of a booth) where you can track that.
Happy to chat more if you’re interested.
Thanks for that extensive feedback! I’ve sent you an email directly, but I will address the second point here: the participants will probably have varying experience with the tool (in this case, ONEMercury); some may even be experts with it. We contemplated creating videos for a completely unrelated tool and then including a test question of sorts, but ultimately decided it might be more meaningful to have feedback on the sort of video I’m actually producing. Still, it’s something to think about, especially response time.