Creativity assessment commonly uses open-ended divergent thinking tasks. The typical methods for scoring these tasks (uniqueness scoring and subjective ratings) are time-intensive, however, making it impractical for researchers to include divergent thinking as an ancillary construct. The present research evaluated snapshot scoring of divergent thinking tasks, in which the full set of responses receives a single holistic rating. We compared snapshot scoring with top-two scoring, a time-intensive, detailed scoring method. A sample of college students (n = 226) completed divergent thinking tasks and measures of personality and art expertise. Top-two scoring yielded larger effect sizes, but snapshot scoring performed well overall. Snapshot scoring thus appears promising as a quick and simple approach to assessing creativity.
Like parents, ombudsmen, and city council members, researchers are used to compromise. All assessment involves trade-offs between a method's evidence for validity and its cost. Many of the best assessment tools are costly in terms of administration time, expertise, technology, personnel-hours, and infrastructure. For this reason, many constructs have a range of available tools. A person's typical mood can be assessed with week-long experience-sampling methods or with brief self-report scales. Clinical symptoms can be assessed with face-to-face clinical interviews or with brief self-report screening scales. Even within a method, researchers can usually find a range of options. Personality researchers, for example, could choose the 240-item NEO-PI-R (Costa & McCrae, 1992), the 60-item NEO-FFI (Costa & McCrae, 1992), a 20-item IPIP scale (Donnellan, Oswald, Baird, & Lucas, 2006), or even one of two 10-item scales (Gosling et al., 2003; Rammstedt & John, 2007).
The present research appraises a quick and simple method for assessing individual differences in creativity. Creativity research typically uses divergent thinking tasks to measure variation in creative abilities and potential (Kaufman et al., 2008; Plucker & Renzulli, 1999; Runco, 2007), but the traditional methods of coding and scoring these tasks are costly in terms of time and personnel. As a result, creativity is hard to include as a secondary or exploratory construct in a research project. We compare this brief, simple method, known as snapshot scoring, to a more time-consuming, detailed method, top-two scoring (Silvia et al., 2008). If the brief method performs well, it may be a useful tool for researchers interested in assessing creativity.
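To make the difference between the two scoring approaches concrete, the sketch below contrasts them in code. This is a hypothetical illustration, not the rubric from Silvia et al. (2008): the function names, the 1-5 rating scale, and the sample ratings are all assumptions. The structural point is that snapshot scoring requires one judgment per response set, whereas top-two scoring requires a judgment for every response plus the participant's selection of their two best.

```python
from statistics import mean

def snapshot_score(holistic_rating: float) -> float:
    """Snapshot scoring: a rater reads the whole response set and
    gives one holistic creativity rating; that rating is the score."""
    return holistic_rating

def top_two_score(response_ratings: list[float],
                  top_two: tuple[int, int]) -> float:
    """Top-two scoring: raters rate every response individually, and
    the score is the mean rating of the two responses the participant
    designated as their best (indices are 0-based here)."""
    i, j = top_two
    return mean([response_ratings[i], response_ratings[j]])

# Hypothetical data: one participant's five responses to a divergent
# thinking task, each rated 1-5 by a judge (illustrative only).
ratings = [2.0, 4.0, 1.0, 3.0, 5.0]
chosen = (1, 4)  # the participant flagged responses 2 and 5 as best

print(snapshot_score(3.5))             # one judgment for the whole set -> 3.5
print(top_two_score(ratings, chosen))  # mean of the two chosen ratings -> 4.5
```

The practical trade-off follows directly from the sketch: with k responses per task, snapshot scoring asks raters for one judgment where top-two scoring asks for k, which is why the latter is so much more time-intensive.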