By Angela Watson

Researchers often find as many questions as answers in their work; hence the common refrain "more research is needed" in the final paragraph of so many studies. But it has occurred to me that the problem with research into the effects of the arts is not merely a shortage of research. It is a shortage of quality research.

Americans are prone to believe that with most things, more is better, but when it comes to research, that is not always, or even usually, true. In my recent tours of arts institutions across the country and in meetings with stakeholders, I have become increasingly aware of the proliferation of meaningless metrics in the art world.

I have been told, with wide-eyed surety, by arts institution representatives of the validity of so-and-so's large-scale survey results, all based on self-reports with a who-knows-what kind of response rate (which can grossly bias results). While I generally believe that more information is better, poorly designed research can cause more problems than it solves because it adds shoddy information to an arena with little information to begin with. Often those who "design" these studies do not really know how to do it properly. Further, they may be incentivized, through mission or money, to find a particular result. This can lead to dishonest outcomes or, probably more often, well-meaning but poorly designed studies that mislead unsuspecting stakeholders.

As a basis for my concerns, I will relay an experience I had on a recent arts institution visit. I was talking with a stakeholder about research and about funders who often want XYZ intervention measured as a way to account for the money spent. She showed me a survey that was being given to young students after an arts-related intervention. There were 18 questions covering 7 different constructs, or ideas. When I raised my eyebrows, she smiled and said, "I know, but the funder wanted all of these measured, so that is what we had to do." It is not possible to measure 7 different constructs effectively with 18 questions; that is fewer than three questions per construct, far too few to capture any one of them reliably. If we believe whatever answer this type of research spits out, then we are foolish. This is a perfect example of the meaningless metrics of which I speak.

Further, in speaking with a private research consultant about this issue, she said that a lot of this type of research is a racket. The funders want XYZ program measured. The stakeholder says okay, throws together a hasty, poorly designed study, implements it, and reports whatever findings come out back to the funder, all the while knowing that none of it is worth the paper it is written on.

At another recent arts event, I was speaking with an arts institution stakeholder about a program they were implementing. I suggested it should be studied. She quipped back, "Not everything has to be measured." As a researcher, I was taken a little aback. "Of course everything that matters should be measured," I thought to myself. But, upon reflection, she was right. If we can't measure it well, if we don't have the time or the money to design a study with a modicum of care and integrity, then we shouldn't even try to measure it.

In our rush to study and our desire to believe, we must not sully the waters with a murk of trashy research. Some of these arts interventions are difficult to measure, and more research IS needed. But that research needs to be high quality; otherwise, all concerned would do better to save their time and money for something that has a chance of helping. And don't be fooled into thinking that meaningless metrics don't matter. They do. People make decisions, million-dollar decisions that affect real people, based on these numbers.


Angela Watson is a Distinguished Doctoral Fellow and Research Assistant in the Department of Education Reform at the University of Arkansas.