Evaluating the Evaluators: Investigating Museum Survey and Research Practices

Brianna Casas, Katherine Duval, Nicolas Lord — Master of Science, Museums and Digital Culture Program, Pratt Institute

Introduction

In the spring of 2024, students in the graduate class on Audience Research and Evaluation at Pratt Institute’s School of Information in New York City conducted a survey to learn more about how museums understand their audiences and evaluate their experiences both onsite and online.

The question we wanted to answer was “What can we learn about how museums value and implement audience research and evaluation?” The following report synthesizes the responses we received and explores what those responses tell us about how research is integrated into museum operations, influences strategic decisions, and impacts visitor experiences. 

Methodology

During the first half of the semester, our team of five, in collaboration with Visiting Assistant Professor Allegra Burnette, focused on brainstorming and developing survey questions. Our aim was to better understand how museum professionals manage their survey and evaluation practices within their respective institutions. These brainstorming sessions resulted in a 42-question survey, which focused on the following key areas: museum teams and departments; the frequency of evaluations; evaluation goals; the relationship of evaluations to the institutional mission, vision, and values statements and DEIA initiatives; staff and stakeholder buy-in and attitudes; how findings are reported; and measurement of evaluation success. 

The survey was administered using the online service Jotform and ran from April 9 to May 3, 2024. The survey was promoted through the AAM (American Alliance of Museums) listserv, MCN (Museum Computer Network) Slack channel, LinkedIn posts, and individual contacts. We received responses from 54 institutions. These were primarily US-based organizations, with Canada, the UK, Europe, Australia, and the Middle East also represented. Of the museums that responded, more than half had a full-time staff of fewer than 50 people (not including security guards and volunteers); museums with 500 or more staff had the next largest number of respondents.

Who is doing this evaluative work? 

We found that most museums have internal staff conducting all of their audience research, but some use a mix of external consultants and internal staff, or draw on volunteers and graduate students. Only 11% completely outsource this work.

For those doing research in-house, teams are generally staffed by 1–3 people (74%), with only 17% of respondents reporting that audience research is the primary responsibility of these team members. The rest either have a mix of responsibilities, working on other kinds of projects or programs, or split the team between people dedicated to evaluative work and others with different responsibilities, suggesting that these staff likely work across museum areas.

Respondents who use consultants to conduct audience research and evaluation most often cited time constraints or a desire to introduce fresh perspectives into the research. Others (16%) believe that senior leadership is more likely to pay attention to the outcome if it comes from an external source. Only 14% cited a lack of in-house skills as a primary reason for using consultants.

Internally, the person or people responsible for conducting audience research are situated most often within the Education departments of their institutions, followed by the Visitor Experience and Marketing departments. Twelve percent are in a group or department called Audience Research and Evaluation (or similar), and 4% are in a digital-focused team (Digital Media or Digital Products).

What is the purpose of this work? 

Audience research and evaluation appears to focus most often on overall experience and satisfaction, with specific exhibits typically studied within the context of the overall visit. Other measures of success include actionable insights and feedback that lead to tangible improvements, as well as increased awareness, visitation, membership, donations, and retail sales.

Survey participants reported that their research into visitors’ digital experiences focuses mostly on the website, specifically how it functions for visit planning, educational experiences, calendar and event information, ticket or membership sales, and overall user design.

When asked whether their research and evaluation work is driven by the institution’s mission, vision, and strategy, 50% of respondents said it is somewhat driven by those factors, and 41% said it is strongly driven by the overall mission of the institution. No one said it is not driven by those factors at all. However, when asked whether audience research is a high priority for their organization, only a quarter of respondents perceived that it is, and half of the survey participants reported struggling to acquire the resources needed to conduct their research.

These findings suggest that institutions are interested in conducting audience research and feel that the mission of their organizations guides this research, but museum teams often have difficulty obtaining the necessary resources for this work when there are other priorities.

What type of audience research are museums doing? 

We asked framing questions about the type of research and evaluation work conducted by respondents’ institutions. Our results show a fairly even split between audience evaluation and visitor research, with a smaller number conducting usability testing for specific products or services. In our survey, we offered a selection of methodologies, including in-person and online surveys, in-person and remote interviews, observational studies, focus groups, analytics reviews, heat maps, A/B testing, and usability testing. Online surveys were the most common methodology, followed by in-person surveys, interviews, and observational studies. Notably, every methodology listed was used in some capacity, though some, particularly specialist digital-product methodologies, were selected less often.

After establishing the types of evaluation work the institutions carry out, we asked questions about frequency. We found that 43% of museum respondents conduct evaluation work continuously throughout the year. When asked if their evaluation work coincides with exhibition schedules, 39% of respondents answered no, while 35% said it does sometimes but not always. 

Next, we sought to understand stakeholders’ role in evaluation planning and frequency. Responses were fairly evenly split between evaluations that are sometimes driven by stakeholders (46%) and those that are not (41%). Of the respondents whose evaluations are sometimes driven by stakeholder requests (such as a specific question the museum director is seeking to answer), a small percentage acknowledged that these requests disrupted their regularly scheduled evaluation work, with an even split between evaluations taking more time and taking less time when influenced by stakeholders.

The top reasons evaluation projects were deemed unsuccessful were too few participants and difficulty reaching the target audience. Other reasons included not having clearly defined goals (14%), having resource issues (12%), or being unable to act on their findings (11%). Technical issues with the evaluation tools were experienced by 7% of respondents. Fourteen percent reported experiencing none of these issues.

When asked about the frequency of the evaluation work over the last three to five years, 44% of respondents mentioned conducting more evaluations; 24% said the frequency has not changed. Only 4% have decreased the amount of evaluation work they are doing, indicating that institutions continue to prioritize and value this work.

How are results shared? 

When asked how research findings are reported, most survey participants noted that they report findings internally only, most commonly by sharing them with the museum director and senior staff, stakeholders relevant to the research, the museum board, or the broader organization at an all-staff meeting or via an internal newsletter, message board, or report.

Fewer than 25% of survey respondents said they maintain a consistent schedule for reporting findings. Even so, a third of institutions reported providing internal research reporting annually, and 25% said they provide it monthly.

When survey participants report their research findings publicly, they employ a variety of reporting mechanisms but use industry conferences and journals most often.

In addition to reporting research findings, most survey respondents said they collaborate and discuss research best practices with institutions of a similar scale or mission. Thirty-three percent of participants maintain close relationships as part of their general practice, and 20% make these connections at conferences.

How does this work fit into organizational goals and aspirations? 

Our survey highlights the close alignment between a museum’s evaluation efforts and its core values, as well as a strong commitment to Diversity, Equity, Inclusion, and Accessibility (DEIA) initiatives. Responses revealed that a significant majority of museums (91%) align their evaluation efforts with their mission, vision, and values statements. This finding underscores the integral role evaluations play in measuring program effectiveness and reflecting core institutional values. Additionally, 63% of participants reported having dedicated DEIA staff or policies, and a further 9% reported actively working toward them, illustrating a strong commitment to these areas with room for further growth.

Staff training within these institutions appears to focus on three key areas: accessibility (29%); diversity, equity, and inclusion (27%); and community engagement (24%). This reflects staff intentions of working toward an accessible, inclusive, and community-oriented environment, which in theory would guide evaluation strategies in the same direction. Despite these intentions, a notable gap exists between the goals of staff training and the application of that training in survey practice: 38% of responding museums reported that they do not evaluate how accessible their surveys are for diverse audiences, including users with disabilities. This discrepancy reveals a crucial area for development, highlighting the need to bridge the gap between education and practice to increase inclusivity and ensure that all community voices are heard during the evaluation process.

What does this survey reveal about trends and areas of future research? 

The findings of this survey suggest that museums and cultural institutions are aware of the potential value of audience research, but many organizations are at a nascent stage of developing research programs. Institutions often lack the time and resources for robust audience research, and employees on research teams likely juggle multiple responsibilities. Still, respondents generally feel that their organizations’ missions support this work, and they are interested in continuing to develop their practices. Although this survey is only a snapshot of current trends in the field, it indicates the need for further research into how institutions navigate audience understanding through research and evaluation. Supplementing the online survey with interviews with individual museums about their practices would provide more detail, anecdotal evidence, and qualitative data.

What have we learned from this process? 

As student researchers conducting this survey, we found the process illuminating in several areas. In future research on this topic, we would be more strategic in selecting survey participants based on the location and size of the institution, annual visitation, and total employees to make more direct comparisons between similar institutions. We were grateful for the responses to and feedback on the survey, and we are excited to see how audience research trends and collaboration between organizations continue to develop.

Audience research and evaluation is an important tool for the overall functioning of a museum, enabling it to create programming that resonates with its audiences. Institutions recognize this and are engaging with the work, but they face time and resource constraints. Trends in reporting, collaboration, and the blending of internal researchers and consultants suggest that organizations are seeking innovative ways to understand and connect with their audiences.

We would like to thank the organizations that participated in the survey. In addition to the authors, the survey team included fellow students Tereza Chanaki and Sohyun Park and Visiting Assistant Professor Allegra Burnette. The class was co-taught by Visiting Assistant Professor Jamie Lawyer.
