Over the past couple of weeks, I’ve been sharing my experience of working with ImaginationLancaster to develop interview formats and associated tools, devised with the aim of better understanding the creative process of museum educators. The first interview focussed on the individual and their personal approach to creativity; the second interview, which I’ll explore in more detail here, was all about collaboration and group dynamics, and how we nurture ideas from that initial spark to a fully-fledged programme.
No museum educator is an island – not even the ones working as solo learning specialists in their organisations. All new ideas must get bounced around like pinballs, ricocheting off colleagues, funders, partners, artists, and stakeholders, before being launched into the world. This process can go two ways – for better or worse. Some ideas end up subject to ‘mission drift’ by trying to be too many things to too many people. The result? All a bit soggy and no real impact. However, when it goes well, the opposite occurs. Ideas can gain depth and richness through the input of other people’s perspectives. My best ideas have all started in pale sepia tones and only transitioned to full-blown technicolour after much discussion with many others.
In many ways, interview two was easier to conduct than interview one. In the first interview, museum educators were being asked to reflect on and articulate a process that is very personal and isn’t usually voiced. However, for the second interview, the same individuals were being asked to talk about their programmes and how they worked with others. These kinds of questions are often asked of museum educators, whether for marketing, reporting or practice-sharing purposes, so the responses were, generally speaking, more readily available.
To start, I asked interviewees to bring to mind four examples of programming that they had a key role in shaping. I requested two examples of programmes that they felt to be ‘tried and tested’ (well established, popular and working well) and two examples that they considered ‘innovative’ (reflecting new approaches). One of the aims of my Churchill Fellowship was to identify innovations in learning programming, and I was curious to know how practitioners recognised this in their own work. In order to identify what made their innovative programmes distinctive or special, it was very useful to have the counterpoint of the ‘tried and tested’. Interestingly, some of the ‘tried and tested’ examples had started life as innovative programmes and, over time, had become a reliable staple as the museum educators gained confidence in, and familiarity with, how to deliver the programmes successfully. At this stage in the interview, I just wanted a nuts-and-bolts summary of each example, and asked interviewees not to offer any analysis of the creative aspects, because that bit was going to come later.
Next, I introduced the Project Mapping tool. This tool was a modified version of something that came from a previous piece of work with ImaginationLancaster. In 2015, I commissioned them to run a half-day workshop with my then team, Schools, Families and Young People (SFYP) in the Learning Department at the V&A. During our discussions around working practices and how we developed new ideas and programmes, one of the workshop facilitators observed two recurring dynamics – high vs. low risk, and reactive vs. proactive. Inspired by this, we marked out a matrix on a large sheet of paper, using these dynamics as X and Y axes, and then positioned our programmes in the relevant quadrants. It turned out that we perceived our work as being higher risk and more proactive than I had realised, which was a reassuring discovery.
When I was working with ImaginationLancaster the following year, we felt there was potential to use the matrix format again. I was very keen to keep risk on one of the axes, because it comes up so often when talking about innovation, creativity and change. I also wanted to have one consistent measure so that I could collect several different responses to risk in relation to programming. We kept the X axis blank to see what other dynamics would come up. We also wanted the matrix to have a degree of personalisation, so that the mapping would better reflect the circumstances of that particular organisation.
When the Project Mapping Tool was introduced, the first task was to define what high and low risk meant to the interviewee, to make sure we were both talking about the same thing, and then to label the second axis. Once that was done, we numbered their four examples (1 & 2 = tried and tested, 3 & 4 = innovative) and then each one was positioned on the matrix. It was at this point that we reflected on the creative process and how it related to the chosen coordinates of each example. To read more about my findings from this part of the interview, please refer to my earlier post, Risky Business (12 Dec 2016).
For the final part of the interview, I asked interviewees to select one each of their ‘tried and tested’ and ‘innovative’ examples, and then talk me through how they were developed: What was the original idea? What steps did they go through? Who did they work with? How did they work together? What was the result? For this, we used an existing ImaginationLancaster tool, hexagons, that had been developed as part of a previous project. As the name suggests, hexagons are hexagonal (natch) discs that have a simple linking mechanism around the edges so you can build up an interconnecting map of thoughts and ideas. They were created as an alternative to the ubiquitous post-it notes, and are useful when exploring topics that have an element of progression or development over time. We used colour to differentiate each example – blue hexagons for the ‘tried and tested’; and red for the ‘innovative’.
This task generated an interesting range of responses. Two of the seven interviewees, after mapping their ‘innovative’ example, declined to map their ‘tried and tested’, because they felt there wouldn’t be much difference between the two. For others, the process for each was very different. When this was the case, I asked interviewees to arrange their hexagons in a way that would reflect this (ie – not just a long snake of one hexagon after the other, but a more complex, interlocking arrangement). A particular characteristic of the ‘innovative’ examples was the necessity to keep looping back to earlier stages in the process. As the new programme was finding its form: some components were presumed resolved, went pear-shaped, and then had to be returned to and reconsidered; some components were extended and further fleshed out, whereas others were dropped; and some components were revisited as the result of input from others or externally-imposed considerations. It seemed to be a process that required closer attention, and more frequent, complex decision-making than when interviewees were working on ‘tried and tested’ programmes. I should note that not all examples were originated by the interviewee; some had inherited programmes and couldn’t talk about the genesis of the idea. I was also warned – and rightly so – that memory is highly unreliable. Was I getting an accurate record of how these programmes were created, or a tidied-up, selective, oft-repeated narrative version of events? Ideally, I would have preferred it to be the former, but I think it’s more realistic to assume I collected a fair bit of the latter too.
With the example(s) mapped out in front of us, I then asked the interviewee to annotate, using green hexagons to show moments of surprise or insights, and ‘node’ hexagons (with a white dot in the middle) to show trigger moments where something shifted – ie. innovation. The answers this task generated could be loosely categorised as individual or interpersonal. For example, an individual surprise/insight was when the interviewee noticed a greater self-awareness around sensing boundaries and then pushing against them. In comparison, interpersonal surprises/insights came about through working with audience focus groups, brainstorming with the project team, and – shock horror – having a huge amount of fun and laughing like drains when testing beta versions of new programming with colleagues.
When conducting the interviews, I was always trying to delve deeper into the interviewees’ responses. Like that irritating child who keeps asking ‘why?’, I didn’t want to accept face-value answers and I would have missed a lot of great material if I hadn’t pushed for more. This rather dogged approach came from a place of genuine interest and curiosity. Having said that, I was also very aware that the interviewees were demonstrating an enormous amount of trust and generosity by allowing me to poke around in their heads and careers. It was important to me to be respectful of their answers, both in the moment, and how I later shared my findings through reports and blogposts.
I had always assumed that the sort of people who are drawn to museum and gallery education are also inherently interested in other people – a Venn diagram to illustrate this would be like an eclipse, with one circle perfectly overlapping the one behind it. So much of what I learned over the course of my Churchill Fellowship confirms this assumption. It may seem self-indulgent, but I strongly believe that there is a huge amount to be gained from turning that interest in others back onto ourselves. These interview formats are one way of exploring our own creative processes, but I’m sure there are also plenty of others that we could be putting to good use.