Few things make my museum-geek heart flutter faster than a new edition of Trendswatch – not least because it only appears once a year and is always worth the wait. Produced by the Center for the Future of Museums, Trendswatch is a free, downloadable report that identifies five key trends for the coming year and explores how each relates to museum practice. This year, for the first time, there is an accompanying digital resource, with all sorts of shiny additional content. Perhaps unsurprisingly, three of the five trends in 2017 reflect the grim state of contemporary society and the impact of global political forces: declining empathy rates; the need to reform the justice system and its relationship with civil rights; and the vast scale of mass migration, whether of migrants, refugees or the forcibly displaced. More optimistically, I was happy to see design thinking having a moment as one of the five trends, with the report acknowledging the benefits of learning through failure, and of using prototyping and iteration to develop programmes. But I’ve saved my favourite for last – ‘The Rise of the Intelligent Machine’ is a fascinating insight into the growth of AI (artificial intelligence) and how it relates to creativity.
Mentally, I still file AI somewhere alongside Back to the Future hoverboards and that Star Trek ‘beam me up, Scotty’ transporter – it makes for great telly, but it isn’t really something that will touch my life directly. However, unless a London bus gets me first, it seems highly likely that AI and increasingly adaptable algorithms will become a daily reality. Having said that, for all those early adopters out there – who chat away merrily with Siri on their phones and Alexa in their houses – the revolution has already arrived. Whereas I don’t even own a toaster.
A recent NESTA report, Creativity vs. Robots: The Creative Economy and the Future of Employment, predicts that 47% of US jobs that existed in 2010 are at high risk of computerisation. It also reaches the reassuring conclusion that “creativity is inversely related to computerisability”, so we’re not facing redundancy quite yet. In identifying what makes a job creative, the report lists an interesting range of skills and requirements: social intelligence; the ability to tackle highly interpretive tasks; the ability to generate new ideas of value; and participation in a collective, collaborative effort to make something. You would think that the complexity of these interrelated dynamics would leave plenty of clear water between humans and computers when it comes to creativity, but that margin is narrowing – sort of.
The Trendswatch article includes images from the Next Rembrandt project – a canny piece of marketing that teamed up advertising agency J. Walter Thompson with ING Bank and Microsoft to create a computer-generated, 3D-printed ‘original’ work in the style of a Rembrandt painting. If I had been in charge, it would have been called Pretendbrandt, which is just one of many reasons why I was not in charge. There is a short film on the project website that tells the story of its creation (the film also includes profound insights from the sponsors, such as, “you could say we use technology and data in the way that Rembrandt used his paint and his brushes to create something new”). The statistics for this project are incredible – 346 Rembrandt paintings analysed, over 500 hours of rendering, and a final image made up of a staggering 148 million pixels. It’s 3D because the surface has been built up in layers, creating a height map that apes the rough, textured surface of oil paint. A combination of deep learning algorithms (no idea) and facial recognition techniques (I can guess) was used to create a very passable image that does what any good portrait should do – stare back at the viewer.
I’m equal parts impressed and unnerved by the Next Rembrandt. It’s a well-executed idea and makes me wonder what a whole museum of pretend paintings would look like – an uncanny valley where art historians would go for the equivalent of cheap carnival thrills perhaps? It also raises a lot of questions – Is there artistry in it or am I marvelling at a gimmick? Am I having the simulation of an experience by looking at the simulacrum of a painting? Just because we can, does that mean we should? In five years’ time, will we all be cranking out our own cut-price masterpieces, and what would that do to our understanding of the originals? For all the whizzy technology that has made this possible, I do take comfort from the thought that the most creative part of the whole process was when somebody came up with the idea to do it in the first place. An algorithm didn’t suggest the Next Rembrandt or set the parameters – people did.
Along similar lines, Sony CSL has been investing in AI and music. Flow Machines is ‘a system that learns musical styles from a huge database of songs’. It can take a piece of music, such as Beethoven’s Ode to Joy, and re-present it in different styles, such as bossa nova or house music. It has also been used to create new songs, albeit with a spot of human help with the arrangements and lyrics. ‘Mr Shadow’ and ‘Daddy’s Car’ are two examples of what Flow Machines is capable of, and both feature on upcoming AI albums.
Another route that AI has taken into music is robotics. Shimon is a four-armed, marimba-playing robot that can improvise and respond in real time to human bandmates. The YouTube clip of Shimon in action is amazing – ‘he’ bops his head along to the beat and develops a wonderfully strange conversation as the improvisation progresses. At the same event (Moogfest 2016), a drummer named Jason Barnes, whose lower right arm has been amputated, showed what his two-drumstick-wielding robotic arm was capable of – namely, drumming at up to 20 beats a second, controlled by the muscles in Barnes’ bicep. This is technology I can get on board with – it combines the best of both worlds to make something that wouldn’t be possible if either humans or robots were left to their own devices. Unlike with the Next Rembrandt, I feel like I’m having a more authentic experience listening to Shimon noodling away, but I couldn’t tell you why.
It will be interesting to see how artistic production is influenced by AI over the coming years. In the mid-20th century, minimalists took inspiration from the processes and materials of industrial manufacturing to kick against the gestural, expressive artforms that dominated at the time – I don’t see why AI can’t provide similar grist to the mill and up-end current practices and understandings of creativity and authenticity. And then, of course, there are the consequences for museums – how we collect, display and interpret such work. The use of AI to support personalised learning has fantastic potential for museum education. Trendswatch quotes the macho-sounding ‘Educational Dominance’ program as an example of this technology in action – run by the Defense Advanced Research Projects Agency, its AI-powered ‘digital tutors’ speed up the training of Navy recruits. What if all museum interpretation could be personalised? Imagine entirely bespoke labels and text panels, generated via a portable device for each visitor. The content could be tailored to an individual’s level of experience and specific interests; tours could be created on the spot, built entirely around a visitor’s request; and over multiple visits, visitors could be nudged towards new routes, optimising exposure to objects they had previously missed, or that had recently been put on display. We’ll just have to wait and see.
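For the technically curious: stripped of the AI, the bespoke-label idea above boils down to selecting and assembling content against a visitor profile. Here is a toy sketch of that logic – every object ID, profile field and label text is invented for illustration, and a real system would generate rather than look up the prose:

```python
# Hypothetical sketch of a personalised museum label: pick the variant
# matching the visitor's experience level, then add lines for their
# declared interests. All data below is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class VisitorProfile:
    experience: str = "beginner"            # "beginner" or "expert"
    interests: set = field(default_factory=set)

LABELS = {
    "self-portrait-1659": {
        "beginner": "Rembrandt painted dozens of self-portraits across his life.",
        "expert": "Note the loose, late-period brushwork and impasto highlights.",
    },
}

EXTRAS = {
    "technique": "The textured surface was built up in layers of oil paint.",
}

def generate_label(object_id: str, profile: VisitorProfile) -> str:
    """Assemble a label tailored to one visitor's profile."""
    parts = [LABELS[object_id][profile.experience]]
    for interest in sorted(profile.interests):
        if interest in EXTRAS:
            parts.append(EXTRAS[interest])
    return " ".join(parts)

visitor = VisitorProfile(experience="expert", interests={"technique"})
print(generate_label("self-portrait-1659", visitor))
```

The interesting design question isn’t the lookup, of course – it’s where the tailored text comes from, and that is exactly the part AI would be asked to supply.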