Fresh ideas from museums around the globe in your inbox each week
According to Livdeo, a French tech company that provides inclusive digital solutions for cultural institutions, digital collections should be accessible to everyone, regardless of culture, language or disability. Interaction with digital collections through image recognition and multilingual, accessible content should be the norm.
Digital collections, they say, can bring significant efficiencies and opportunities to museums’ efforts to engage with their audiences.
Here MuseumNext talks to Livdeo CEO Ciprian Melian, who will give a presentation at our Digital Collections Summit today (Tuesday 5 October).
At Livdeo we are committed to providing technology that supports the creation and distribution of inclusive visit scenarios to visitors’ devices, reducing the constraints visitors may face during their experience.
When we consider the visitor experience, we can identify several levels of constraint that our GEED technology tries to address.
If we look at the vast majority of visit companions available in museums today, we observe two categories: physical, on-loan audio or multimedia guides, and visit apps on visitors’ own devices.
The pandemic drastically changed visitors’ interest in on-loan audio guides, for good reason. Moreover, the maintenance of those devices brought new constraints from a safety perspective.
The example of the Waterloo Memorial in Belgium is an interesting case study. The Memorial faced several limitations with its 300+ on-loan audio guides: cumbersome content-update procedures for new-language audio material and problems with audio synchronisation over infrared.
On the other hand, visit apps became more and more available in cultural institutions as downloadable tools for visitors’ devices.
Although they provide a great user experience, they introduced new constraints for institutions and visitors alike.
These constraints were the starting point of Livdeo’s work, back in 2015, on GEED Box hardware on the one hand and on visit apps delivered as web apps on the other.
We treated these new constraints as “technological impairments” and addressed them by providing institutions with on-site, distributed GEED hardware that supports local download of the visit web apps and their multimedia material, with no limits on data volume or simultaneous users.
Since visitors are all different, we wanted to provide multiple lenses on the visitor experience through GEED visit apps. Our solution provides the frameworks to create, in full autonomy, various levels of storytelling for visit scenarios: sign language, easy-to-understand audio description and standard terminology. Organisations can create several layers of storytelling for different target groups: adults, families, children and experts.
GEED CMS provides institutions with various automation tools for creating multilingual narratives, with automated/assisted translation into 24+ languages and audio storytelling generated with multilingual natural voices, with or without background music. (GEED translation and natural voice generation example.)
“Different lenses for different audiences”
An engaging experience is one that fits the visitor’s preferences, language, comprehension level or impairment, provides natural interaction with the exhibition and integrates rewarding mechanics for visitors.
Natural interaction
Camera recognition on the visit web app for thousands of artworks and objects.
When we analyse on-site visit sessions, we observe different exploration behaviours: some visitor groups prefer following guided tours, others prefer exploring the exhibited objects in free navigation, while still others prefer mixed tours, moving back and forth between guided and free exploration.
The visitor app needs to accommodate all these behaviours and support each visitor’s interaction preferences. Camera-based artwork recognition is one of the natural ways to engage with nearby objects. To support a consistent user experience, it is therefore important to let visitors scan any artwork in the exhibition, not only a selection available on guided tours.
GEED CMS provides a turn-key solution for image-recognition training, starting with the image representations of the exhibition’s artworks. The visit app uses the trained data to let visitors scan and recognise any single artwork or object in the exhibition.
The camera-based natural “trigger” opens new opportunities for the user experience during the visit, whether for accessing multilingual, audio-enabled wall labels or for deploying game mechanics that ask young visitors to find artworks and objects in the exhibition. (Camera recognition in action.)
We can easily see the value, at the collection level, of providing AI recognition training data for all object images, thereby simplifying and accelerating the collection-to-visit-app preparation process.
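The idea behind camera-based artwork recognition can be sketched in a few lines: reduce every reference image in the collection to a compact fingerprint, then match a camera frame to the nearest fingerprint. This is a toy illustration only; production systems like GEED use far more robust learned features, and the average-hash approach and all names below are our own illustrative assumptions.

```python
# Toy sketch of camera-based artwork recognition: fingerprint each
# reference image, then match a camera frame to the closest one.
# (Illustrative only; not the actual GEED recognition pipeline.)

def average_hash(pixels):
    """Bit fingerprint: 1 where a pixel value is above the image mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

def train(catalogue):
    """Map each artwork id to the fingerprint of its reference image."""
    return {art_id: average_hash(px) for art_id, px in catalogue.items()}

def recognise(frame_pixels, index):
    """Return the artwork whose fingerprint is closest to the frame."""
    fp = average_hash(frame_pixels)
    return min(index, key=lambda art_id: hamming(fp, index[art_id]))

# Tiny synthetic "images" (flat greyscale pixel lists) for two artworks.
catalogue = {
    "by-the-seashore": [200, 180, 90, 40, 220, 30, 60, 210],
    "water-lilies":    [40, 60, 200, 220, 30, 210, 190, 50],
}
index = train(catalogue)

# A slightly noisy camera frame of the first artwork still matches it.
frame = [190, 175, 95, 50, 215, 35, 65, 205]
print(recognise(frame, index))  # by-the-seashore
```

Because matching is nearest-neighbour over the whole index, the visitor can scan any object in the exhibition, not just those on a guided tour.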
We allow visitors to explore works of art in depth with multilingual audio tracks, and detailed gigapixel exploration.
Every object or artwork in the museum collection has at least one associated image representing the item.
The selection process of an artwork for visitor display, or for publication in the online collection, might include an automation step that creates video-like animations exploring every detail of the object.
The GEED Deepscover automation provides exactly this feature: it takes the object description and image as input, then generates automated in-depth animations with zoom and pan actions, aligned with the multilingual audio tracks generated from the description text.
The automation workflow involved for one collection item:
original description → GEED Translation → GEED Natural Voice generation → image analysis → in-depth image animation creation and synchronisation with audio tracks → standalone embeddable animation player URL generation
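The automation chain above can be sketched as a simple pipeline. Every function name and the URL format below are hypothetical stand-ins for the real GEED services; each placeholder marks where an actual translation, TTS or animation service would be called.

```python
# Illustrative sketch of the Deepscover chain: description in,
# embeddable animation player URL out. (Hypothetical names, not
# the real GEED API.)

def translate(description, lang):
    # Placeholder for the GEED Translation step (machine translation).
    return f"[{lang}] {description}"

def generate_voice(text, lang):
    # Placeholder for the Natural Voice step: returns the path where
    # a TTS engine would write the generated audio track.
    return f"audio/{lang}/{len(text)}.mp3"

def build_animation(image_path, audio_tracks):
    # Placeholder for image analysis plus zoom/pan animation creation,
    # exposed as a standalone embeddable player URL.
    langs = ",".join(sorted(audio_tracks))
    return f"https://player.example/embed?img={image_path}&langs={langs}"

def deepscover(description, image_path, languages):
    # One collection item in, one embeddable animation URL out.
    tracks = {lang: generate_voice(translate(description, lang), lang)
              for lang in languages}
    return build_animation(image_path, tracks)

url = deepscover("By the Seashore by Auguste Renoir.", "renoir.jpg",
                 ["fr", "es", "ja"])
print(url)  # https://player.example/embed?img=renoir.jpg&langs=es,fr,ja
```

The point of the chain is that each step consumes the previous step’s output, so one original description fans out into per-language audio tracks before the animation is assembled.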
(Examples of GEED Deepscover animations.)
What better way to engage visitors with art than to allow them to interact and learn through play? The integration of GEED’s gamification modules brings a playful dimension to the discovery of the collections. Thanks to several types of games and a system of progression and rewards, visitors can discover the museums’ works in a very engaging way.
The visit continuum integrates the visitors’ rewards, which are made available for sharing on social media from their personal online mementos.
The visitor journey brings new opportunities in terms of value-asset creation. The GEED visit apps store anonymised behavioural information for every visit session and build a unique visit memento for each visitor. This unique visit album then becomes a new asset for engaging with visitors, who become ambassadors for the museum when they choose to share their experience on social media.
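A minimal sketch of how a visit app could turn anonymised session events into a shareable memento is shown below. The event schema (`(artwork_id, action)` pairs) is our own assumption for illustration, not the actual GEED data model.

```python
# Illustrative sketch: build an anonymised visit memento from session
# events. (Assumed event schema, not the real GEED data model.)

from collections import Counter

class VisitSession:
    def __init__(self):
        # Only (artwork_id, action) pairs are kept: no personal data.
        self.events = []

    def record(self, artwork_id, action):
        self.events.append((artwork_id, action))

    def memento(self):
        # Summarise the session into a shareable visit album.
        scans = Counter(a for a, act in self.events if act == "scan")
        return {
            "artworks_explored": sorted(scans),
            "favourite": scans.most_common(1)[0][0] if scans else None,
        }

session = VisitSession()
session.record("by-the-seashore", "scan")
session.record("by-the-seashore", "audio")
session.record("water-lilies", "scan")
session.record("by-the-seashore", "scan")
print(session.memento())
```

Because only behavioural events are recorded, the memento can be generated and shared without touching any personally identifying information.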
The case study Your Feelings Welcome at the Museum of Contemporary Art Australia is a great example of a marketing campaign using the GEED platform for user engagement. During the visit continuum, visitors have the opportunity to interact with the artworks through the mca.art visitor app/platform and express their feelings through a series of emojis. They may leave comments on each artwork and participate in a contest organised by the museum.
Every visitor receives a visit-session memento with a video generated from the anonymised behavioural data captured during their visit.
Most of the time, information about collections is only available in the museum’s native language. If you do not speak that language, you simply cannot understand it. Automatic multilingual tools that provide a translation for each work in a collection are therefore crucial: a digital collection in ten languages could address the problem of multilingualism. In addition, it is extremely beneficial to use text-to-speech to create an audio track for each work. Such tools shorten the process considerably.
GEED translation engine
GEED Natural Voice engine
Online collections need to be accessible, in multiple languages, with several descriptions for different audience groups.
(GEED automatic translations/natural voice generation example.)
The current status is that online collections are available in one language, with a single level of description.
GEED is ready to augment museum collections with automated tools: multilingual layers, description transformations for different audiences, generation of visual descriptions in natural language, and image training for camera recognition (thousands of images).
The workflow for a successful collection augmentation:
“Empower the collection – multilingualism, description layers, accessible images”
“Display the collection outside the museum’s walls with Augmented Reality”
A museum’s collection of works should be presented in its entirety, not just the highlights. The raw material must be prepared to be used for each new exhibition or display, and museum teams need to be agile when it comes to creating new content.
The visitor or user can currently expect only a few interactions and engagements from digital collections. However, new tools exist that can make collections sing digitally, such as gigapixel artwork exploration linked to an audio track. The manipulation of 2D and 3D objects through AR experiences could also create a new type of collection discovery.
The content associated with the artworks in an online digital collection is not always ready to be integrated into a visitor application. This raw material must be prepared for mediation, explanation and interpretation, in order to provide the most valuable information to users, whatever their level of understanding, language or disability, and whatever the work consulted. All of this is a real concern for museums, but one which technology can alleviate.
The “empowered” collections bring new opportunities in terms of content repurposing, education, multilingual SEO, collections exploration, and displaying the collections and exhibitions outside the museum’s walls.
Cultural institutions need to open new paths to monetisation.
New sustainable revenue streams are needed beyond the physical venues by targeting a global audience.
Augmented versions of existing and past exhibitions, already curated, are real assets. They should create sustainable revenue streams.
Keywords: “Automation for inclusive content creation”
Museum collections are described, most of the time, in a single language and at one interpretation level. Curating an artwork for an exhibition therefore takes time, teams and money to create multilingual narratives, accessible layers and audio descriptions.
Empowering whole collections with multilingual content, multiple descriptions for different audiences, image-recognition training and image captioning with automated tools minimises the effort, time and resources needed to get a selection of artworks ready for public display.
This “augmentation” of the collection provides direct benefits for published online collections and targets a global audience.
That’s where Livdeo’s web-app enabled GEED solution comes in with its Automatic Translation and Text to Speech (TTS) modules.
Within the GEED back office, the Waterloo Memorial was able to create a large amount of new written content (growing from 120 to 400 items) in its mother tongue. The content is then automatically translated into the ten languages it chose, and the voice-generating system instantaneously produces audio content from the texts. Over 500,000 words were translated and turned into audio content.
The Fine Art Museum of Besançon was able to provide translated and audio content in three languages for 1,200 artworks on display.
The MCA Sydney uses GEED modules to manage exhibition content in Chinese.
All the above examples show how these institutions have remained agile and saved a lot of time and money.
The GEED text-to-speech module allows audio to be created automatically for every single artwork in a collection in a few seconds. Using a natural speech engine fully integrated into the GEED CMS, museum staff can adjust parameters such as the speed of speech. It is also possible to create dedicated content for audio description or easy-to-understand versions.
Each work of art can then have several layers of mediation, delivering content matched to each visitor’s understanding.
As part of an on-site visit, users can interact and engage with the art based on the camera’s recognition of objects. In less than a second, the tour application delivers the right content, based on the user’s preferences. Each wall label, often displayed in only one language, can be augmented by all these new digital layers of storytelling.
The collections are the real assets, but they need to provide accessible interpretation layers. Museums may start to consider AI-based models to extract, for example, visual descriptions from images or to generate easy-to-understand narratives. Here is an example of our AI model creating multilingual visual descriptions in natural language from provided images, going beyond keyword extraction.
Let’s consider the artwork “By the Seashore” (CC0), painted by Auguste Renoir, from the MET Museum online collection. The following description is provided, in English only (text courtesy of the MET Museum online collection, for demo purposes only):
“By the Seashore by Auguste Renoir. Renoir likely painted this work in his studio, posing his model in a wicker chair and relying on studies he had made on the Normandy coast to furnish the beach scene behind her. Stylistically, it reflects the impact of Renoir’s trip to Italy in 1881–82, which inspired him to unite the “grandeur and simplicity” he admired in Renaissance art with the luminosity of Impressionism. His new approach, which he called his “dry” manner, is evident in the sitter’s face, with its carefully drawn features and smooth handling of paint. The medley of quick strokes in the background, however, displays the freer technique of Renoir’s earlier years.”
We can see that this description targets an adult audience. What about a fifth-grade comprehension level of this description?
“This painting is called “By the Seashore.”
It was painted by Renoir.
He painted it in his studio, not on the beach.
Renoir used his studies he had made on the Normandy coast to paint the beach scene behind the woman.
He also used the style he learned in Italy to make this painting.
Renoir’s new style is called “dry.” It means that he painted the woman’s face carefully, but the background with quick strokes of paint.”
French version:
Ce tableau s’appelle “Au bord de la mer”.
Il a été peint par Renoir.
Il l’a peint dans son studio, pas sur la plage.
Renoir a utilisé les études qu’il avait faites sur la côte normande pour peindre la scène de la plage derrière la femme.
Il a également utilisé le style qu’il a appris en Italie pour faire cette peinture.
Le nouveau style de Renoir est appelé “sec”. Cela signifie qu’il a peint le visage de la femme avec soin, mais l’arrière-plan avec des coups de pinceau rapides.
Spanish version:
Este cuadro se llama “A la orilla del mar”.
Fue pintado por Renoir.
Lo pintó en su estudio, no en la playa.
Renoir utilizó los estudios que había hecho en la costa de Normandía para pintar la escena de la playa detrás de la mujer.
También utilizó el estilo que aprendió en Italia para hacer este cuadro.
El nuevo estilo de Renoir se llama “seco”. Significa que pintó el rostro de la mujer con cuidado, pero el fondo con rápidas pinceladas de pintura.
Japanese version:
この絵は「海辺」と呼ばれています。
ルノワールが描いたものです。
海岸ではなく、アトリエで描かれています。
ルノワールはノルマンディーの海岸で描いた習作を使って、女性の後ろにある海岸の風景を描いたのです。
また、イタリアで学んだスタイルを使ってこの絵を描いています。
ルノワールの新しいスタイルは「ドライ」と呼ばれています。これは、女性の顔は丁寧に描き、背景は素早いストロークで描いたということです。
Adrian is the Editor of MuseumNext and has 20 years’ experience as a journalist, half of which has been writing for the cultural sector.