Aziz Isham
Executive Producer, BRIC TV
As algorithms and artificial intelligence become more ubiquitous, they carry very real dangers for how we process information. Already, there is a trend to ‘outsource’ decisions that are unjust or that increase inequality to algorithms and systems. Combined with the way that AI systems amplify intrinsic bias, we are entering a moment in history where decades of progressive work aimed at reducing the impact of prejudice in everyday life is coming under fire.
This presentation investigates how systems are used to ‘outsource evil’, how the systems themselves are becoming increasingly biased, and how socially aware filmmaking can counteract these negative trends.
—
So this, as you might tell from the opening slides, is not the most positive of presentations. It’s amazing how solutions-oriented most of the work here is. Honestly, I want to talk a little bit about a problem that we have, not necessarily one that we have solved.
This is where I work. It’s an art and media house in downtown Brooklyn called BRIC. BRIC is pretty unique. We spend a lot of time working on diversity. We spend a lot of time thinking and researching about these issues. Over 90% of our programming is free and most of it takes place outside of our building. It takes place in the libraries and the parks and all around, especially in the Borough of Brooklyn.
But regardless of where we are, we are constantly grappling with technology and we are constantly realising that there is this diversity problem with technology that is kind of bigger than us. We’re inviting, all of us are inviting, technology into our sacred spaces. I love this slide, because that is literally what this is. It’s a massive supercomputer inside a 17th-century church. And that’s really cool. For the most part, we’re looking at these opportunities, and I know that we are at BRIC, and thinking about how we can use them to increase access.
But when we invite all of this technology … And I’m really speaking specifically about narrow AI, automatic curation, the kind of big data that is increasingly useful and used. When we invite this software into our sacred spaces, into our museums, into our institutions, as well as obviously into our pockets and our purses and our bodies and our minds, we have to be really careful.
On the one hand, it can be an amazing, transformative experience. It can allow us to access collections that we might not otherwise be able to access in new and interesting ways. It can also kind of see the future for us. When I type “museum” into Google, it kind of already knows where I am. It gives me what it thinks are the three museums that are either closest to me or where I might likely find myself. And obviously that’s happening in our Facebook feeds and every time we use Google or Twitter. These things are happening on an institutional level. They are obviously also happening on a personal level every time we go to research a new project, we go to curate a new project, we go to create a new exhibition. And this is obviously a totally random list, but there it is.
The problem of course is that all of these software programmes are fascists. And I’m not using that in a metaphorical sense. That’s why I put a definition on the word. We’re talking about movements that are nationalistic and racist. That stand for centralised autocracy and that are really based on economic and social regimentation. And this kind of goes against a lot of our institutions, but, unfortunately, this is the problem.
So let’s look at that … Let’s unpack that definition a little bit. Are artificial intelligence, are these automatic sorting mechanisms, are algorithmic curation, are they racist and nationalistic? This is a researcher at MIT, Joy Buolamwini, who works on facial recognition software, and she needs to wear a white mask in order for the software to recognise her.
Maybe some of you have seen a viral video that went up on Twitter about a soap dispenser that only dispenses soap to white hands. This is a problem. You can imagine, the next time you are creating an exhibition … This is going to happen, where someone decides, “Let’s integrate facial recognition software,” and they don’t think that entirely through.
They’re sexist too. There’s a lot of sexism that’s been burned into the technology because of the data sets that all of this technology is based on. This is a tool that Microsoft developed that at face value could be really amazing. Imagine just giving a massive amount of images to a computer and saying, “Hey, tag all of this for me, categorise it.” But a few months ago, a paper came out where they looked at some of these data sets and they realised that these automatic tagging systems were actually reinforcing gender bias. In this case, we can see that someone was mistagged because the gentleman in this picture appears in a kitchen.
And you can do this with any combination of prejudice and technology. You could say, “Hey, chatbots would be great. Let’s think about how we work with those, but then let’s combine them with sexism.” This is a satirical project, but I think a telling one, by Julia Gillgren in Gothenburg, who developed a mansplaining chatbot. It’s cool, it works. You should try it out.
On the non-satirical end of the spectrum, I’m sure many of you saw this a few months ago where people were using Facebook algorithms and Facebook advertising basically to target anti-Semites. So we all use Facebook advertising, I’m sure, and there’s good things about it, but, you know, be careful.
Going back to our definition of fascism, are they autocratic, regimented and inflexible? I mean, yeah, that’s why we love them, right? When we go to the airport, when we came here maybe, you don’t see fights in the airport anymore. When I was a kid, you used to see that all the time. Because you go to the counter and you ask the person, “I’m here a little early, can you change my seat? Can you put me on an earlier flight?” and the response is, “I’m sorry, the system won’t let me do that,” and that’s it, the conversation’s over.
We’ve been doing this, we’ve been training on and really relying on the inflexibility of systems for a really long time. Unfortunately, that’s really changing what the systems are and who we are. To take a really big-picture look at this, we have averted the end of the world on multiple occasions by not doing that.
This is a guy called Stanislav Petrov, who died recently. In 1983, he was manning an early warning centre in the Soviet Union and the satellite warning said there were five nuclear missiles heading towards Soviet space. He looked at the manual and it said, “OK, launch all your nuclear missiles and then call Moscow and tell them to do the same thing.” That was the official response. He did call Moscow; he did not launch the missiles. He actually pretended that there was a problem with the telephone connection and that he was having a lot of problems. He dragged that on for almost 20 minutes. It must have been an incredibly long 20 minutes. At which point, the satellite systems realised those weren’t nuclear missiles, they were a cloud burst. And because of that, he averted … Human flexibility averted a thermonuclear war. It happened once before, in the Cuban Missile Crisis too, similar story.
And do these software systems support severe economic and social regimentation? That’s kind of the big problem. That’s where a lot of this issue comes from: the broadband gap. Where I work in downtown Brooklyn, we’re at about 95% broadband connectivity in the neighbourhoods around BRIC. On the other side of town, in Brownsville, that drops down to 60% connectivity. And that’s just within the Borough of Brooklyn. This is obviously the case throughout the entire country and the world, and it’s getting worse. That means that not only are the affluent over-represented, but they’re also creating much, much more data every day, which is training the systems, like Facebook, like Google, that we mentioned before. So this is really one of the big reasons why this continues to be a problem.
And the final part [unintelligible 00:10:24] of the definition: is there a forcible suppression of opposition? No, they haven’t had to yet, but that’s also part of it. Maybe they won’t have to, because they’re already … These systems are changing who we are and they’re really impacting us. You know, an AI researcher at the University of Utah, Suresh Venkatasubramanian, says that the worst thing that can happen is that things will change and we won’t realise it.
So we’re going to do a quick exercise. It’s like a DIY Google Image search. I want everyone to close your eyes, and when I say “fist”, bring up an image. You can open your eyes now. This is what Google tells us. This is the front page of the Google search results. Notice something in common? I don’t know, maybe this is the image that you thought of and that’s fine. But there are other images too, and maybe some of them are not safe for work and that’s OK too. But when we only have one right answer and we start training ourselves on those search results, we do lose some individual histories.
So we have been feeding these programmes on a steady diet of prejudice and we have expected them to miraculously cleanse themselves and deliver us back some sort of objective truth that we can then use to kind of shape and share our role as teachers and learners and explorers.
OK, one more example. This one’s from China. Some researchers last year developed an AI that, with 89% accuracy, could look at somebody’s face and determine whether they were a criminal or not. All they picked was the face, and it was a big success. They were very happy; pre‑crime, here we come. The problem was that some American researchers then went back and looked at the dataset, and what the AI had really learned was to look for facial irregularities in these pictures, because people with facial irregularities in China were more likely to get pulled over by the cops. So what they were really measuring was how likely you were, based on what you looked like, to get pulled over by the police.
[Break in audio]
What if it’s 95%? What if it’s 98%? When do we … You know, what happens? So a lot of this is rolled into the idea of data colonialism and that’s an umbrella term, but it’s a valuable one. Obviously, we’re not going to go through this whole thing, but I recommend looking at Daniel Raven’s work out of Harvard.
I do want to point out this idea of data agency. It’s kind of that over there. Because we all use data a lot in our practices, I’m sure, but we have to make sure we are using it with the participation of the people involved, for both personally identifiable information as well as demographically identifiable information. That we are making sure that the people whose data we are sourcing are participating and are agreeing to allow us to use that data. It’s a really important note, I think.
And let’s think about how all of this data is influencing curatorial practice. Not just curatorial practice. Obviously, this is probably a long way away. Maybe not so long, maybe forever, who knows? But at some point, we might also have to be thinking about not just the curation of information as we present it and consume it, but also, if it comes to this, the creative side of the practice too. At some point, are we going to be looking at software to create music, environments and images and all of that?
So this is really about, when we invite automation into our creative spaces, doing so responsibly. I don’t think we have the answer at BRIC. But one of the easiest things to do, I think, is to diversify your staff, diversify your experiences. It’s also to make sure that the stories we tell … And I think a lot of the work that’s in the room on the left has a lot of really great ideas behind this, and really telling stories about individuals, going out into the community and helping community members tell their stories, is a really big part of the work that we can do. So we’re going to watch a really quick video and then that’s it. Do we have the video?