22 08 18 Science Week 2
*Speakers: Ben (B), Kathy Reid (K)
*Audience: (A)
*Location:
*Date: 18/8/22
B: As we begin I’d like to acknowledge Australia’s First Nations peoples, the first Australians, as the traditional owners and custodians of this land and give my respects to elders past and present and, through them, to all Australian Aboriginal and Torres Strait Islander people.
Thank you for attending this event either in person or online coming to you from the National Library building on beautiful Ngunnawal and Ngambri country.
Welcome to this event celebrating National Science Week 2022. Speech recognition technology is the second in a series, Cybernetic Thinking for a New World, co-presented by the National Library and the ANU School of Cybernetics. Cybernetics first found form in the 1940s and 1950s as a response to the rapid expansions in computing technology following the Second World War, fusing maths, engineering and philosophy with biology, psychology, anthropology and many other fields.
From its inception cybernetics was a generative intellectual wellspring, shaping everything from AI to critical systems theory, computer-driven art and music, design thinking and, of course, the internet. The idea of cybernetics, of steering a technological object, with humans and the environment in that same loop, is just as relevant today as it was 70 years ago, providing us with hopeful and actionable ways to imagine our futures.
Tonight Kathy Reid will examine speech recognition technology through a cybernetic lens. A communication tool that translates spoken words into text, using voice commands instead of typing, speech recognition technology is increasingly serving diverse users in complex settings. People’s ability to use their chosen language is critical for human dignity and the general wellbeing of society. Language facilitates meaningful interactions with one another, enabling expression and the transmission of history and traditions, and language allows us to construct our future. But what if the technology we use to construct this future doesn’t acknowledge your language?
Kathy Reid is a PhD candidate at the ANU School of Cybernetics and works at the intersection of open source, emerging technologies and technical communities. Over the last 20 years Kathy has held several senior leadership roles in the technology industry and then decided she would return to study, completing a Master of Cybernetics at the School of Cybernetics. Kathy is currently investigating digital voice assistants and what we can do to ensure inclusion and representation is achieved in order to build more equitable speech recognition models. So please join me in welcoming Kathy Reid.
Applause
K: Thank you so much, Ben, for that lovely introduction, I really do appreciate it.
So as Ben mentioned, tonight I'm going to be taking a cybernetic lens to speech recognition technology but before I get to the talk proper there’s three things that I would like to do. I’d like to acknowledge country, I’d like to tell you a little bit about myself and I’d like to tell you what language I’m going to give the talk in tonight.
So first of all I’d like to acknowledge country. We’re here on the unceded lands of the Ngunnawal and Ngambri peoples and I pay my respects to elders past and present and I’d like to acknowledge that the Ngunnawal language previously spoken on these lands lies sleeping.
I’d also like to tell you a little bit about myself. Like many Australians I’m a child of two countries. My mother’s family are from regional Victoria, from the Otways, Eastern Maar country, but my father’s family are from northern Northumbria, Geordie country, so if you’ve ever seen Billy Elliot or Vera or, for those of you who like reality television, Geordie Shore – I never thought I’d reference Geordie Shore in a National Science Week presentation but here we are. People who speak with a Geordie Northumbrian accent use unusual sounds, so we might say something like futher for father, or we might say gununheem for going home, or where is the bonnie wee bairn to say that’s a delightful child.
So the sounds of Northumbrian are incredibly different to the way that my mum spoke and it was from a young age that I got really interested in languages and their variations. That takes me to the third point that I want to make tonight. I’ll be giving the talk tonight in English and you might say well that’s bleeding obvious, Kathy, isn’t it? That’s exactly why I want to talk about it.
We take so much of our spoken language for granted. English is Australia’s only official language even though there are 120 Indigenous languages still spoken here. Did you know that in Australia, according to last year’s census, over five-and-a-half million people speak a language other than English at home? We have Gujarati, Mandarin, Greek, Vietnamese. Twenty-two per cent, more than one in five people in Australia, speak a language other than English at home.
So we take for granted the language that we hear every day and I want us to think about this provocation as I go through the talk, what might it be like to use technology that doesn’t recognise your language, that wasn’t built for you, that doesn’t recognise the sound of your voice, that doesn’t represent you? On that note I’m going to go to the talk proper.
Missed a slide there, sorry. Tonight I’m going to take a cybernetics lens to speech recognition technology and you might be asking the question, what on earth is cybernetics? Well, cybernetics is all about systems, their components and their connections. Pictured here is Norbert Wiener, who is considered the father of cybernetics, with one of his cybernetic systems, Palomilla the moth – there’s a beautiful story behind Palomilla but I’m going to leave that for you as homework.
Cybernetics allows us to understand the histories of systems: how they’ve been shaped, how they develop, what factors have influenced their evolution. Cybernetics gives us the tools to think about systems today, analyse them, pull them apart, decompose them, understand what makes them tick. But crucially cybernetics also lets us think about the futures of systems, the possible futures that they might have, how we might be able to shape or change or intervene in systems to make them work better for everyone. It’s through these three lenses, the history, the present and the future, that I’m going to take us through speech recognition.
So without further ado, a brief history of speech recognition. To understand the technology of today we’re going to jump back in time. Now there are many places I could start the history of speech recognition. I could start it here, somewhere between the 6th and the 4th centuries BCE, with Pāṇini in what is present day India. Pāṇini was a scholar of the Sanskrit language and he’s considered the father of linguistics, and the reason for that is that he identified the building blocks of sound. So if you speak Hindi or Gujarati or any other language that has inherited from Sanskrit you’ll be using technology that was conceived of in the 6th to 4th century BCE.
Or I could choose to start the history here. This chap, Thomas – sorry, I get my Thomases mixed up – this is Thomas Sheridan. He was an educator in the 1700s, he was Irish and he saw rhetoric and public speaking as a way to educate people. But crucially Thomas Sheridan wanted to standardise the way that people spoke English and if you think about the way that we speak English today there are many Englishes in the world, many accents of English. You can only imagine how many there were in the 1700s. He wanted to standardise that to have one best way of speaking English. As we get to dig deeper into speech technology a little bit later we’ll see why Thomas Sheridan’s push, even though it didn’t succeed, has an echo today in speech recognition technology.
Or I could start the story here with my other Thomas. This is the well known inventor, Thomas Edison. This is 1878 in Washington DC. This is one of the very first recording devices, the tin foil phonograph, and you might recognise it. It had a tin foil drum, and a stylus was used to etch the sound recording into it. We won’t have time to play the sound bite, but what you’d hear is him reciting Mary Had a Little Lamb in a very scratchy voice. So this was one of the very first recording devices. It’s the precursor to microphones, it’s the precursor to dictation machines and to record players, for anyone who has an excellent vinyl collection.
But I’m not going to start the story of speech recognition with Pāṇini, with Thomas Sheridan or with Thomas Edison. Instead our story starts in August 1955 at Dartmouth in New Hampshire. The backdrop for this story: 10 years after the end of World War Two, a decade into a cold war with the USSR. We have here Claude Shannon, and if anyone does computer networking we still use Claude Shannon’s information theory in computer networking. We have Marvin Minsky there up the back, founder of the MIT Artificial Intelligence Lab. For anyone who’s into mainframes we have Nathaniel Rochester, who designed a lot of the IBM mainframes. We have John McCarthy, and if there are any LISP programmers in the house, John McCarthy designed the LISP programming language. So we had some minor players in the history of computer science here.
They were getting together at Dartmouth to figure out whether computers, this new technology, could be made to think like people. If we think about why they were thinking that way, then we need to start thinking about spoken language. Humans are the only species on the planet with complex spoken language. Spoken language is incredibly powerful: it allows us to communicate powerful ideas, complex ideas, world-changing ideas, and I think that’s a beautiful photograph to sum that up. Of course that’s Julia Gillard’s misogyny speech in 2012, and a big shoutout to my School of Cybernetics colleague, Andrew Meares, for the photo there.
So spoken language allows us to form complex sentences - I might need two cups of coffee first, but usually it allows us to form complex sentences – and it allows us to use a very large vocabulary. Did you know that the English language has more than a million words? A million words. Complex sentences and a million-word vocabulary are what allow us to communicate those complex ideas, to see power and to see change.
But to communicate you first have to understand the sounds of a language so most of us here will understand the sounds of the English language, A, E, I, O, U. I won’t go into the technical terms for those tonight but different languages have different sounds. Some languages have a trill R. Tom and I were joking about that before. So different languages have different sounds and then you have to understand how those sounds combine to form words. So combine has several sounds in it but they all combine to give you the word combine and then words get combined into sentences and sentences have meaning.
So there’s a huge amount to learn if you are learning spoken language. If we cast our minds back to this crew in the cold war we can think about the situation they found themselves in: a cold war with another superpower, and not many people who speak that language. Can we get computers to understand spoken language, so that we can have machines that understand speech and do the translation for us? That was exactly the declaration of the Dartmouth conference: an attempt will be made to find how to make machines use language, form abstractions and concepts, solve the kinds of problems reserved for people. Sounds quite hefty, doesn’t it?
So how long did they think this would take? Well, they thought it would take a few weeks. It was a summer research project; they thought it would be a few months max, problem solved. Here we are 70 years later and speech recognition is still not a solved problem.
So it took 70 years to get where we are from Dartmouth to today. I want to think a little bit about what that means cybernetically. Dartmouth put speech recognition on the map. Dartmouth made speech recognition important. It was recognised as a national capability. It became a capability that everybody wanted to have, everybody wanted computers to have. Then what happened was that it attracted funding so government funding. What we start to see here cybernetically are some of the forces that have shaped speech recognition. So we start to see some feedback loops. Dartmouth put speech recognition on the map, that attracted funding. We had commercial companies wanting to do research because they could get funding to do that research and suddenly we have momentum. So we can start to see how these forces have shaped speech recognition.
So speech recognition began to progress based off this funding and this interest, but that investment was short-lived and, as we’ll see, speech recognition came up against myriad challenges. How could it work accurately? How could it work quickly? How could it work with a large vocabulary, a million words? How might it recognise all the different voices that people have?
So let’s continue the story. This is the computer, not the chap behind it – this is the Audrey, the automatic digit recogniser. This is circa 1952, a couple of years before Dartmouth, and the gentleman pictured there is John R. Pierce. Now I don’t know if John R. Pierce was the inventor of the Audrey, but he worked at Bell Labs where the Audrey was developed, so I think this is a good representation. The other thing that you’ll note is that the Audrey is about the size of a desk, so not easily luggable; it doesn’t fit into your pocket like a mobile phone.
Now Audrey was one of the first speech recognition products and it could recognise a whole 10 words: it could recognise the digits zero to nine, which for its time was pretty good. But even with the investment in speech recognition it took another 10 years, until the early 1960s, before the hardware scaled down. This is William C Dersch and what he’s showing there is a device called the shoebox, so you see we’ve gone from a computer that’s about the size of half a room to one that fits into a shoebox. Now this only recognised one speaker and only about 16 words: the digits nought through nine, plus six words for things like total and multiply. How’m I doing on time? Might have time to listen to this.
So we’re going to hear a recording of the shoebox or a recording of William Dersch communicating with the shoebox.
Love the dickey bow tie, it’s gorgeous, isn’t it?
Plays audio
Three, four, five, seven, eight, minus seven, total.
So I think that’s enough of that ‘cause he’s about to put me to sleep, that’s for sure. So there’s a couple of things that you’ll notice as he’s speaking to the machine, right? So he’s enunciating his words, seven, total, and he’s speaking very slowly so if you were using the shoebox as an adding machine it probably wouldn’t be worth your while ‘cause it would be so slow to use. So I want you to keep this in mind as we look at some of the other pieces in the history of speech recognition.
So that was William Dersch and the shoebox. So the hardware wasn’t particularly powerful but it had scaled down so we’re starting to see some cybernetics here, the technology’s scaling in different ways, it’s evolving along different trajectories. So even though he had to speak very slowly, even though it was limited vocabulary, the fact that in 1962 we had a speech recognition device that fit into a shoebox and that recognised somebody’s voice is still pretty amazing.
If we jump forward about 15 years – sorry, folks, I’m not that good at driving PowerPoint. Here we are - if we jump forward about 15 years we get to this. This is Bruce Lowerre’s doctoral thesis project – and no, Alex, I’m not going to build one of these – but this is the Harpy. I don’t know where the name comes from, I’m not particularly fond of the name, but this is the Harpy, and where the shoebox could recognise 16 words the Harpy could recognise 1,000 words, so this is a massive, massive leap in the technology.
The downside of the Harpy, though, is that it needed to be run on one of these. Now if we have any IT history buffs in the audience you might recognise it: it’s a Digital Equipment Corporation PDP-10. It’s a mainframe computer, it’s the size of a room, and that’s what was needed to run Harpy to recognise 1,000 words of speech. You might be thinking wow, all that hardware, 1,000 words of speech, it’d be really fast, wouldn’t it? No such luck. The Harpy took 13 to 18 seconds to recognise one second of spoken audio. Can you imagine speaking to an Alexa or speaking to Siri and it taking 13 to 18 seconds to recognise one second of speech? To put that into perspective, my talk will go for about 45 minutes to an hour tonight, and to get Harpy to recognise that would take around 13 to 18 hours. You’d give up on using the technology, wouldn’t you? You’d just be so frustrated with it. So even though we’ve scaled up the vocabulary, the hardware hasn’t come down in size and the speed hasn’t come up, so we’re still constrained a little bit by the technology.
Now off the top of my head Harpy was taught to recognise four people. We saw that the shoebox was tuned to William’s voice, it could only recognise one person, but with the Harpy we’re recognising four people. Now if you think there are eight billion people in the world and we want to recognise all of them, well, four is a good start from one, but it’s still nowhere near eight billion. So that was the Harpy.
Now this is where speech recognition comes up against some roadblocks. About 10 years before the Harpy, about 1966 or so, there’d been some growing scepticism about speech recognition technology and all the funding that it had attracted since Dartmouth. A committee was put together headed by John R. Pierce, the person in the Audrey photograph, and basically this report said: we’re really, really sceptical. We gave you all this money, there was all this funding in the 1950s and 1960s, and you’ve given us a shoebox that can recognise 10 digits. Even 15 years later we can only recognise 1,000 words of the one million in the English language, and it’s going to take a PDP-10 several hours to do it? So there was an incredible scepticism and we entered what’s called the winter of artificial intelligence. It didn’t live up to its promise. So it hit a bit of a roadblock. Shall we continue and see how the story goes?
So this is one of my favourite photographs. This machine was invented about the year I was born; this is the TANGORA. You can see the lovely 1980s beards and the chunky glasses that date this, circa 1980. There’s a couple of things you’ll note about the TANGORA. The TANGORA worked on a PC, so we didn’t have the PDP-10 of the Harpy, we didn’t have the half a room of computer of the Audrey; it was slightly bigger than a shoebox. But what the TANGORA could do, it could recognise 20,000 words, a 20 times jump from the Harpy just four years earlier. Downside and upside: it could recognise anyone’s speech, but the person who was speaking to it had to spend dozens of hours training it. So it would recognise your voice but you had to train it first.
There’s also something else you should know about the TANGORA: it was one of the first devices to use a new type of algorithm for speech recognition, a statistical algorithm that used probabilities. This approach – I won’t go too much into the technical detail tonight – is what allowed it to recognise the 20,000 words. But again we were constrained by hardware. The reason that it couldn’t recognise more than 20,000 words of speech was that it ran on a personal computer, still limited in the calculations that it could do.
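To make the statistical idea concrete, here is a toy sketch in Python of the underlying decision rule: pick the word that maximises acoustic likelihood times language-model prior. The vocabulary and the probabilities are invented for illustration only; this is not the TANGORA's actual model.

```python
# Toy illustration of the statistical approach to speech recognition:
# choose the word w that maximises P(acoustics | w) * P(w).
# All numbers below are invented for illustration only.

# Language model: how likely each word is a priori.
language_model = {"to": 0.20, "two": 0.05, "too": 0.04, "tooth": 0.01}

# Acoustic model: how well each word explains the observed audio features.
acoustic_likelihood = {"to": 0.30, "two": 0.35, "too": 0.33, "tooth": 0.02}

def recognise(words):
    """Return the word with the highest combined score, plus all the scores."""
    scores = {w: acoustic_likelihood[w] * language_model[w] for w in words}
    return max(scores, key=scores.get), scores

best, scores = recognise(language_model)
print(f"best guess: {best}")  # 'to' wins: weaker acoustics, stronger prior
for w, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"  {w:>5}: {s:.4f}")
```

The trade-off between the two probabilities is the heart of the approach: a word the acoustic model slightly prefers can still lose to a word the language model expects far more often.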
So - sorry, I’m behind on my threads, bear with me – the one thing that the TANGORA did do was prove the validity of the statistical approach, and what came next in the history over the ‘80s and ‘90s leveraged that statistical approach. So we start to see in the late 1980s and 1990s the emergence of consumer speech recognition products. I’m sure some of the people in the room here would have used Dragon NaturallySpeaking or ViaVoice, and they picked up where the TANGORA left off. With Dragon NaturallySpeaking or with ViaVoice they could recognise anyone’s speech, as long as you had hours and hours to train them.
What we also started to see here with ViaVoice and with Dragon Dictate was some specialisation. By the late 1990s these products were able to recognise up to about 100,000 words, but they still had a constrained vocabulary, and the way that they constrained that vocabulary in order to do the statistical calculations was specialisation. So they specialised into areas like legal, so you could get Dragon Dictate for Legal or you could get Dragon Dictate for Medical, and so they were able to recognise words that fell into that domain, into that category, and that was a way of constraining the vocabulary.
We can start to see here some of the forces that are shaping technology if we put on our cybernetics lenses again. Here the technology has gone from the lab to the real world, we’ve jumped that commercialisation barrier, and so now we’re starting to get revenues, we’re starting to get some more investment in the technology. It was adopted easily by the consumer market. Here we see it scale to different languages, so you could get Dragon Dictate for French, Dragon Dictate for German, Dragon Dictate for Japanese. Not quite 7,000 languages, but a start from English.
So we start to see the technology scaling in different ways. There was a downside, though, to Dragon Dictate and ViaVoice: they cost thousands and thousands of dollars, so in the equivalent of today’s money a licence for Dragon Dictate would set you back about $12,000 Australian. Cheap, right? Not really. So here’s where we have another story in the history of speech recognition. Research universities didn’t want to pay $12,000, universities just don’t have $12,000 to pay for a piece of software, so they started building their own, and we have two key projects in the history of speech recognition that came out of universities in the 1990s and early 2000s. Kaldi came out of Johns Hopkins, and CMUSphinx, funnily enough, came out of CMU, out of Carnegie Mellon. These two pieces of software were the mainstay of speech recognition well into the 2010s, and what we saw with them was a massive scale to other languages. They were free, anyone could use them. You basically needed a PhD in linguistics to use them, but anyone could use them. They scaled to other languages.
We have Kaldi for an incredible array of eastern European languages. Sphinx was available in several Asian languages and these two products are still in use today. But they kick-started speech recognition, they kick-started research in speech recognition again after the winter. So open source was another way in which we scaled this technology.
There's one more link in the historical chain that I want to talk about. Now anybody who’s a hardware enthusiast will recognise this straight away. This is one of the very first graphics processing units, or GPUs. Do we have any gamers in the house? Ah ha, hello. So you all look a little bit younger, actually a lot younger, than I am, but back in the ‘90s when I was playing games like Prince of Persia and Baldur’s Gate – I’m dating myself there a little bit – this is what made the graphics run so smoothly. Advanced video games in the 1990s needed to do heaps and heaps of calculations, and the GPU was invented to help do those calculations and give us better graphics in video games. If we remember back to the TANGORA, the first to use the statistical method of doing speech recognition: suddenly we get GPUs and it’s like a kick-start for speech recognition. We’re able to do thousands, tens of thousands, hundreds of thousands of statistical computations. We can start to recognise many hundreds of thousands of words. It’s a very short jump from the GPU to the 2000s and the 2010s and where we are today.
Part B. So these changes, commercialisation, better hardware, smaller hardware, significant research capability, have led us to where we are today with speech recognition. Anyone who’s familiar with the iPhone, you probably have Siri, you might talk to Siri. Siri was first put onto Apple iPhones about 2010. Those of you who use Android devices, you got an assistant in about 2016, and some of you might even have these devices in your home. Alexa was released in 2017, followed shortly by Google Home, and speech recognition technology is a key component of these devices. They’ve been massively adopted. In fact industry predictions show that by next year, 2023, there are going to be eight billion voice assistants on the planet. That’s more than one for every single person, and it’s just absolutely incredible.
But it’s not just in voice assistants that we’re using speech recognition technology and now I’m going to do a bit of a shoutout to the audience. Can anyone else tell me where we might find speech recognition technology today? Anyone?
A: Call centres.
K: Call centres, excellent. Any others? Watches, smart speakers. We even have a voice-activated microwave. I’m just very glad that it can’t talk back to me, that’s for sure. No, you can’t have this, you're too fat. But we have a voice-activated microwave. So speech recognition is a component in hundreds and hundreds of different technologies that are all scaling in different ways, that are in our homes, our workplaces, on our desks, in our pockets. So speech recognition, 70 years on, now works quickly, now works for a much larger vocabulary and for many more languages. Problem solved, right? Not so fast. The future’s already here but it’s just not evenly distributed. That’s a quote from one of my favourite authors, William Gibson. The future envisaged by Bruce Lowerre with the Harpy and William C Dersch with the shoebox, that future is here, but it doesn’t work well for everyone.
Now we put on our cybernetic lenses again and we can start to ask some questions, look at systems and components and connections, look at this problem from different angles and ask the right questions. I want to ask two questions here. Who doesn’t speech recognition work for? And, importantly, why?
So speech recognition doesn’t work well for many groups. It doesn’t work particularly well for women, although that’s getting a lot better in the past couple of years. It doesn’t work particularly well for people who have a nonbinary gender. It doesn’t necessarily work well for people who are affirming their gender, although we don’t have a lot of research on this at the moment. It doesn’t necessarily work for people who are elderly. Again there’s a lack of research in this area. That’s a massive gap because, as census data shows, we’re going to have about 1.9 million people who are over the age of 65 in Australia – that’s last year’s census data. Speech recognition isn’t going to work properly for one in six Australians, so we still have some massive gaps here.
At the other end of the spectrum, for people who are younger, anecdotally again, not a lot of research, it doesn’t work as well for people who are young, our children. There are a number of reasons for this. As children, as we’re learning to speak, we have particular speech patterns, and then as we get older our speech patterns change, with normal bodily ageing. There are other groups that speech recognition technology doesn’t work well for either.
So there are a number of groups who speak differently. I’m speaking with an Australian accent. It’s not quite an Australian accent but it’s an Australian accent, and it sounds very different to a British accent or an American accent. There are a number of things that make up an accent: the sounds of the words, Australia; the words themselves, yeah, no, mate. There’s been a bingle at Broadie and the Western’s chokkas back to the servo, going to be late for bevvies at Tommo’s. Try saying that to an American.
So different people speak in different ways, use different words. We use different sounds, we use different sentence construction. So speech recognition doesn’t necessarily work for all the people who speak in different ways. Here I’m going to call back to Thomas Sheridan from the 1700s, the chap who wanted us all to speak English with the same accent. Now I’m very glad that he didn’t get his way and didn’t reduce the diversity of accents but it would have made the speech recognition problem a whole lot easier to solve today.
Interestingly there’s also a connection to justice. This is research by Allison Koenecke out of Stanford in 2020, where she compared the accuracy of speech recognition along racial lines, white Americans compared to Black Americans. Black Americans speaking to cloud-based speech recognition services experience almost double the error rate of white Americans. So the technology doesn’t just fail along gender lines and socioeconomic lines, we also have racial lines here as well. Speech recognition today doesn’t work for everyone. Even if you speak with a privileged accent, as Chief Minister Andrew Barr found out in August last year, speech recognition still doesn’t work perfectly for you. Of course everyone here will know the Canberra meme. Andrew Barr was at a press conference talking about COVID, he was congratulating all of the people of the ACT, he said, you know, well done, Canberrans, and yes, Canberrans, the meme was born.
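The disparity in that study is usually reported as word error rate: the number of substitutions, deletions and insertions needed to turn a system's transcript into the reference, divided by the length of the reference. A minimal sketch of that calculation in Python:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed with a standard edit-distance dynamic programme over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# 2 substitutions out of 6 reference words, so roughly 0.33
print(word_error_rate("there's been a bingle at broadie",
                      "there's been a bingo at brodie"))
```

Comparing the average of this number across demographic groups is how an accuracy gap like the one described above is quantified.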
It's not just accent that speech recognition doesn’t work well for. Remember at the start of my talk I said we only have speech recognition technology for about 200 of the world’s 7,000 languages? Speech recognition technology just isn’t available for those languages. I can talk to a microwave in English but I can’t do basic dictation or transcription in several thousand languages. The future’s already here but it’s not evenly distributed.
So now I want to talk a little bit about the why. Seventy years since the birth of speech recognition, why isn’t it solved? A large part of the reason is this: voice data, which is what we use to train speech recognition models. If you remember the TANGORA that used statistical methods, and you remember Kaldi and Sphinx, all of these are trained on this type of data. What we can see at the top is a representation, a waveform, of an audio file, and what we see at the bottom is what we call the spectrogram. It’s – what’s a good way to describe it? – a way to show how much energy the sound has at each frequency as it changes over time. Different people are going to speak this phrase differently; it rains a lot in Portland, I’m going to say that very differently to Alex, who used to live in Portland. So voice data is at the root of many of our problems with speech recognition.
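For anyone who wants to see that waveform-plus-spectrogram picture for themselves, here is a minimal sketch using NumPy, SciPy and Matplotlib. It assumes a mono 16 kHz WAV file named speech.wav, which is a hypothetical filename standing in for any recording you have to hand.

```python
# Minimal sketch: plot the waveform and spectrogram of a speech recording.
# Assumes a mono 16 kHz WAV file named "speech.wav" (hypothetical filename).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sample_rate, waveform = wavfile.read("speech.wav")

# Short-time Fourier analysis: 25 ms windows with 10 ms hops, typical for speech.
freqs, times, power = spectrogram(
    waveform.astype(float),
    fs=sample_rate,
    nperseg=int(0.025 * sample_rate),
    noverlap=int(0.015 * sample_rate),
)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(np.arange(len(waveform)) / sample_rate, waveform)
ax1.set_ylabel("amplitude")                        # the waveform, the top panel
ax2.pcolormesh(times, freqs, 10 * np.log10(power + 1e-10))
ax2.set_ylabel("frequency (Hz)")                   # the spectrogram: energy per frequency over time
ax2.set_xlabel("time (s)")
plt.show()
```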
Where this data comes from is really important. Someone in the audience mentioned call centres as a source of data. Absolutely correct. Voice assistants are a source of data when we talk to them, and so are YouTube clips and podcasts. Speech data comes from where we use speech. If we put on our cybernetic lenses again we can start to see some patterns. Who uses those technologies? People who are rich enough to afford those technologies. Many of those technologies are internet-connected, so you have to have the internet to use them. So if you're not rich, if you’re not internet-connected, we’re probably not collecting your voice as much, which means that speech recognition won’t work as well for you. So we start to see the connection between data, where it’s gathered and who it’s gathered from, and how speech recognition works or doesn’t work for particular people.
So what can we do about it? Cybernetics offers us many ways to think about systems, to think about where to intervene in systems and how to shape or nudge or put those systems on a different trajectory. How might that work for speech recognition? We could continue along the same trajectory and have speech recognition work for the small subset of the population who uses the devices where speech is gathered, or we could choose to avoid that trajectory and build speech recognition that works better for everyone, and a lot of that comes down to data.
So the first problem we have is that we often don’t have a lot of data from people of particular genders, accents, age ranges. One of the projects that’s helping to solve this is Mozilla’s Common Voice Project and I need to be transparent here that I have a research agreement with Mozilla Common Voice. So Common Voice operates now in 100 languages all over the world and that language count is growing day by day. This platform allows us to gather the data that’s needed for speech recognition and it’s completely free. It means that companies don’t need to pay hundreds of thousands of dollars for speech data and last year we saw the world first release of speech recognition for Kinyarwanda, a language spoken in Rwanda by about 13 million people. Common Voice is now working on Swahili, spoken by 150 million people in eastern Africa who hadn’t had speech recognition until the last couple of years. So we need to gather more data.
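As a pointer for anyone who wants to experiment, here is a hedged sketch of pulling a slice of Common Voice down with the Hugging Face datasets library. The dataset name and version (mozilla-foundation/common_voice_11_0) are assumptions about how the releases are packaged on the Hub, not something covered in the talk, and the dataset is gated, so you need to accept Mozilla's terms there and be logged in first.

```python
# Sketch: download a slice of Common Voice for one language.
# Assumes the Hugging Face `datasets` library and the gated
# "mozilla-foundation/common_voice_11_0" dataset (accept its terms on the Hub
# and log in first); the dataset name/version here is an assumption, so check
# the Hub for current releases.
from datasets import load_dataset, Audio

# "rw" is Kinyarwanda, the world-first release mentioned above.
cv = load_dataset("mozilla-foundation/common_voice_11_0", "rw", split="validation")

# Decode audio at 16 kHz, the rate most speech recognition models expect.
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))

example = cv[0]
print(example["sentence"])                         # the written transcription
print(example["audio"]["array"].shape)             # the raw waveform samples
print(example.get("gender"), example.get("age"))   # demographic metadata, where contributed
```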
I also want to talk a little bit about Indigenous languages. Remember how I said there are 7,000 languages spoken in the world and we have speech recognition for about 200 of them? Many of the languages we don’t have speech recognition for are Indigenous languages, and we’re losing them at a significant rate. As I was doing the graphic for the sunburst in the first couple of slides, the one that showed five-and-a-half million people in Australia speak a language other than English, I was removing rows that had a zero count in the data. I removed about 200 zero-count rows of Indigenous languages. Then it hit me: those rows existed because in previous censuses people did speak those Indigenous languages, and now they have a zero count. We’re losing our Indigenous languages at an alarming rate, and so the International Decade of Indigenous Languages is an effort by UNESCO to bring together stakeholders to produce more resources for Indigenous languages. Speech recognition is one of the ways that we can help preserve those languages.
The other challenge we have with speech recognition and diversity, we don’t have good ways to test. Even if we have good data and we can train a model we don’t have good ways to test if that model works well for everyone and that’s largely because we don’t have the data from diverse people to test with. So again Common Voice is helping us with that problem, we’re able to get more diverse data. So we need to evaluate speech recognition a lot better.
For a lot of practitioners there are two ways that you can get voice and speech data. You can buy it, and it’s very expensive, or you can use free and open source datasets to train speech recognition models; if you’re interested, the datasets are things like Common Voice and LibriSpeech. But often we don’t know what’s in those datasets. It’s a little bit like having cans in the cupboard where the label has fallen off the can, and we need better methods to know what’s in the dataset: what languages are being spoken, what’s being said. So we need to get better at finding out what’s in some of our speech and language datasets, and we need to get better at labelling them.
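A first step towards reading the label on the can is simply summarising the metadata a dataset ships with. A minimal sketch, assuming a Common Voice-style validated.tsv file with gender and age columns; the exact column names vary between releases, so treat this as illustrative rather than definitive.

```python
# Sketch: a quick "what's on the label" audit of a speech dataset's metadata.
# Assumes a Common Voice-style TSV (e.g. validated.tsv) with "gender" and "age"
# columns; column names vary between releases, so adapt as needed.
import pandas as pd

clips = pd.read_csv("validated.tsv", sep="\t")

print("clips:", len(clips))
print("\nshare by gender (including unlabelled):")
print(clips["gender"].fillna("unstated").value_counts(normalize=True).round(3))
print("\nshare by age band (including unlabelled):")
print(clips["age"].fillna("unstated").value_counts(normalize=True).round(3))
```

A summary like this makes the gaps visible before any training happens: if a demographic group barely appears in the table, a model trained on the audio will almost certainly work less well for that group.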
So tonight we’ve taken a cybernetic lens to the history of speech recognition, we’ve looked at the ways that different forces have shaped speech recognition technology, investment, lack of investment, hardware, different languages, commercialisation, open source. All of these have had a bearing on how speech recognition has developed to the present day. Cybernetics is also helping us to shape, to change, to create the futures that we want to live in.
So I want to end with that. Thinking cybernetically helps us to understand the past so that we can build the future that we want to live in. I'm going to end the talk there and let you know that if you’ve enjoyed any part of this work, if cybernetics feels like your jam, we’re now recruiting for our new masters program, the Master of Applied Cybernetics, and applications are open until September. If you would like more information my colleague Alan has a little bit more information, thank you, Alan. On that note I’d like to say thank you and open up for questions.
Applause
B: Thank you so much, Kathy, for that really, really interesting presentation. I know it’s got me thinking much more deeply about the way we interact with speech recognition technology. We do have some time for some questions now. As the presentation is being livestreamed, we just ask, if you do have a question, that you wait for the microphone to come to you first, for the benefit of people watching online. So don’t be shy, any questions?
A: Yeah, you spoke about early speech recognition systems. I was wondering how they were teaching those machines to detect the words that were spoken?
K: Okay, so it depends on the technology, so I'm going to explain that using the shoebox as an example. The way that the shoebox worked to detect speech, it broke up every word into three parts, and in linguistics we have types of sounds, like popping sounds, plosives, that’s what we call popping sounds. It broke up the numbers nought to nine into three sounds each, and that was a unique combination. So if you think of the number five, when you say five it’s a plosive at the start, five, it’s very – five is angry. So for all of the words it had three things that activated, and the recogniser could determine whether the sound was a plosive, a hiss or a soft sound, and so it was able to distinguish based on just three types of sound in the word. But as soon as you start to get more words that’s not granular enough to recognise them, but that’s how they did it for the shoebox.
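A toy illustration of that idea: each word reduced to a short signature of coarse sound classes, and recognition done by matching signatures. The signatures below are invented for illustration and are not the shoebox's actual acoustic categories.

```python
# Toy illustration of the shoebox idea: reduce each word to a short sequence of
# coarse sound classes and recognise by exact match. The per-word signatures are
# invented for illustration; they are not the real device's acoustic categories.
SOUND_CLASSES = {"P": "plosive/pop", "H": "hiss", "S": "soft/voiced"}

digit_signatures = {
    ("P", "S", "S"): "two",
    ("H", "S", "P"): "six",
    ("H", "S", "S"): "seven",
    ("S", "S", "P"): "eight",
}

def recognise(signature):
    """Return the word whose coarse-sound signature matches, or None."""
    return digit_signatures.get(tuple(signature))

print(recognise(["H", "S", "P"]))  # -> "six"
```

With only three coarse classes the signatures collide as soon as the vocabulary grows, which is exactly why this scheme topped out at a handful of words.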
A: Thanks very much for a great talk. I’m a sort of boomer and I was around when William Gibson started publishing his novels and how wonderful it was in those days. But I’ve got sort of two questions, the second one’s a sort of – not really needing an answer but the first one is speech ain’t speech. We know that if we asked Sheldon Cooper or Missy Cooper to listen to the same speech they’d hear it differently and so just a string of speech versus the meanings intended like the cultural and linguistic tendencies [unclear] 49:47 are really important. Where are we with respect to that ‘cause I think that's almost as important as the words being said? The second question which is really what I’d like in the world, is are we going to be able to apply that to listen to birdsongs and dog speech and things like that ‘cause I’d love to hear what the dog’s saying to me or the birds waking me up about in the morning?
K: Yeah, I’ve got a Corgi and I’m sure all he’s saying is food, more food. So I’m going to take the questions one at a time. The first question was around the meaning that’s hidden in language, so I can say I love you, I adore you, or if I’m fighting with my partner it’s I love you, too, yeah. What we’re talking about there is something called sentiment detection, and we are getting very, very good now at detecting emotions like happy, I’m so glad to see you, or sarcasm, yeah, I'm glad to see you too. So we’re getting a lot better at sentiment detection, and that’s really a field of its own. On top of speech recognition, which recognises just the words, we’re now able to detect whether there’s an emotion layer, an emotion signal, on top, and we’re getting a lot better at that too.
To your second question, which is about nonhuman speech, it’s not my area so I don’t know a lot about it, but we do have datasets that are available. I think I came across one the other day from LDC. LDC is like the dataset supermarket, so you go to LDC and you say, well, I’d like some Portuguese from Brazil and I’d like some Portuguese from Portugal and I’d like this much data, please. LDC has datasets of vervet monkey calls, so we’re starting to see that appear more and more now in speech datasets, and I think we are going to get to the point where we can recognise, at least at a very basic level, what animals are saying to us. It might not have the same level of complexity as human speech, but I think we’re going to get there very, very soon. SeaQuest DSV where he’s talking to the dolphin, it’s beautiful. Excellent question.
B: Any more questions?
A: Thanks for a wonderful talk. My question’s about oral languages, oral dialects, things that are – we don’t have a written form for. I think the classic example in English is [sounds like George] 52:32. If anyone knows how to write that please tell me. But I'm also a Cantonese speaker and there are Cantonese words in that dialect that we can’t write in written form. There are languages that are oral only. My research is in music and content recognition of music and there are all sorts of sounds that are unnotatable. Doof-doof, we know how to write but the rest of them are a little bit more tricky. Have you looked into the oral unnotatable things – inequities with that? Thank you.
K: So again, orthography versus oral tradition is not my key area. What I do know about it, though, is that with speech recognition algorithms we’re now using a method called transfer learning. So when I showed the slide of the voice data – because this will help me explain the answer – you saw there how we have both the audio and the written transcription? What we’re now able to do with transfer learning is remove the written transcription and just feed the algorithm the audio data, and we’re getting to a point where we no longer need the written transcription. We think this is going to have a lot of impact, a lot of potential, for oral languages, or languages where we don’t have a standard or de facto orthography. So the key area there is transfer learning, and we’re getting very, very close with that.
I think recently – did I see a paper? Maybe. There’s a language that’s used on Rapa Nui, Easter Island, called Rongorongo – sorry, I might be wrong on that – but it’s written boustrophedon, and there have been a lot of attempts to try and identify, even though it is a written language, what it’s actually saying, and we’re using transfer learning methods for those languages as well. We might try and learn from another Pacific language or we might try and learn from a similar language, so transfer learning is what’s helping oral languages. We’re getting to a point where we don’t need the written transcription. Great question as well.
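The learn-from-audio-alone approach described in that answer is what self-supervised models in the wav2vec 2.0 family do; the talk does not name a specific model, so treat the model choice and checkpoint name below as assumptions. A minimal sketch of extracting speech representations from audio with no transcription involved, using the Hugging Face transformers library:

```python
# Sketch: extract speech representations from a model pretrained on audio alone
# (no written transcriptions), which can then be transferred to a new language.
# wav2vec 2.0 is used as one example of this family; the checkpoint name is an
# assumption, not something named in the talk.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

# One second of random noise standing in for a real 16 kHz recording.
waveform = torch.randn(16_000)

inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

print(hidden.shape)  # (1, frames, 768): learned features, with no transcript involved
```

Fine-tuning a small labelled set on top of representations like these is the transfer step: most of what the model knows about speech was learned from unlabelled audio.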
B: There’s time for a couple more questions. Back corner.
K: These are good questions, I’m going to have to brush up.
A: Is text translation for indigenous languages, is that in a better state than what it is for voice recognition?
K: Good question. Up until three weeks ago I would have said no. So I want to first of all distinguish between the two technologies. We have speech recognition, which takes spoken language and writes it into words, and there’s a second technology called machine translation, which takes a set of written words and translates it into another set of written words.
Three weeks ago Facebook released a model called No Language Left Behind, NLLB, that has 200 languages in it, free to download; I’ve been playing around with it a little bit. Some of those languages are indigenous languages, but they’re not niche indigenous languages. Again, the method that they’re using in NLLB is the transfer learning that we’re using for oral languages as well, transfer learning and transformers – sorry, I don’t want to get too much into the technical detail, but what they’re able to do is use one set of translations and then apply that to another language. They’re able to pick up most of the translation.
There are still some phrases that we’re not able to translate particularly well, because some languages have really interesting idioms that don’t translate particularly well. I’m trying to think of some good examples off the top of my head. In Indonesian you have the phrase baik hati, which means kind-hearted, but the literal translation is good liver. So if someone calls you a good liver in Indonesian they actually mean that you're kind-hearted. Some of those translations fall down with machine translation. I don’t know enough about machine translation with indigenous languages to answer with certainty, but what I’d expect is that we lose a lot of the cultural knowledge, a lot of the situated knowledge, with machine translation. So it might translate something like, this is a particular type of tree, but what it might miss is the cultural or the historical significance of that tree to the people that the language comes from. So I think there’s a danger in machine translation for indigenous languages as well.
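For anyone who wants to try the NLLB release mentioned above, here is a hedged sketch using the Hugging Face transformers pipeline. The distilled checkpoint name and the FLORES-style language codes are assumptions about how the release is packaged on the Hub, not details from the talk.

```python
# Sketch: machine translation with the NLLB ("No Language Left Behind") release,
# via the Hugging Face transformers pipeline. The checkpoint name and the
# FLORES-style language codes ("ind_Latn", "eng_Latn") are assumptions about how
# the model is packaged on the Hub, not details given in the talk.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="ind_Latn",   # Indonesian
    tgt_lang="eng_Latn",   # English
)

# The idiom discussed above: "baik hati", literally "good liver", meaning kind-hearted.
print(translator("Dia baik hati.", max_length=40)[0]["translation_text"])
```

Running idioms like this through the model is a quick way to see exactly the failure mode described: the words come across, but the figurative and cultural layer can be lost.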
This lady first and then we’ll go to the lady up the back. So lady with the green jacket first.
A: Thank you for your talk. If you ring a government department often they might say in Australia your voice identifies you and that’s absolutely amazing that my voice is so unique and there are 25 million people in Australia so everyone’s got this unique voice. I find that incredible. What about people that might be mimicking someone else? People that might speak American quite easily like a film star, they might change their voices, how’s that going to be because that would be totally different from their normal speech?
K: So really incredible question there. This is a third type of technology. What we’re talking about there is speech synthesis: where speech-to-text technology, speech recognition, goes from spoken words into text, synthesis goes in the opposite direction, so I feed a computer text and it speaks those words back to me. Now speech synthesis in the last two years has gone ahead leaps and bounds, even further than speech recognition, and what we’re able to do now is called voice cloning. With very limited data, maybe three or four minutes of voice data, I can create a model that speaks exactly like me, and now we’re starting to see the security implications of that technology. Someone could clone Kathy’s voice, ring up the tax office and identify as Kathy to the tax office. We’re at that point with voice cloning and with speech synthesis, so it’s raising all sorts of questions about security, about identity, but in the voice marketing space it’s also raising questions of copyright.
So there’s a very famous case. The lady who voiced Siri, Bev Standing, didn’t realise when she was recording the voice that it would be used on Siri. So there she was, she did a voice recording, a gig for her agency, and the next thing she’s the voice of Siri on the iPhone. So we’re coming into issues of copyright, ownership of voices, even ownership of likenesses, because it’s one thing to copyright a spoken recording, but what do we do about copyright of a likeness of a voice that may not be generated by a person? So we’ve got all of these sorts of regulation and copyright issues emerging. Another excellent question.
I think there was one from the lady at the back.
B: We may just have to make that the last one.
A: Thank you for your talk, it was very informative. You alluded to a problem with a gender disparity whereas females had a harder time having their voice accurately recognised. Is that due to the fact that the software was developed mainly by males and to accommodate male voices or is there an extra layer of complexity on that? Could you elaborate on that a bit, please?
K: Again another great question. So there are multiple reasons why – and I’m using binary gender here for ease of communication, I recognise that there are more than two genders. In general women speak with a very different fundamental frequency to men, so if I put my voice onto an oscilloscope you’ll see that it runs at about – oh, about 178 Hz, about that. So women tend to speak about 20 Hz higher than men but, in signal processing terms, women also have a lot more variance in their speech. When we start to do signal processing – again I don’t want to go too much into the weeds – we do something called a Fourier transform on the speech signal and it simplifies the signal. Because women have a lot more variance, a lot of that variance is cancelled out by those transformations; there’s a brilliant paper on this from the ‘90s. That’s part of the reason.
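The fundamental frequency mentioned here (around 178 Hz) can be estimated from a short voiced snippet with a simple classical method, autocorrelation. A minimal sketch in Python with NumPy; this is an illustrative estimator, not the method any particular product uses.

```python
# Sketch: estimate the fundamental frequency (pitch) of a short voiced snippet
# by autocorrelation, a simple classical method.
import numpy as np

def fundamental_frequency(signal, sample_rate, fmin=70.0, fmax=350.0):
    """Return an estimate of f0 in Hz for a voiced frame."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period we consider
    lag_max = int(sample_rate / fmin)   # longest period we consider
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# Synthetic test: a 178 Hz tone sampled at 16 kHz should come back as roughly 178 Hz.
sr = 16_000
t = np.arange(int(0.2 * sr)) / sr
tone = np.sin(2 * np.pi * 178 * t)
print(round(fundamental_frequency(tone, sr), 1))
```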
The other part of the reason is that we just don’t have as many datasets that have women’s voices in them. For example, one of the most widely used datasets for speech recognition is LibriSpeech, but it’s predominantly male voices. Then this connects to things like men having more free time to do things like that because of where the domestic burden falls. So there are all sorts of linkages to things like who speaks to voice devices, where does women’s data get recorded, where are women using speech. There are all sorts of different things that combine to give that outcome, all the threads of cybernetics. Welcome.
B: Well unfortunately, folks, we have run out of time for this evening, thank you for your questions. Would you please join me again in thanking Kathy Reid for speaking to us tonight?
Applause
B: Thanks for coming and we hope to see you back at the Library again very soon. Thank you.
End of recording