Transcript 8 | “Listening Across the Tree of Life”, with Karen Bakker

 
 
 

Kate Armstrong

Welcome to our Interspecies Internet Conversations lecture with Karen Bakker. 

Today we will hear from Karen about her new book, 'The Sounds of Life: How Digital Technologies Are Bringing Us Closer to the Worlds of Animals and Plants', published by Princeton University Press.

So, a short introduction about Karen. She's a professor at the University of British Columbia, a Guggenheim Fellow, and a Fellow of Harvard University's Radcliffe Institute for Advanced Study (2022/2023). Her research explores the interplay between digital innovation, environmental change, governance, and sustainability. She's the author of more than 100 academic articles and has conducted fieldwork on more than four continents. In the past, she has been a board member of the International Institute for Sustainable Development and a member of the United Nations Coalition on Digital Environmental Sustainability (CODES). 

So this lecture will be about how digital technologies have enabled bio- and ecoacousticians to make remarkable breakthroughs in the study of nature's sounds. It presents the research discussed in her aforementioned book. Karen will share fascinating research and findings from recent bioacoustic studies of turtles, coral, bats, and even plants. We look forward to discussing how scientists are mobilizing bioacoustics and ecoacoustics for environmental conservation and ecosystem regeneration, and we're going to pose the question of how this enriches current scientific research on interspecies communication. It is my great pleasure to introduce Karen. She will discuss her research and findings for around 30 to 40 minutes, and then we'll open the floor for discussion. 

So, please, enjoy, and the floor is yours, Karen.

Karen Bakker

Thank you so much. 

Can everyone hear me? Great. 

So, it is a tremendous honor to be here! I see many familiar names in the room, people whose work I've long read, admired, and indeed cited, so I very much look forward to the dialogue. Many of you have done groundbreaking research that has intrigued huge numbers of people. I should note that in covering this research in the book, I am a synthesizer, an intellectual magpie. None of the research I'm presenting here is truly my own; it's yours, in that sense. I'm reflecting back at you some thoughts from exploring this marvelous world of research on ecoacoustics, bioacoustics, and interspecies communication. 

I'll get to the slides in a moment. But you might be wondering: why did she write the book if she doesn't do research on this topic? Well, at UBC I run a project called the Smart Earth Project, and the broad rubric of that project is to study the intersection of digital transformation and environmental change. My perspective, having originally been trained in physics as well as environmental studies, and having done a PhD focusing on climate change, comes from an environmental science and environmental studies standpoint. I've also long had an engagement in the tech sector that I can talk about in the question period, if you like. All of these influences have converged in the Smart Earth Project, which has a number of different dimensions, one of which is tracking the very interesting innovations at the intersection of our newfound ability to manipulate both digital and biological codes, for example things like biobots. I also study the use of digital technology for conservation applications, some of which I will discuss today within that broader research project. 

I'm writing a trilogy, and the Sounds of Life is the first book in the trilogy. The Sounds of Life tells the story of the rediscovery of communication across the tree of life by scientists, some of whom I'll feature in the talk. 

The next book, just to give a sense of the narrative arc, asks what the conservation implications might be: how we might fundamentally alter environmental governance in the 21st century based on these newfound scientific insights. For example, it is now possible to design regulatory regimes that enroll bioacoustics in the protection of species and that change the basis for conservation, for example mobile protected areas rather than protected areas with static or fixed boundaries. 

So, I'll be happy to discuss some of those applications in the Q&A. Now, that I've given you the big overview of my intellectual project, I'm going to dive into the Sounds of Life and I'll share my screen. 

Can people see that? 

So, I'll briefly run through a set of themes in the book and set the stage by exploring something that is, of course, well known to many of you: the remarkable array of sounds made across the tree of life, the biophony and geophony of the planet and its inhabitants. I'll then talk about why the science of bio- and ecoacoustics has undergone an explosion in the past 10 or 15 years. Essentially, digital transformation has rendered it much easier to record large data sets all over the planet, and the use of artificial intelligence to decode patterns in those data sets has led to some remarkable findings, some of which I'll talk about. I'll focus my discussion on some very interesting research results on a variety of species that I think rekindle and expand the terrain upon which we can discuss the existence of non-human language, a controversial term, as I'm sure you are very well aware. I'll also briefly touch on the implications for interspecies communication, though I should signal that this is not the focus of the book; the book itself has one chapter, or maybe two, on interspecies communication. But hopefully some of this will be new, because I'll talk about work being done on, for example, honeybees, which may be less familiar to some of you than the work being done on whales. Finally, I'll conclude with a very brief discussion of the growing awareness of the urgency of addressing the threats of noise pollution for humans and non-humans alike. 

Now, in making these arguments I'm focused on the contemporary period, but let's set the stage with a historical analogy; I'll return to it at the end of the talk. My analogy, which is very much up for discussion, so I welcome critical feedback, is that acoustics, or sonics as one might want to call it, is the new optics. Optics was central to the scientific revolution. The discovery and proliferation of the telescope and microscope had practical implications for the advancement of science several hundred years ago, but also very important philosophical implications in decentering the human: our ability to use the telescope to see further into the heavens and eventually back into time, and our ability to identify what Antonie van Leeuwenhoek called “animalcules”, leading to contemporary insights about microbiomes, or holobionts as Lynn Margulis liked to call them. That's a centuries-long unfolding of scientific discovery. My argument is that today, with sonics, we are just at the beginning of a similar arc of scientific discovery. Some of you might be aware that there are analogies for bioacoustics in other fields like astrophysics, as we've begun listening to the universe, and perhaps even in quantum biology. That is, the universal importance of sound, or biotremology, the sensing of vibrations at multiple scales, is just starting to work its way into all sorts of scientific endeavors, and I'd love to have a conversation about that. But on to this part of my talk: I want to play a guessing game, and I hope you can hear this. I'm going to play a sound, and I'll ask you to guess who is making it.

I'll give it about 30 seconds, and you can put some suggestions in the chat. 

So, that was a bat. Typically, when I give public talks and people hear this sound, they guess that it's a bird. It's a bat, and I'll say more about bats later in the presentation. Some of you might be familiar with the research that's been done on bats by people like Gerald Carter and Mirjam Knörnschild, who have done extensive work using bioacoustics to probe further into the capacity of bats for vocal communication and vocal learning, distinct from echolocation, the latter having been discovered nearly a century ago. These recent discoveries about bat communication are pretty startling, and I'll discuss some of them later in the presentation. 

Advancements in Animal Communication Research

Bats illustrate that even for species we knew to be vocally active, we are able to push the boundaries of understanding vocal communication much further with the advent of digital bioacoustics. Dolphins, of course. Diana, I won't say anything about dolphins because I feel like I would be mansplaining, but I hope you weigh in during the question period. In many ways, of course, dolphin research led the way and inspired many other researchers to begin asking those "what if" questions across the tree of life. One of the key points that bears emphasis here is that much of the earlier research was done on species that are vocally active in the human hearing range. What is new today is the use of digital bioacoustics and ecoacoustics to listen to species that are primarily vocally active at frequencies beyond the human hearing range, that is, in the ultrasonic as well as the infrasonic. This is a tarsier. Tarsiers echolocate and communicate in ultrasound. This is your long-lost cousin. At some point, a common ancestor may have had an ability to echolocate. Humans have lost this ability, except on rare occasions when individuals who are blind, usually born blind, develop the ability to echo-range. It is a truism, but it bears emphasis: humans tend to believe that what we cannot observe does not exist. Until recently, the communicative capacities of species like the tarsier that largely vocalize at frequencies beyond the human hearing range were overlooked.

These species were not well studied, or their capacities were not even known. By way of analogy, think of Katy Payne's fabulous work; she would merit a whole presentation of her own. For those of you who don't know her work, I highly recommend looking up some of her books. Katy Payne's discovery of infrasound communication in elephants is something I suggest is now being replicated across the tree of life. The elephant discovery was itself astounding, of course. But what's even more astounding, as I'll get into in a few minutes, is our discovery of the ability of species without ears, without any apparent means of hearing, to sense ecological information embedded in sound. Now, I can't move on from elephants without having a little bit of their voices come into the room. So here we go.

I never get tired of listening to elephants. Time does not permit a fuller discussion, but for those of you who are very interested in elephants, I'd like to refer you to the work of Lucy King. Following on the work of people like Katy Payne and Joyce Poole, Lucy King has done some very interesting work in the past decade that seeks to identify the meanings embedded in different elephant sounds. One example is her work demonstrating that elephants in East Africa, where she works, have a very specific alarm signal for honeybees. Now, pretty much nothing terrifies the mighty African elephant more than the tiny African honeybee; the bees get inside the trunk and the ears. The elephants have very specific behaviors in response to this alarm: grouping together, even coming together over fairly long distances, and showering one another with dust. Those behaviors are very distinct from the behaviors they exhibit in response to other alarm calls. Lucy King determined this with playback experiments. The elephants also have specific alarm calls for humans; they can describe the different threats that humans pose. They have specific alarm calls for male adult human hunters from one tribe, and distinct calls for male humans from another tribe who do not pose a threat. I saw Con Slobodchikoff was on the call, and this is reminiscent of his work on prairie dogs. It turns out that animals are describing us with much greater specificity than we are capable of describing them. 

Now, elephants we might almost intuitively understand as being capable of this, because of our closeness on the tree of life: large-brained mammals, charismatic megafauna. But I do want to point out that a set of other discoveries is expanding the set of animals known to be communicating at frequencies we did not suspect. Peacocks are a great example; some of this research was actually done in Canada, where I'm from. The celebrated mating dance of the peacock has been known for centuries, but only recently have researchers begun to observe the degree to which the production of infrasound by peacocks is central to that dance. It turns out that the biomechanics of the peacock's tail act like a resonator. We can't hear it, but certainly the peafowl can: the infrasound vibrates the crest on top of the peahen's head. The characteristics of that sonic display turn out to be important for mate selection by the peahens and, of course, may have an evolutionary advantage in terms of communicating through shrubbery, or in environments where the peacocks remain hidden from predators but still want to signal to mating partners. Many species, of course, are sensitive to infrasound: tigers, beavers, rodents. The pantheon of species to which researchers are applying playback experiments to determine this sensitivity to sound is rapidly expanding. I don't know if you saw it recently in the news, but one researcher simply went out and listened to turtles and found that, although turtles had in general been thought to be largely vocally inactive, or mute, every single one of the more than four dozen species studied was vocally active. So we know that there is a lot of communication ongoing. 
And the biophony, the sounds of living organisms, is of course complemented by the geophony: the sounds, if you could hear at these frequencies, of calving glaciers, thunderstorms, and volcanoes, all generating infrasound, as do phenomena like waves crashing over continental shelves. That creates a very low, steady, rhythmic beat, a bit like the drumming heartbeat of our planet. Earthquakes can generate an infrasonic tremor in the atmosphere, which rings the atmosphere like a quiet bell. It is probable that a much larger number of species than we previously understood are able to discern ecologically meaningful information from infrasound. So, to conclude this brief overview of nature's symphony: one might think of the silence we often believe we encounter in nature as an illusion. Humans being creatures that privilege sight over hearing, there is simply an enormous amount of information communicated through acoustic means that passes us by.

Impact of Digital Technologies on Bioacoustics

The advent of digital technologies is changing this. Digital technologies in bioacoustics and ecoacoustics, in particular, have transformed the ability of scientists to record nature's sounds with ease. Whereas the equipment required for a lot of the early recordings might have filled a minivan, with more gear tied on top, these technologies are now literally the size of a smartphone, and some can even fit in your back pocket. With digital transformation come automation, miniaturization, and reductions in cost, and this makes these technologies accessible to citizen scientists. Some of you may know about some of these devices. When I give these talks to the general public, people are quite excited about the AudioMoth, pictured here on the left, which you can buy, or you can order the parts and build it yourself; it's a DIY, open-source device. So, in addition to a rapidly expanding number of researchers using digital technologies to record in all sorts of places, from the Amazon to the Arctic, there is a growing number of citizen scientists. That citizen science movement, and the use of crowdsourcing to label data, et cetera, has given this entire agenda additional energy. 

Logging Practices and Biodiversity Insights

If you like, the visual or analytical analogue of the microscope picture I showed at the beginning of the presentation is, of course, the spectrogram: the graphing technique one uses to map frequency against time. And thanks to scholars like Bernie Krause, we have an intellectual frame with which to parse and analyze the spectrogram. In this particular one we can see insects, birds, and rain, at different frequencies and different times of day. The reason spectrograms are so useful is that, for an ecoacoustician trained to listen to entire soundscapes, the ensembles of sounds made by landscapes, a spectrogram is like a snapshot of ecological health. If species are absent, an acoustic niche might simply disappear, and over time we see an acoustic simplification of the soundscape. This allows very efficient, low-cost, unobtrusive monitoring compared to human monitors or traditional visual means. I like to say a camera can catch a view of an animal walking down the forest path, but a microphone can hear them hiding in the bushes, as long as they're vocally active. With the use of this sort of technology, researchers have, for example, made entirely new discoveries. A previously unknown population of blue whales was discovered in the Indian Ocean just a few years ago, revealed not visually, because they spend most of their time offshore, deep under the surface, but acoustically, through their unique dialect, their unique vocal signature. Similarly, in some cases we're able to detect species decline with much greater accuracy than visual censuses allow, or we're able to hear that species we thought had been extirpated from a particular place are in fact present; we can hear them even if they're hiding and we can't see them. This research is very useful, and I want to show one particular application just to give you a sense of how it is now being used in conservation research. 
So in particular, notably in Latin America, the use of spectrograms to compare different landscape interventions is now increasingly common; I'm going to shorten this given the time we have. This is an image from one paper, cited at the bottom, in Remote Sensing in Ecology and Conservation. The researchers present three soundscapes from a sample of three forest management types, and they're interested in determining which type of forest management enhances or degrades biodiversity. The central image is the spectrogram of the control. On the right you have a plot in which logging occurred, but not ecologically sensitive logging: this is non-FSC, meaning non-Forest Stewardship Council, so clear-cuts, roads, the typical logging activity. You can see the decrease in biodiversity just from reading the spectrogram; you can see the holes appearing in it. But look on the left. Here is another plot that has been logged using FSC (Forest Stewardship Council) techniques, which imply selective logging and no roads, or very few, so much less ecologically invasive. What you see is that this type of logging actually increased biodiversity. You can see the new acoustic niches filled by species that are present in the left-hand spectrogram but not in the middle one. What's going on? Well, if you think about it, as you remove some trees, you move from a uniform canopy to a forest of different heights. Ecological succession means different plants, and this is what leads to greater biodiversity in the ecologically logged landscape. This is just one example of how researchers are using spectrograms in practical applications. Beyond this, there is a scientific agenda centered on building ecoacoustic indices that can reflect the health of ecosystems, and on incorporating those into environmental monitoring and environmental impact assessment protocols. 
That's quite powerful, particularly in marine environments. 
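To make the "frequency against time" idea concrete, here is a minimal sketch of how a spectrogram is computed from a recording. The signal is synthetic, two pure tones standing in for two occupied acoustic niches, and the sampling rate and frequencies are invented purely for illustration:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic "soundscape": a 3 kHz insect-like tone plus a 500 Hz bird-like
# tone, sampled at 16 kHz for 2 seconds. (Purely illustrative signals.)
fs = 16_000
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 3000 * t) + 0.5 * np.sin(2 * np.pi * 500 * t)

# Map frequency against time: each column of Sxx is the power spectrum
# of one short window of the recording.
freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)

# The two occupied "acoustic niches" appear as the two strongest frequency
# bins, near 500 Hz and 3000 Hz. A missing species would leave a gap here.
strongest = sorted(float(f) for f in freqs[np.argsort(Sxx.mean(axis=1))[-2:]])
print(strongest)
```

In a real ecoacoustic workflow, the same array would be rendered as an image and read for gaps and simplification over time.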

Non-human Language

Now, in the interest of time, I'll move right along. I love the conservation applications and welcome questions about those, but I did want to cover another important implication of this rapidly growing research agenda on bioacoustics and ecoacoustics. For the book, I surveyed a few thousand research papers and over 4,000 researchers, so there is a lot to say; I can only skim the surface in today's presentation, but I'll give you the work of a couple of researchers as examples of the sorts of findings that are sparking debate. Gerald Carter at Ohio State studies vampire bats, which I think is pretty cool; if I ever went back and did another PhD, I would love to do one on vampire bats. He has a great job. His work entails the use of digital bioacoustic recordings combined with positional data derived from putting very small tags, like AirTags, onto bats. There have been some really cool advances in digital tag technology in the past 10 years, some of which he has pioneered, whereby you don't have to retrieve the tag to get the information: anytime a tagged bat comes close to another tagged bat, the tags mutually share information. So you don't have the problems you once did with more rudimentary tagging systems. Carter's work, along with that of people like Mirjam Knörnschild and Yossi Yovel in Tel Aviv, has helped us see further into the social world of bats. When echolocation was first discovered by Donald Griffin nearly a century ago, researchers reluctantly admitted that bats' biosonar was more finely honed than our finest military or medical devices of the time. But many researchers were very reluctant to believe that bats were vocally active in any other way, and it took several decades to change that. What we have now learned is that many species of bats are extremely vocally active for the purpose of communicating information. 
And by using the sets of technologies that Carter does, whereby we can track where bats are in space and time, who they're interacting with, and what they're doing with the acoustic information, we begin to derive some pretty cool insights. For example, bats trade favors and hold grudges. They have individual vocal labels for family, for kin, for individual bats, for gender. They trade food for sex. They socially distance and go quiet when they are ill. They learn; they engage in vocal learning. This is Knörnschild's work on the greater sac-winged bat, which has the benefit of being a species of bat that is active during the day. Her work has demonstrated that, just as human parents babble at their babies and the babies babble back, eventually moving from babble to adult language, baby bats do the same thing. And as they learn to communicate, the male bats are also learning their family songs, which are an important vector of culture and a mechanism for territorial maintenance in these very interesting and highly competitive groups. So bats are capable of vocal learning, something that was not known a few decades ago. And we're essentially able to determine this thanks to advances in digital bioacoustics: our ability to record large data sets with ease, and our ability to parse the patterns in those data sets with the help of AI. 

Camilla Ferrara is another great example. Her research is amazing; if you don't know it, I highly encourage you to look her up. She works with the Wildlife Conservation Society in Brazil, and her work is on turtles. Now, herpetologists, as reptile and amphibian biologists are called, had long believed that turtles don't make noise. So when Camilla Ferrara, and another PhD researcher in Australia, Julia Giles, who started her PhD research around the same time, declared to their research supervisors that they were going to study vocal communication in turtles, they were laughed at; they were told they would never get their PhDs! But Camilla persisted, and her research is astounding. I'll just mention a couple of the findings. She has identified over 200 unique vocalizations made by the main species she studies, the South American river turtle, Podocnemis expansa. That species had the unfortunate fate of being highly prized by colonial settlers for meat; although they were once "more numerous than mosquitoes" in the Amazon, in the words of the naturalist Bates, they are highly endangered now. And yet these freshwater turtles still exhibit a seemingly uncanny capacity to gather, over long distances across the reaches of the Amazon, at the nesting beaches where the mothers lay their eggs every year. How they do that remained somewhat of a mystery, partly unraveled by Camilla Ferrara, who has demonstrated vocal communication by turtles. Now, why, you might ask, had we not discovered this before? It turns out turtle communication is very low frequency and very intermittent. They're not very vocally active; they're not chatty. There is a very long turn-taking interval between one vocalization by one organism and the response. Essentially, we could have heard turtles earlier had we chosen to pay attention, but scientists didn't. 
In recording these sounds, Camilla has begun to answer this mystery of the coordination of turtle behavior. But even more astoundingly, she discovered that turtle embryos make noise in their shells before they hatch. The way she discovered this was an accident: microphones were inserted into the turtle nests on the hatching beaches because she wanted to know when, after hatching, the baby turtles would start to make noise. In fact, she found, to her astonishment, that they were making noise before the shells had even begun to crack, and it turns out they use specific sounds to coordinate the moment of their hatching. This has obvious advantages in terms of survival, as the baby turtles run a risk from predators as they move from the nest to the water. Still an astounding finding. Even more astoundingly, she also determined that the mother turtles, which were thought to abandon the nests after laying the eggs, were in fact waiting in the water nearby, calling to the baby turtles. By monitoring the mothers and the babies with drones and tags, she has determined that the mothers guide the babies to safety in the flooded forest, away from predators. Thus she found the first evidence of parental care in chelonians, all thanks to bioacoustics. 

So, Podocnemis expansa. As alien as reptiles might feel, this is still somewhat believable for us. Why? Because turtles do have ears, albeit very different from ours. One of the most astounding findings in bioacoustics, however, has been the discovery that species without ears or any apparent means of hearing, including plants and coral, are able to derive ecologically meaningful information from sound. This is Steve Simpson. He works on fish larvae, coral larvae, and coral reefs, and I'll briefly talk about his research because it's so fascinating. Some of you might know his earlier research, in which he documented the ability of fish larvae to distinguish different sounds. You can do the experiments in the lab with choice chambers; it's like the maze-style experiment you would normally use with animals, but he uses it with all sorts of organisms, including coral, which I'll get to in a moment. The fish larvae are presented with different arms of the maze, at the end of which are different sounds: white noise, human music, the sound of a degraded reef, or the sound of a healthy reef. The fish larvae consistently pick the healthy reef. In open-ocean experiments and large-scale tagging of fish larvae as well, researchers have determined that fish larvae can actually hear the sounds of reefs and navigate towards them. Using a similar experimental methodology, Simpson has also demonstrated that coral larvae are capable of the same thing. I wish I had a picture of coral larvae; I should put one in the presentation. They are microscopic blobs with no central nervous system, covered in cilia. The possibility that they could sense sound seemed outlandish even to Steve Simpson; a group of Dutch researchers suggested he run the experiment just to rule out their wacky hypothesis. But to his astonishment, he found that when he ran these experiments with coral larvae, they, like the fish larvae, could distinguish the sounds of healthy reefs. 
Even more astoundingly, when presented with the sound of a random healthy reef versus the sound of the home reef, the reef where they were born, the coral larvae actually selected the home reef. Some of you who dive may have seen a mass coral spawning event on the Great Barrier Reef, where Simpson works: these underwater fireworks that occur at a full moon across the vast expanse of the reef. The coral larvae wash out to sea, so they have a very brief window of time in which to imprint on the sounds of the reef. Somehow they do imprint on that coral reef lullaby, and they are able to detect it across up to a mile of open ocean and swim back towards the reef. The best guess we have about how they do this is that they use their cilia, just like the cilia inside your ears that are allowing you to listen to my presentation; the cilia can sense particle motion in the water. That doesn't provide a full explanation, but it explains the mechanoreception, how they could receive the acoustic information in the first place. How they then understand or process that information in order to swim back to the reef remains a really wonderful mystery.
Now, what are the implications of this for interspecies communication, which is the focus of your community? One thing I will say is that I think it's time for the scientific community to start discussing a hypothesis that runs something like this: rather than assuming that a species is insensate to sound unless it is proven to be vocally active, we probably need to flip that assumption on its head and assume that all living organisms are sensitive to sound until proven otherwise. 

From an evolutionary perspective, this might make intuitive sense. Sound is a signal that travels relatively quickly and is relatively low-cost to produce, as opposed to, say, a biochemical signal, so sensing that information could confer an evolutionary advantage. Organisms like coral larvae have not evolved eyes, but they could nonetheless gain an evolutionary advantage from being able to sense acoustic signals in the marine environment. So, I'll look forward to your questions and discussion on that point. But let's move on to interspecies communication briefly, and then I'll stop. 

Interspecies Communication

One thing I will say about this current agenda, which is fast proliferating, is that the natural world is already engaged in interspecies communication between species that have ecological relationships. One example is Yossi Yovel's work in Israel documenting the conversations between bees and flowers. The sound of a buzzing bee played close to a flower is sufficient to cause that flower to generate an increased flow of sweeter nectar, and there is a hypothesis that the flowers are also emitting sound to attract the honeybees. Yovel also did some very interesting experiments on tomato and tobacco plants, which make very faint ultrasonic sounds. The sounds differ depending on whether the plants are well hydrated and intact, dehydrated, or wounded through cutting of the leaves. Yovel recorded the different sounds made by his test plants and then trained an algorithm that, just by listening, could determine whether a plant was hydrated, dehydrated, or injured. If the algorithm can listen, then presumably insects can too, since we know some insects, like moths, can hear in ultrasound; perhaps other species are listening as well. So we're only at the beginning of the discovery of all the interspecies forms of communication that are ongoing. 
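As a sketch of the pipeline just described (record sounds, extract features, train a classifier), the following uses entirely synthetic, hypothetical "click" features rather than real plant recordings; the feature names and numbers are invented, and a random forest stands in for whatever model the researchers actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in features per recorded ultrasonic click:
# [mean frequency (kHz), click rate (clicks/hour), bandwidth (kHz)].
# These numbers are invented purely to illustrate the classification step.
def make_clicks(center, n):
    return rng.normal(loc=center, scale=[2.0, 3.0, 1.0], size=(n, 3))

X = np.vstack([
    make_clicks([50, 5, 8], 200),    # "hydrated": few, low-rate clicks
    make_clicks([55, 30, 10], 200),  # "dehydrated": many more clicks
    make_clicks([60, 20, 14], 200),  # "cut": different spectral shape
])
y = np.repeat(["hydrated", "dehydrated", "cut"], 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train on labeled clicks, then judge a plant's state "just by listening".
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is only the shape of the workflow: given labeled recordings, an off-the-shelf classifier can separate plant states, which is what makes the "if an algorithm can listen, perhaps insects do too" inference interesting.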

Another very interesting avenue is acoustic tuning between different species. Bats as pollinators have co-evolved, it seems, to be acoustically attuned with certain plants whose flowers and leaves have specific shapes that are highly attractive to echolocation. This might be because the leaves or flowers or the plants' shapes have evolved to reflect echolocation in a certain way. One tropical vine, for example, has a shape like a cat's eye mirror – acoustically invariant to a bat – which will attract the bat and thus increase pollination success for the plant. Acoustic tuning may also be widespread in nature. And again, this is a research agenda that's only now getting off the ground. In addition to this sort of interspecies communication and acoustic tuning, scientists – and many people on this call might consider themselves part of this community – are of course searching for something like a Rosetta stone. That is the archaeological artifact that enabled scholars to translate between the then-unknown Egyptian hieroglyphics and other scripts that they were able to read. The question that researchers are posing today is whether there is a contemporary equivalent of a Rosetta stone, enabled perhaps by artificial intelligence, that would enable us to decode the communication patterns of other species and perhaps communicate back to them. And some of you will of course be familiar with the sort of work that's being done with embeddings, and the claim that because natural language processing algorithms are now so good at translating between different human languages, even without a pre-existing bilingual dictionary, one might be able to imagine a similar methodological approach to non-human languages. 
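The geometric intuition behind that embedding claim can be shown in toy form. The sketch below makes two strong simplifying assumptions not present in the real methods: that the two "vocabularies" embed their concepts with identical geometry up to rotation, and that we already know which vector pairs with which. Under those assumptions, a single orthogonal Procrustes step recovers the map between the spaces; genuinely unsupervised approaches must also discover the pairing, which is the hard part.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Language A": 100 concepts embedded in 16 dimensions.
X = rng.standard_normal((100, 16))

# "Language B": the same concepts, same geometry, but in rotated coordinates.
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # random orthogonal map
Y = X @ Q

# Orthogonal Procrustes: the rotation R minimizing ||X @ R - Y||.
U, _, Vt = np.linalg.svd(X.T @ Y)
R = U @ Vt

err = np.linalg.norm(X @ R - Y)
print(f"alignment error: {err:.1e}")  # near zero: the spaces line up
```

If the shared-geometry assumption fails – as it well might between, say, human speech and whale vocalizations – no such rotation exists, which is one concrete way of stating the philosophical worry that follows.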
Now, philosophers would resist these claims – I'll just summarize one or two of the arguments very briefly. The assumption that other species would exhibit linguistic or communication patterns in any way analogous to humans overlooks a point that philosophers like Wittgenstein and Nagel have made: that our embodied experience of communication is so important, and so unique, that it might prevent interspecies translation. Wittgenstein is famous for saying: “If a lion could speak, we could not understand him”. And of course, Thomas Nagel's famous paper, “What is it like to be a bat?”, argued that even if a bat had a sense of consciousness and language, we could never understand the bat because of this mutual intelligibility problem. To put it succinctly, I think the time is ripe for a conversation about whether Wittgenstein and Nagel are about to be proved wrong, because they didn't account for the invention of a biodigital intermediary. That is, you and I can never buzz like a bee. 

Philosophical Perspectives

You and I can never echolocate like a bat – but maybe our computers can, maybe our soft robots can. Again, it's an open question, but it is a very interesting question that a number of researchers are now pursuing, and I'll just conclude with some research by Tim Landgraf on honeybees. Of course, honeybee language has been observed since antiquity; Karl von Frisch won the Nobel Prize for studying it in the mid-20th century. Honeybee language, we now know, is vibrational, acoustic, and positional relative to gravity and the position of the sun, because bees can see polarized light. So, a very complex language. But it may be possible – and some researchers are trying – to combine computer vision and bioacoustics to create, in a sense, a honeybee dictionary. One of these researchers, Tim Landgraf, in Berlin, is a computer scientist and engineer who has compiled large data sets of honeybee signals, only a fraction of which we have any sense of what they might mean. We know there's a stop signal. We know queens have their own unique signals. We believe some other signals might translate to human concepts like begging. But really we have very, very little idea what these bee sounds mean, except for one – the waggle dance, for which Karl von Frisch won his Nobel Prize. What Tim Landgraf has done, and I'll just show you a brief video from his lab, is encode the waggle dance into a robot to attempt to convey information to the bees. The waggle dance conveys information on the location of a nectar source – the angles, the duration. All of this is encoded information for bees, which can tell each other about new nectar sources with great specificity, accuracy, and precision, even over long distances and through many obstacles. Now, Tim Landgraf's robot is not yet working very well. He's only ever managed to convey the location of a new nectar source once with this robot. 
But this is illustrative of the kind of work that is now being done that is combining the analysis of communication and acoustic patterns with soft robots that may one day be able to speak back to other species. 

Now, this raises very serious ethical questions, because this kind of technology could be used to further domesticate and exploit other species, as well as to develop a deeper understanding of and empathy for them. Certainly some of the playback experiments that are now being proposed, I think, require ethical scrutiny, and don't always get it, and I do think this is a matter of concern. In closing, I did want to mention a couple of other projects. There's the CETI Project, a team of researchers including David Gruber, who's doing some very interesting work that grew out of the Radcliffe Institute for Advanced Study. There's also the Earth Species Project in Silicon Valley – a number of projects looking at species like sperm whales, all of it building on the work done by innovators like Diana Reiss. We have not yet, of course, broken the barrier of interspecies communication. Yet one thing I wanted to open up to the group is: as specific researchers are trying to do this, how can we begin to integrate this rapidly expanding set of understandings about interspecies communication across the tree of life – between other species that are already engaging in rich dialogues, many of which we're only newly becoming aware of? 

I'm out of time. I think I'll stop. 

But if we have time, I actually would also like to talk about noise pollution, because all of the research that I've covered has also uncovered some very important findings about noise pollution and its effect on animals and even plants. But, I am really keen to get into the discussion, so I'll stop sharing and welcome your questions.

Q&A

Kate Armstrong

Thank you, Karen. Such a wealth of information. I think that everybody is reeling with this incredible synthesis of the field. So this is really, really interesting for all of us. I'll encourage everybody, if you'd like to ask a question or comment or pick up on any of these great provocations that Karen made, just to use the raise-hand option, or you can also, of course, jump in on discussion. I will let you, Karen, take the Q&A, and we can do this as organically as possible. So I think Timothy has his hand already raised, if you want to start off.

Timothy Schwinghamer

Thanks. I'm actually really interested in finding statistical methods for the analysis of acoustic or sonic data. And I was wondering if, in your reading, you came across more statistical approaches rather than AI methods. Which is to say that I'm not looking for machine learning methods, but rather statistical methods that would allow us to actually understand the processes. 

Karen Bakker

Right. Okay, so your question is quite rich. And for the benefit of the group, one of the issues with machine learning methods is that they identify patterns, but they have no understanding. They're narrow, they're brittle. That's why, of course, AI, when applied to human language, is so prone to hallucination or confabulation – with GPT-3 or Galactica, for example. So one should regard machine learning as a tool for researchers to move more quickly to identify patterns. But at the end of the day, it takes painstaking research on these organisms in the field to understand the ecologically meaningful information that might be conveyed, or to link it to behavior. We shouldn't assume machine learning is the holy grail; the limits of AI applied to human language are illustrative of this. Although I would add as a footnote that I do think classical AI methods and neurosymbolic approaches – or hybrid approaches that would combine neurosymbolic methods with machine learning – will probably do much better than the current machine-learning-dominated approaches. 

Okay, now onto your question about statistical methods. I don't actually have a good answer for you. I'm so glad you asked the question, but I'm going to be completely transparent: I don't have a good answer. But I have another question, which is this. One of the challenges that I perceive as one gathers these large data sets – and if there are any of the CETI researchers on the call, I'd love to hear from them – is that when we have these large data sets, we may be constrained by certain human concepts like phonemes. We can't assume that non-human languages have anything analogous to vowels, consonants, phonemes, morphemes. Maybe whales communicate in 3D hieroglyphics. Maybe species have different languages for different places and different times of year. Maybe the whale language spoken in the fall is different than the whale language spoken in the spring. There are so many assumptions that we would have to dispense with. And so I admit I am personally at a loss, so I would like to throw it back to the group. I see David has his hand up. I'd like to hear what other people have to say on this question.

Oh, maybe David doesn't want to speak on this question. Well, the only thing I can say.

David Rothenberg

I wondered if it was my turn. Thank you for your fabulous talk and this wonderful book – the most comprehensive book on this topic that's come out in quite a while. On the statistical methods question, I think that's what most people have been doing in this field for decades: all kinds of very good, useful statistical methods. I think the AI method is to try and use that tradition to go further and find entities we might not notice because we're so human. But actually I had another question, which is: I wondered what you thought of the possibility that some of these sounds, either those we hear or might not be able to hear, might have a function closer to music than language, and that they might have this important sense of meaning, but we might not even know what it is. Like we don't know what music means, but it's very important to us; it communicates all kinds of things. We know that Steve said that music may have evolved before language in human evolution. Most people tend to disagree with him. But whereas Wittgenstein said “if a lion could talk, we would not understand him”, because we're so far away from lions, maybe if a lion's sound is seen as singing or music, it's immediately more comprehensible. And whether that's accurate or not, I think that's one of the reasons why people in so many human languages have talked about animal sounds as songs and music: because it is immediately more accessible than a language we can't understand. I wondered if you had any thoughts about that.

Karen Bakker

Yeah. As you were speaking, a two-by-two matrix popped into my head. You have songs and you have speech, and then you have with words and without words. Humans exhibit all four of those – although speech without words is, you know, gibberish – but truly we do all four of those things. Are other species doing all four of those things, and how would we distinguish? And of course animals do more, because they're also using sound to echolocate, to navigate. So they don't have a two-by-two matrix, they have a much bigger matrix. And how does one distinguish? I think your question is a little more profound than that, though. Beyond the categorization, it's to do with what we might call culture, an emotional content to the vocalizations. And of course, contemporary bio- and ecoacousticians are very careful to avoid any mention of this. They basically use terms from information theory. They don't talk about language, they talk about communication; they don't talk about meaning, they talk about information; they don't talk about names, they talk about individual vocal labels – or, you know, euphemisms. There are a lot of euphemisms being used here. I will say, when I ask researchers this question, I get a very interesting response back. Either they don't want to touch the debates because they're explosive, or they might say something like: we can't begin to assume they're songs, because that's too anthropocentric. Carl Safina has a very nice comeback to this. He argues that not wanting to impose human categories on other species is wise, because that would be a sin of commission. But it's unwise to then commit the sin of omission by not asking the question: what if? We shouldn't assume that songs exist in other species, but that shouldn't prevent us from asking: what if? So somewhere in that uneasy spectrum between sins of commission and sins of omission, we need to be asking the what-if questions. 
I do also just want to mention that I think we might want to expand our notion of what constitutes songs, in the sense that most people hear the word songs and think they're sung for pleasure or amusement or courting. Throughout the book, I interweave deep listening with digital listening, and I talk a lot about this. An initial spur for this book was some of the work I've done with Indigenous communities and Indigenous knowledge – and so songlines, the notion expressed in Australian Aboriginal culture that songs are a way of encoding ecological information over long timescales. I think there is something very important there. One could ask, for example: are there songlines in other species – analogous, let's say in whales, to some form of oral history of the evolution of the ocean – and would they sing those songs at certain places? So, again, I think we would need to be very creative in answering your question, and not assume songs are simply for pleasure or entertainment.

David Rothenberg

Yeah. Do the researchers you've talked to, do any of them consider that the criteria and terminology they use are just another example of human culture? Because they are, you know, they're choosing terminology they think is somehow objective, but it's just another human thing to do, like nature.

Karen Bakker

I mean, if pushed, I'm sure they would admit that. But as we all know, disciplines are both enabling and constraining. And once you as a group have decided on a common set of terms and discourse to advance the empirical field studies, you're not going to be questioning them and redesigning them with each paper. But that's why I think the scientific community is very ripe for this bigger conversation. Some of them will stick their hands up and have that conversation. Not all of them, but some.

David Rothenberg

I wonder if that's one reason you didn't say so much about whales in your book. Because every scientific paper on whale sounds creates new terminology. They're always trying to reinvent this stuff, which makes it very frustrating to write about.

Karen Bakker

Yeah. And the other reason – I mean, whales have been beautifully covered in so many places, but the focus of my book was on the discovery of hidden sound. Whale sound was hidden, then we realized it. Bat echolocation was hidden, then we realized it. And today the equivalent is coral, turtles, plants. I didn't even get to talk about plants in the presentation. So my emphasis was on species we don't realize are vocally active – or acoustically active, rather – and species we don't realize are sensitive to sound. Which leads us to this big hypothesis: that every living organism on the planet, unless proven otherwise, is sensitive to sound, is sensitive to acoustic information. And that's why – I mean, I could have written a whole book on whales, but I was going for this bigger argument.

Karen Bakker

Steve has his hand up. Okay. Darcy, I just want to say there is a possibility that a non-degreed person could work in this field, but I think your entry point is citizen science. I did put some examples of sound walks, citizen science projects, and apps on the website thesoundsoflife.org, and of course Zooniverse is a great site to go to to look for active citizen science projects. I do think that is your entry point into this. So I hope that spurs some interest. Steve, I see you have your hand up.

Steve Crocker

Thank you very much. This is a really spectacular experience for me, and I'm sure for the rest of us here. I have had almost zero experience in this area, but a couple of thoughts have come to mind while listening. I want to put together Maslow's hierarchy of needs and an evolutionary approach to this. It would seem to me, first and foremost, that communication within a species is almost certainly tied to the fundamentals of finding mates, finding food, avoiding predators, and any other things that are at the bottom of the Maslow hierarchy, in some sense of just survival and sustainability – with additional things like music serving perhaps more evolved functions that may be emergent. But just focusing on the bottom of that scale: it would seem to me that no matter what species you look at, there will be some mechanism, some method, of characterizing and communicating those basic concepts. And whether or not those mechanisms are phonemic in the sense of human language or something else, there's almost certainly going to be some segmentation and structure that is just emergent from the nature of the communication of the concepts that have to be communicated. You're not going to have long, long passages that only have meaning if you see the whole passage, because that starts to stress the fundamentals of how you process and parse that information, and so forth. In any case, that, it would seem to me, would form the basis for interspecies communication at these fundamental levels. Whether or not you can get another species to appreciate, you know, a Beatles song is way, way beyond what I would imagine we could accomplish. Although one could dream, I suppose.

Karen Bakker

I mean, Mirjam Knörnschild, who I briefly showed an image of, in Berlin, says something very interesting. She says: I'm not actually that interested in whether bats could speak to me or I could speak to bats – bats may not recognize me as an individual, an entity that they could communicate with. I'm much more interested in what bats have to say to one another. So that is the attitude of some of the researchers there. And indeed, they are looking for the exchange of information that corresponds to the lower levels of Maslow's hierarchy, as you mentioned. But that avoids, I think, a really profound question: we're still making assumptions about, for example, the existence of morphemes. What you mentioned about the length of a particular segment of communication has, of course, been formalized for human languages as the Zipf–Mandelbrot law – that is, the words we use the most are the shortest, because it's energetically less expensive to use short rather than long words. The Zipf–Mandelbrot law holds across all human languages as far as we know. Does it hold true across sets of communication patterns for non-humans? They have very different neurological processing pathways – maybe much more powerful than our own, in the case of whales' acoustic information processing. So some of those assumptions would still need to be tested. Suffice to say, I think your characterization of the interests of researchers in the lower levels of Maslow's hierarchy is correct. And that thus creates a gap, which David Rothenberg referred to: very few researchers today are looking at what we would call the higher levels of Maslow's hierarchy with respect to non-human communication. 
I think that's the next generation of brave researchers who are going to be asking those questions, and they will encounter resistance, because that gets us closer to those very tricky debates about non-human intelligence and non-human consciousness, which are so explosive. So pre-tenure researchers find them difficult to navigate.
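The regularity Karen mentions – the most-used words tend to be the shortest – can be checked mechanically on any corpus. A toy illustration on a snippet of English follows; the passage and the frequency-based median split are arbitrary choices made for illustration, and bioacousticians would run the same comparison on catalogued call types and their durations rather than words.

```python
from collections import Counter
import re

text = (
    "the sounds of the natural world carry information and the species that "
    "produce them use sound to find food to find mates and to avoid predators "
    "across the tree of life researchers record these sounds and search for "
    "patterns in them because patterns in communication often reflect the "
    "ecological pressures that shaped the species producing them"
)

# Count word types and rank them from most to least frequent.
freq = Counter(re.findall(r"[a-z]+", text.lower()))
ranked = [w for w, _ in freq.most_common()]
half = len(ranked) // 2
common, rare = ranked[:half], ranked[half:]

def mean_len(words):
    return sum(map(len, words)) / len(words)

print(f"mean length, frequent half: {mean_len(common):.2f}")
print(f"mean length, rare half:     {mean_len(rare):.2f}")
```

For natural English the frequent half (dominated by short function words like "the" and "to") comes out clearly shorter; the open empirical question is whether the same split behaves this way for a repertoire of whale or bat call types.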

Steve Crocker

If I might, just very briefly: it occurs to me that the closest interspecies communication that comes to mind is between dog and man, dog being man's best friend. And it would be really pretty interesting to know how dogs conceptualize and understand the role of a human in their life – what their frame is for it. I'll stop. Thank you.

Karen Bakker

Thank you. Okay, well, if we have time to get to dogs, I'd love to, because there are some cool things people are doing with haptic technology as well. I see in the chat David has said the notion that animals do have culture is pretty well accepted. Well, I think we're having to prove it species by species. We had to prove it with whales. We had to prove it with elephants. We had to prove it with bats – that was something the bat research community struggled with. So I would say it's not universally accepted, and it proceeds through the accumulation of a robust amount of empirical field evidence. How far across the tree of life culture extends, I would say, is still debated by some scientists. Ronnie, I see your hand up.

Ronnie Schenkein

Yeah, I'm new to this community. I'm a retired veterinarian, not in good health, and I have an important mystery that I don't want to go to my grave without sharing. Basically, the last person who commented said they didn't know if animals could come to appreciate our music. And I have to say that I lived with a parrot for 14 years who definitely had preferences for different kinds of music. But the most intriguing thing is that she learned to say things in English that made perfect sense. It was clear to me that she could understand the meaning of some of the vocalizations of the wild birds that I was feeding on the other side of the wall from where her cage was. And I don't think we should overlook the possibility of having translators, because parrots have learned to use a great deal of human speech, and most people who have studied this know it's not just mimicry. So if we can really pay attention to what they learn and cultivate that level of communication, they can maybe relate to us what they're aware of from other species. How this worked, I don't know, but I have a lot of notes. I'm just looking for somebody that cares about looking into what I experienced.

Karen Bakker

Thank you. And hopefully, I think this community is a really good one to continue that conversation. Diana, I see your hand up.

Diana Reiss

I just had a comment on what Ronnie said. Thanks. Well, Ronnie, you know, a lot of work has been done by lots of people looking at bird cognition. I don't know if you know Irene Pepperberg's work – she's part of our community. I don't think Irene's here, but, you know, this idea of having birds and other species translate what others of their own species are doing: many, many years ago, Herb Terrace, who was at Columbia University, had a similar thought. He thought if he trained a chimpanzee named Nim to communicate in something akin to American Sign Language, then he could take that chimp out and ask it to translate what other chimps were doing, which is a very interesting notion. It unfortunately never happened. But what's so exciting about our community now, I think, is that there are more and more people who want to both decode and find interfaces – and often finding an interface can help us decode, because we get glimmers of what they may be doing, and we can feed that back into trying to decode their forms of communication. So I'll stop at that. I just wanted to comment, to make sure you knew about Irene's work.

Thanks.

Karen Bakker

Yeah, Irene Pepperberg. Maybe someone can put her name in the chat with one of her books. Okay. In the chat we're still discussing the explosive and controversial nature of concepts of language and culture, which is great. Maybe we'll circle back to that in a moment. That said – Josh, I think you have your hand up. 

Josh Cowan

Yeah, just a quick question. First off, thank you so much for the book and all you're doing. I've been following you and watching other lectures that you've given. You know, in all of them you keep mentioning, as an aside, plant communication and plant-to-plant communication, and yet you never talk about it that much in any of the presentations I've seen. Is there anything you can give me – a tidbit, or a place to look, or whatever?

Karen Bakker

The two researchers I would refer you to are Monica Gagliano and Heidi Appel. Monica Gagliano's work – which was very controversial, and which Michael Pollan wrote about in the New Yorker a few years ago – studied the capacity of plants, for example corn seedlings, to respond to sound. And she did something, again, transgressive: she took a maze experiment, which is a typical animal protocol, and applied it to plants. Corn seedlings were played the sounds of running water. No running water was present; there was no moisture gradient in the soil. But the corn seedlings grew their roots towards the sound of the running water. Now, there are other experiments she's done, and she's doing a bigger project on acoustics, which I hope you will all find out about. She's very interesting. She's now in Australia, running a lab for biological intelligence. 

The other researcher is Heidi Appel, at the University of Toledo. Some of the work I cite in the book is research on Arabidopsis thaliana, a common model organism used in biology. Very briefly, Heidi did some interesting experiments. She and her students played different sounds to the plants, and what they were measuring was whether or not the plants would release defensive chemicals. As it turns out, you can play rain, you can play wind, you can play all sorts of sounds, but the plants only release those defensive chemicals in response to the sound of a predator chewing on leaves. No predator is present, no leaves are being chewed – it's just the sound that they react to. And even more astounding: she played two sounds of insects chewing on plants – one an insect that is not a predator, the other an insect that is a predator. The latter provokes the plant to release its defensive chemicals, but not the former. So the plant can derive ecologically meaningful information from sounds with far greater precision than we can – we probably couldn't distinguish between these two insects. Now, how is the plant doing it? In the case of Arabidopsis thaliana, it has little cilia on its leaves called trichomes. So there's some process of mechanoreception – again, as with the coral larvae, we don't actually know what the process is beyond that to trigger the biochemical release. But as Monica Gagliano says, it makes sense that plants can hear, because biochemical signaling is expensive and slow compared to sound. If you're evolving in a world with sound, at some point it would be an adaptive benefit to be able to hear a predator – to sense the sonic information, the acoustic information, that reveals the presence of a predator. And the presence of cilia gives us a believable mechanism by which they could sense the particle motion in air. So it's like a great discovery behind which lies another mystery. 
How do they do it? So, go follow up on her work. Maybe we'll have the answer in a few years. 

So, Timothy, plants aren't hearing the sound as much as they're feeling it. Now, this leads to a very interesting debate. Peggy Hill, who works on biotremology, argues that vibration is distinct from sound – that biotremology is thus a more fundamental science than acoustics, and that we often confuse vibrations with sound. So if you're interested in that point, Peggy Hill's work on biotremology covers much of the same ground, but she's also looking at, for example, how plants respond to different substrate-borne vibrations. We usually call it sound when it travels through the air, and in this case it was traveling through the air. But plants can also sense ecological information through vibrations traveling through soil, through roots, stems, leaves. There's been some fantastic work done on plants sensing different predators through the vibrations in stems – no airborne sound was made, but they can distinguish whether it's rain falling, a snake approaching, or an insect approaching. So, yes – biotremology. 

Okay, I see two hands up, Renee and Ashley. I actually don't know how long this is supposed to go on for – are we supposed to be finishing at some point? Maybe I'll get Ashley and Renee to ask both their questions, and then I'll kind of take them together, and then maybe get a signal from Kate or Diana as to when we're supposed to wrap up. 

So, Ashley. Yeah, thanks. 

Ashley

I thought this presentation was really interesting. I'm currently a grad student, and the work about the coral polyps was really intriguing. I just had a question about the effects of noise pollution. I know it might be impossible to study, but I wonder what the transition was like in the evolution of a lot of animals and plants when the Industrial Revolution took place and things changed across the world. It's kind of interesting. 

Karen Bakker

That's a huge question. Thank you. I'll also ask Renee to ask their question, and Malcolm. I cannot see everyone who's raising their hands – these little tiles with hands are randomly flashing up in front of me. So if I'm not asking you to ask your question, can you please put it in the chat? Because I can't see everybody. Renee? 

Rene Steinmann

Yes. So the question I had was about. So you talked about these language models, like GPT, and I was wondering how much data is there actually in the bioacoustic community to, let's say, train those language models? Because, I mean, they are based on a huge amount of data. Like, does this amount of data exist? And if it exists, how's the data sharing actually going on? 

Karen Bakker

Yeah, good question. Thank you. And Malcolm, are you still wanting to ask a question?

Malcom

Yes, I am. I'm not a scientist; I'm a reporter by trade, now retired. The point you mentioned about languages – maybe they sing this song in the spring and this song in the fall. I was a reporter on the Navajo Nation, and the Navajo language – I actually wanted to study it before I went out there, because it's so different from everything else. They have words in Navajo that you're not supposed to say during the summer. Like bear: you can't say "shash" during the summer, because that's when the bear is out, and it's disrespectful to call his name – he'll come and get you. But if you say it during the winter, he's hibernating; it's cool, you can talk about bear all you want. So many things in their culture are mediated by seasonal things like that. It could very well be the same here. I think in dolphins we're looking at something very complex, something akin to an echolocation hologram, as you said – that's if they want to communicate anything more complex than emoji, basically. And I just think it's going to be a real challenge for us to break down their intelligence and figure it out. A couple of questions: do you know of any efforts to use the information gathering techniques developed by the DoD and the CIA in the 1970s and 1980s, collectively referred to as remote viewing, to gather information about any biological subjects, particularly dolphins and whales, which are hard to image?

Karen Bakker

Okay, great. Thank you, Malcolm. There's a lot in there. In the interest of time, I'll just touch on a few points made by each of you: Ashley, Rene, and Malcolm. Yes. 

First of all, Malcolm, thank you for mentioning the sort of primacy of Indigenous knowledge. I emphasize in the book that most of the quote unquote discoveries by Western scientists are in fact rediscoveries of ideas long known by Indigenous communities. And much of the first few chapters of the book is an extended discussion of how the Inupiat in Barrow, Alaska, guided bioacousticians like Chris Clark to formulate hypotheses that were then tested and proven to be correct. And so thank you for mentioning that. It's a very, very important insight. 

And that actually links a little bit to what Ashley asked about. Because, Ashley, you asked about the history of the Industrial Revolution. Putting this in broader context in the book, I mention John Borrows, an Anishinaabe legal scholar who writes about the fact that Indigenous communities in Anishinaabe territory say that before colonial settlers came, the plants and animals and other animate beings spoke, but with the coming of colonial settlement, the voices fell silent. And Robin Wall Kimmerer, in her wonderful, wonderful book Braiding Sweetgrass, which, if you haven't read it, I would recommend you all do, talks about this insight that once we all spoke the same language, "we" being a multispecies "we". So there's a lot there that I can't answer in the few minutes I have. But I would encourage you all to go read some of these Indigenous scholars: Robin Wall Kimmerer; John Borrows; and Dylan Robinson, who wrote a really interesting book called "Hungry Listening", which also challenges white, or non-Indigenous, appropriation of Indigenous knowledge in this regard. These are tricky debates. But that bridges to your other question, Ashley, about what it is that is diminishing this communicative interplay, this interchange. Noise pollution, I think, is one issue. Many of you probably know about the research on human noise pollution; that research field is growing fairly quickly. Noise pollution, even at the ambient levels we accept in most urban environments, is associated with cardiovascular risk, heart attack and stroke, cognitive impairment, dementia, and developmental delays. We know noise is very bad for humans, but we're only just now realizing how bad it is for non-humans. 
There was a very important meta-review that came out in the past year on the impact of noise pollution on marine environments, especially with the expansion of seismic exploration and the very loud seismic noises it puts into the ocean, which of course hamper reproduction and hamper navigation, but may also kill outright, and not only animals, even plants. Marta Solé has done some amazing work on Posidonia oceanica, a Mediterranean seagrass, which demonstrates that loud seismic noises can essentially kill seagrass, which has been disappearing at an alarming rate, with very significant biodiversity and carbon-sink consequences. 

Just in the interest of time, I'm going to move on to the question about GPT-3 and data sets. Okay, so the evolution of these large language models towards a greater ability to translate on the basis of less and less data is pretty astounding. There are lots of different language models, and GPT-3 is not actually the one that's always used by researchers, although it sometimes is. But essentially we are now able to process what are called low-shot or zero-shot languages, where you don't have a lot of data and you don't have a bilingual dictionary. So that gives researchers more hope that, on the basis of relatively small data sets of non-human sound, we could identify some meaningful patterns, subject to all the caveats we just discussed. Of course, running alongside that is all this computer vision work that's identifying expressions. Some of you know about the facial recognition technology, the computer vision, that can identify emotions in a human face. Researchers have done the same thing with mice. There's a really interesting paper where they coded an AI that identifies five different emotions in a mouse: fear, anger, contentment… Probably combining a computer vision data set with a bioacoustics data set will be what's useful for some species. But the data sets still need to be cleaned. I don't want to diminish the work that's being done: manual labeling is still time-consuming, and I don't think data sharing is straightforward. There is a real data ethics concern here, because a lot of researchers say, let's just share the data sets. But what about data privacy? Essentially, we're eavesdropping on non-humans when we do this work. If they were considered non-human persons from a legal perspective, we couldn't just use this data and share it willy-nilly. 
And then of course there's a very important debate about Indigenous data sovereignty, where Indigenous communities argue that the data harvested from their traditional territories actually belongs to them and should not be shared. So the whole data piece is very complex, ethically as well as technically. I don't know if we're out of time. 

Diana and Kate, let's do a check.

Kate Armstrong

Kate is usually the timekeeper. Kate, what do you think?

Yeah, I mean, look, we're at the half past the hour mark. I can see that everybody's still pretty keen to continue chatting, but I think it would be in everyone's best interest that we wrap it up pretty soon. I did want to mention that we will add the recording to our YouTube channel, so everybody who would like to continue with the references and check all of the amazing points that Karen has made can go back and rewatch it. I'll try to do that this week. The other thing is I will share the chat with everybody, because that's been fantastic: again, a huge wealth of knowledge. It's almost a reading list right there for everybody to get into over the holidays. And of course you can always join our Slack channel, where you can join a conversation and connect with other people who are working on similar topics. I can also send that out to everybody who's attended. I don't know, Karen, if you'd like to make any final comments, or if there are any other links that you wanted to share with the group, or…

Karen Bakker

I just shared the link to soundsoflife.org in the chat, so people can go find more examples, and that will lead you to the Smart Earth project, where we actually have a searchable data set and lots of examples from this broader research journey about the intersection of digital transformation, environmental change, and environmental governance. We didn't even get to talk today about the conservation applications, because a lot of these tools are being mobilized right now to help protect endangered species. So maybe that's another talk for another day.

Diana Reiss

I was just going to say that I think we're going to absolutely have to get you back, Karen, to give that second talk. And I just wanted to thank you on behalf of all of us here and our Interspecies Internet board for doing an exquisite talk. And it was really not only mind opening, but ear opening. Okay? And perhaps we'll be even better listeners! 

One of the things that just struck me is that there are so many patterns you talked about that connect us now with the rest of the animal world: this idea that we and others are listening and using that information in an adaptive way, and maybe that's the basis of so much of what we're going to find, when we think about all these other senses that we may be using in combination. So, I thank everybody for joining us and for such amazing questions, and we look forward to getting you back again, Karen. Thank you so much, everybody. 

Have great holidays. Safe and happy holidays to everybody. Bye.

Kate Armstrong

Thank you everyone.

Karen Bakker

Take care. 
