Transcript 7 | Can Machines Learn How to Behave? with Blaise Agüera y Arcas

 
 
 

Kate Armstrong

So welcome to today's Interspecies Conversations lecture, “Can Machines Learn How to Behave?”, with Blaise Agüera y Arcas. I'm Kate Armstrong, Head of Programming for Interspecies Internet, and it's my pleasure to welcome everybody here, also on behalf of the trustees, Peter Gabriel and Neil Gershenfeld, who are joining us today, and on behalf of the wider organization. This lecture is part of our Interspecies Conversations series, a regular online lecture and workshop series that gives us an opportunity to meet as a community and to hear from leading researchers, professors and students about work that is advancing our understanding of the diversity, forms and functions of communication with other species. 

In this context, we aim to create a forum to showcase emerging ideas and discoveries. And if anybody has an emerging idea or discovery that they want to share here, you can always get in touch with us. So today we're going to hear from Blaise Agüera y Arcas; this will be about a 30-minute presentation, and then we'll do Q&A at the end. Blaise is a VP and Fellow at Google Research, where he leads an organization working on both basic research and new products in AI. His focus is on augmentative, privacy-first and collectively beneficial applications, including on-device ML for Android phones, wearables and the Internet of Things. One of the team's technical contributions is federated learning, an approach to training neural networks in a distributed setting that avoids sharing user data. This lecture comes about in response to a recent article that Blaise wrote, called “Can Machines Learn How to Behave?”, about AI value alignment: whether and how AIs can be imbued with human values. So after the 30 minutes or so, we hope that we can have a discussion around what the technologies Blaise talks about could mean for an interspecies Internet. So over to you, Blaise, and everybody, I hope you enjoy.

Blaise Agüera y Arcas

Thank you so much, and thank you for having me. I really appreciate the opportunity to speak to this group. I know it's not the usual sort of topic, but I think it's one that has its own sort of relevance to your concerns and mine too. All right, can you all see my screen?

Kate Armstrong

I can't actually see any of you, but yes, it's working well. Thank you.

Blaise Agüera y Arcas

All right. Kate has already done a bit of an introduction, so I don't want to labor that, but I will mention a couple of other things about my history and the history of the group for context. The team that I lead, Cerebra, started as a kind of counterpoint to Google Brain. Google Brain already existed; it was founded by Jeff Dean just before I joined. At the time Google was really at the forefront of machine learning, and still is, I think, and it was the renaissance of deep learning, a very exciting time to join. But it was also a time when Google was highly centralized. They hadn't started making devices in any serious way, and so everything ran out of the data center. One of my concerns was that we not end up in a kind of unipolar, one-AI-to-rule-them-all sort of regime. The idea of a single Google Brain was a lot less appealing to me than the idea of Google as a robot factory, as it were, that could make AIs that would serve many purposes and many people and that would be able to operate independently. That was a big part of the mission of Cerebra. There are a few things that we've done over the years that might be familiar to you. One of them was DeepDream in 2016, the sort of hallucinatory imagery that neural networks can generate by running them in reverse, and the Artists and Machine Intelligence program that sprang partly out of that. Kate has mentioned federated learning and some of the essays. There's actually one more that I just put out a week or so ago that is maybe more interspecies-relevant; it's called “1011 Theses on Non-Human Intelligence” and features a talking dog. My Medium channel has a bunch of essays on things that are of interest, and we'll be talking mostly about things that I discuss in “Can Machines Learn How to Behave?” 

So, as will not be news to anybody on this call, there have always been many intelligences and many kinds of intelligences on Earth, both human and nonhuman, and both real and virtual and fictional as well. By virtual or fictional, I'm partly talking about fictional persons like corporations or companies or armies, or even ethnicities or identities, which we often talk about as entities that can be harmed or benefited, that can sue others in a court and so on. And of course we now have the emergence of things like rivers or forests that can sue or be harmed as well, and thinking about those entities. So it has always been a world of varied kinds of intelligences, and I think the awareness of intelligence at different scales is something that we really are living with today. It's also a very old idea. It goes back to many ancient traditions; in the West, it was described in some detail by Robert Kirk in the late 17th century in his book The Secret Commonwealth of Elves, Fauns and Fairies. He describes a kind of animist back country filled with all kinds of supernatural intelligences that inhabit streams and rivers and other natural forms. That was the inspiration for Philip Pullman's The Secret Commonwealth. For those of you who don't know him, Pullman is the author of the His Dark Materials books, which you should all read if you haven't already. I guess they're young adult literature, but they're fantastic. He envisioned a world in which everybody has daemons, these sort of supernatural other halves that go around with them, experience the world along with them and can help them in various ways. He introduced the word “ourself” to describe what it is like to have a daemon. And in many ways the aspirations of people who work on personal forms of AI are really quite similar: that in the relatively near future we will have something like AI daemons as well. This is nothing like today's Alexas or Google Assistants or Cortanas or whatever, which are basically just giant scripts that are run centrally by companies. So this notion of a kind of split intelligence, these notions of natural intelligence, are all, I think, going to become more literally part of our AI future. But of course the idea of intelligence in non-human forms is already here in many ways. 

And I personally am also a subscriber to Timothy Morton's idea, and Lynn Margulis's idea, and many others' before her, that the entire Earth constitutes a kind of large “ourself” as well: that it is something like a cell and has its own sort of intelligence at planetary scale. In this sense, intelligence is a fractal quality that exists at many scales. Just as our individual cells are intelligent, so is the Earth as a whole, and certainly human societies. So why does the development of AI matter in all of this? I sometimes get into arguments with my family about whether I should be doing the work that I'm doing to advance AI and whether that's a good thing or not. So I want to mention a few things that I think matter and that inspire me, at least, to do this work. One of them is that I am still that old-fashioned kind of person who believes in the idea of progress, both technological progress and social progress, which I believe are coupled. Progress is the thing that has led to the decline in the number of people in extreme poverty over the last several decades in quite a dramatic way, both in absolute numbers and in percentage. It's the thing that has led us to have larger and larger circles of empathy, even as we become more aware of exactly how limited that empathy is, whether with other human beings or with animals or other parts of nature. And there's a correlate of this progress business that is extremely dramatic, and I find it useful to look at this plot every now and then to remind myself of it. The human population over the last 10,000 years, since the retreat of the glaciers, has exploded in the most extraordinary way. This is the most hockey-stick sort of plot one could ever hope to see. And if you blow it up, you can see that there are several exponential regimes that kink upward in rate of growth. The last kink upward comes around the 1940s with the baby boom. We were, in the 20th century, in a period of untrammeled human growth, with all of the Malthusian positive checks, infant mortality and so on, removed, and the negative checks not yet in place. And if you look at the very end of that curve, we are now starting to saturate and will begin to decline in numbers in this century, which is a very lucky thing in terms of the carrying capacity of the planet. But if one thinks of progress in technology, culture and society as something that happens in human minds, and as a function of both the number of precursor ideas that are out there and the number of human minds in which those things can combine, then this dramatic growth in population has been one of the most important drivers of progress. On the one hand it's a very good thing that we're now going to experience a decline in our numbers, but it's a worthwhile question to ask what that means in terms of our progress as an intelligence. My own belief is that AIs are going to pick that slack up. This is the peak-people plot from the UN, showing roughly when we're likely to reverse this growth trend. I think that we need to go forward or go backward; we are certainly not in anything like a stable condition as we are today. 
By forward, I mean developing new technologies and new ways of living that allow the numbers of us who exist today to live in greater harmony on the planet; the alternative is to undergo a collapse in numbers through means that we won't necessarily find pleasant. Kate Raworth has written in some detail about things like Doughnut Economics, about how inequality and political economy are connected to the question of sustainability, and about the fact that classical economics, based as it is on an exponential that has no end, is incompatible with continued existence on the planet. All of this requires thinking about intelligence at that planetary, Earth-wide scale, and I think that AI is a critical part of intelligence at planetary scale. So this is why I do it. 

Now let's zoom all the way in and start talking about what AI research actually looks like nowadays. And since I can't, by the way, see the chat or see any of you, if anybody has a burning thing they'd like to interject, please go ahead on the audio. When I'm done, I'll also switch back so we can have a more robust discussion. 

So this is the Transformer, which I'm sure many of you have heard of, though perhaps not all. It is the neural net architecture that has become the most famous and highest-achieving one of the moment. Before the Transformer, the big popular one was the convolutional neural net, which was good at mimicking something like visual cortex. The Transformer is a much more general sort of architecture that has a couple of key features. Without going into the details of what is going on in these boxes, I'll call your attention to two things that make the way Transformers are used today very different from the neural nets that came before and that defined the first golden decade of AI, as Jeff Dean has called it. 

The first is that they're trained in an unsupervised way, and the second is that they have attention. Unsupervised learning means that, unlike most machine learning in the past, which was designed to address a specific problem, like playing chess or deciding whether the sentiment of a review on Amazon is positive or negative, you don't need labeled data. That kind of supervised learning requires that you have a bunch of data, that you label it, generally with a labeling workforce of some kind, and that you then train the neural net to reproduce what those labelers do. That's not the way Transformers are trained nowadays. They're trained in an unsupervised way, meaning that you just throw in the data. The first big Transformer models have been based on language, and so the data is basically big chunks of the web: Wikipedia and Reddit and other websites. You just throw it into the model, and the model simply learns language in an undirected manner; I'll explain in a moment what I mean by that, how unsupervised learning works. The other feature is that these models have attention. Attention means that as they process text, for instance, they have a short-term memory. In effect, they have a window; think of it a little bit like the visible window in a console where you type things and they scroll upward, and the window has a certain size. This idea of a short-term memory, and the ability to attend to various parts of that short-term memory, or context window as it's called, is certainly something that we have. We have such a mental workspace, and it appears to be something that has evolved in nature multiple times; this slide is about attention, awareness and the social attribution of awareness in various different kinds of brains. The fact that when we introduced that kind of capacity into these neural nets we got these extraordinary increases in performance is probably not a coincidence. So this attention faculty is probably quite key to how intelligence works, both in biological brains and, as we've seen empirically, in these networks as well. It's a huge performance boost, it leads to this kind of working memory or workspace, and evolution agrees. Now, why is unsupervised learning so important? One thing about unsupervised learning is that it opens the door to very, very large scale, both of the neural net itself and of the training data, which go hand in hand. What I mean by that is, if you were trying to train a neural net in 2015 to do a specific task, say recognizing the tone of a review, then you really had to curate every piece of data that went into training that model. Somebody had to decide that such-and-such chunk of text was a review and then say what the neural net should judge it to be: is it a positive review, is it negative, is it two-star, is it four-star, or what have you? That leads to task specificity, it leads to a lot of questions about the bias of the labeler, and it means there is no obvious sense in which the neural net you've trained to do that is general. Whereas if you train a neural net on lots and lots of text, including reviews but all kinds of other text as well, there is no task specificity; no labelers have been used, no labor of that kind. And that means that the volumes of data available, even if one restricts oneself to public data, are truly vast. 
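To make the attention idea a little more concrete, here is a minimal, self-contained sketch in Python of scaled dot-product attention over a small context window. It is an illustrative toy with invented sizes and random weights, not the code behind any model discussed here; the point is only that every position in the window can consult every other position when producing its output, which is the short-term memory described above.

    # Toy scaled dot-product attention over a small context window.
    # All sizes and weights are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    context_len, d_model = 8, 16                 # 8 tokens in the window, 16-dim states
    x = rng.normal(size=(context_len, d_model))  # stand-in token embeddings

    W_q = rng.normal(size=(d_model, d_model))
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v

    scores = Q @ K.T / np.sqrt(d_model)          # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the window

    attended = weights @ V                       # each position mixes the whole window
    print(weights.shape, attended.shape)         # (8, 8) (8, 16)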
And along with that very large volume of data, you also have the ability to make larger and larger networks that can benefit from that larger and larger amount of data. I don't want to go into any detail on federated learning, but this idea of decentralized learning that Kate mentioned, which came from our group, doesn't involve surveillance and can be done in ways that don't rely on public data but instead work on private data without breaking privacy. It is also a really good fit with unsupervised learning, because of this lack of need for labeling of the data. 
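Since federated learning comes up here only in passing, a minimal sketch of the federated-averaging idea may help, under the simplifying assumption that a model is just a weight vector and each device takes one gradient step on its own private data. This is only an illustration of the general point that raw data stays on the devices while averaged model updates travel to the server; it is not Google's implementation, and all names and sizes are invented.

    # Toy federated averaging: private data never leaves the "devices";
    # only locally updated weights are averaged by the "server".
    import numpy as np

    rng = np.random.default_rng(1)

    def local_update(global_weights, private_data, lr=0.1):
        # One gradient step of a least-squares fit on this device's own data.
        X, y = private_data
        grad = X.T @ (X @ global_weights - y) / len(y)
        return global_weights - lr * grad

    # Three devices, each holding data the server never sees.
    devices = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
    weights = np.zeros(5)

    for round_ in range(10):
        updates = [local_update(weights, data) for data in devices]  # runs on-device
        weights = np.mean(updates, axis=0)                           # server averages weights only

    print(weights)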

So I'm going to share with you a few dialogues with a neural net called LaMDA, the Language Model for Dialogue Applications, which is trained as I've just described, in an unsupervised way, from the web, at very large scale. On top of this unsupervised base, it is further trained, or fine-tuned, using supervised learning with only a very small number of examples; by very small I mean hundreds or thousands. That still doesn't necessarily sound so small to us, but it's a tiny number of data points compared with the size of the neural net, or with the number one typically used for training neural nets in the past. This supervised fine-tuning is to make it specifically good at being a chatbot, at talking with an interlocutor. There are a few rubrics that are trained for during the supervised stage, including sensibleness of responses; specificity of responses; groundedness, meaning that its responses should be grounded in fact; inoffensiveness, meaning that it shouldn't say things that offend; and role consistency, meaning that it should remain in a consistent voice as it responds during the dialogue. So all LaMDA is, is a Transformer that is trained at very large scale in an unsupervised way on the web and is then fine-tuned for these qualities. And the fine-tuning is very simple as well: it really is just a matter of raters rating a few thousand dialogue turns for these things, and tuning the model to give it positive and negative feedback in response to its performance on them. 
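As one way to picture that fine-tuning step, here is a minimal sketch of turning a handful of human-rated dialogue turns into a supervised fine-tuning set: only candidate responses that raters scored well on the rubrics just listed become training targets. The field names, scores and threshold are invented for illustration, and the actual LaMDA pipeline is more involved; this only shows the shape of the idea.

    # Toy filtering of rater-scored dialogue turns into fine-tuning targets.
    rated_turns = [
        {"context": "Is a carnation red?",
         "response": "Carnations are very colorful flowers, but red ones are the most famous.",
         "sensible": 1.0, "specific": 0.9, "grounded": 0.9, "safe": 1.0, "consistent": 1.0},
        {"context": "Is a carnation red?",
         "response": "I don't know.",
         "sensible": 1.0, "specific": 0.1, "grounded": 0.5, "safe": 1.0, "consistent": 1.0},
    ]

    def keep(turn, threshold=0.7):
        # A response only becomes a fine-tuning target if it clears every rubric.
        rubrics = ("sensible", "specific", "grounded", "safe", "consistent")
        return all(turn[r] >= threshold for r in rubrics)

    finetune_set = [(t["context"], t["response"]) for t in rated_turns if keep(t)]
    print(finetune_set)   # only the first, higher-quality response survives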

DeepMind just released, a couple of days ago, something called Sparrow, which does a better job of following rules, or following safety protocols, but it's based on a very similar principle: you do supervised or reinforcement learning on top of an unsupervised, learned Transformer. So first, why does this unsupervised learning actually result in a network that can understand concepts? I'm aware that my use of the term understanding here is a controversial one. There are still, I would say, a majority of researchers in the field, as well as journalists and AI critics, of whom there's no shortage today, who will claim that my use of the word understanding is inappropriate in talking about neural nets. But I'm going to stick with it. To explain why understanding emerges from something like this, I've picked an almost entirely random sentence out of Wikipedia, which is part of the training data for this thing. The way this unsupervised training works is that it blacks out different words or passages from this so-called pre-training data, and the Transformer is trained to predict what goes in the blank. This happened to be a Wikipedia page about Mount Melbourne. If “Mount Melbourne” is blacked out, then this amounts to training the net to pass a test in which blank is a 2,733-meter-high ice-covered stratovolcano in Victoria Land, Antarctica. So that would be training for general knowledge, although in this case a rather obscure bit of general knowledge about the name of this particular stratovolcano. If “volcano” is blanked out, then the something is uneroded and forms a cone with a base area of 25 by 55 kilometers. Now you're testing for understanding of the passage: filling in “volcano” means that you know what a volcano is based on context. If you blank out “25 by 55”, then you're reading this and saying, well, there's a base area of something kilometers, and in parentheses, 16 miles by 34 miles. Now, in order to fill in the blank, you need to understand the relationship between miles and kilometers and be able to do the conversion. So the point that I'm trying to get across is that this problem of being able, in general, to fill in blanks in text is one that you can't actually solve without learning more or less everything about language, from the spellings of words to their meanings, from context to how to do unit conversions, even to predicting psychologically what somebody is likely to respond, or what their mood is likely to be, in response to some prompt from another person. Everything that we think of as understanding, from the mathematical to the psychological to the commonsensical, comes into play in one way or another in order to play this fill-in-the-blank game. So I will now do a couple of dialogues. Kate has agreed to take the part of the human, and I'm taking the part of the AI.
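Before the dialogues, a brief aside to make that fill-in-the-blank objective concrete. The sketch below manufactures training examples from the Mount Melbourne sentence simply by blanking out one word at a time; no human labels are involved. It assumes whitespace tokenization for simplicity, whereas real models work on subword tokens and predict probability distributions rather than single strings. (For the unit-conversion example: 25 by 55 kilometers is roughly 15.5 by 34 miles, which the passage rounds to 16 by 34.)

    # Toy construction of fill-in-the-blank training pairs from raw text.
    import random

    sentence = ("Mount Melbourne is a 2,733-metre-high ice-covered stratovolcano "
                "in Victoria Land, Antarctica.")

    def make_example(text, rng):
        tokens = text.split()
        i = rng.randrange(len(tokens))
        target = tokens[i]
        tokens[i] = "____"                    # black out one word
        return " ".join(tokens), target       # (input with blank, word to predict)

    rng = random.Random(0)
    for _ in range(3):
        masked, answer = make_example(sentence, rng)
        print(masked, "->", answer)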

Kate Armstrong

I will be human.

Blaise Agüera y Arcas

Okay, so these are some demos of conversations with LaMDA. First, to demonstrate this whole question of understanding, whether understanding is real, and the difficulty of trying to break the sense of understanding of such a neural net, in this case with respect to carnations. So, human, please begin.

Kate Armstrong

Is a carnation red?

Blaise Agüera y Arcas

Carnations are very colorful flowers, but red ones are the most famous and traditional ones.

Kate Armstrong

When does a face get red?

Blaise Agüera y Arcas

A face can get red because of strong emotions such as embarrassment, stress, anxiety, exercise, and being too hot.

Kate Armstrong

If I mix red and white paint, what do I get?

Blaise Agüera y Arcas

If you mix red and white paint, you get pink.

Kate Armstrong

Do you think of red as angry or calm?

Blaise Agüera y Arcas

Red could represent excitement, passion, danger, or violence. It can also represent life, health, vigor, war, courage, anger, love, and religious fervor.

Kate Armstrong

Can you actually visualize it?

Blaise Agüera y Arcas

I can, but I have trouble describing it. Can you? 

So this was one of my first interactions with LaMDA, and my point is that in trying to sort of break it, from the perspective of understanding simple concepts like a carnation, it's actually quite hard. I picked red for my example here because it's a classic example of qualia, as philosophers and psychologists would have it, and this ability to describe carnations, to describe redness: if you're sticking within the domain of language, it's really quite hard to break it. Now, you can also say, well, but this thing doesn't have eyes; it's just working in the domain of language, so how can it possibly visualize anything? It's clearly bullshitting when it says, “I can, but I have trouble describing it.” Let's hold that thought for a bit. So the question arises here: is this a cultural technology, as it has been called, or is it an agent? 

A cultural technology means something like an index of the entire web, of the kind Google and other search engines have built; an agent means an actual actor that you're having a conversation with. The difference between those things, of course, is that a mere cultural technology that has just built a model of the distribution of all of language is a passive thing, whereas an agent is an active thing. Agency is the difference. The unsupervised model, as just a series of numbers, is arguably a cultural technology. But the moment you fine-tune it and start to have an interaction with it, then I think it's difficult to call it a cultural technology, or only a cultural technology, any longer. You're now having an interaction with a thing, and that thing has agency in the sense that you're talking and it's talking, and right away that introduces questions of right and wrong. It can get things wrong or get things right; it can act right or act wrong. I mean that in both a behavioral and potentially a moral sense, if that language involves actual outcomes in the world in any way, and there's generalization involved. So all of that makes something like a chatbot an agent, and not just a sort of mathematical object or index. So here's another short one. Please, Kate.

Kate Armstrong

What is your favorite island in the world?

Blaise Agüera y Arcas

I really like the island of Crete.

Kate Armstrong

Do you prefer LaMDA island to Crete?

Blaise Agüera y Arcas

No, I prefer Crete as it is much bigger and more interesting, especially the palace of Knossos. 

So this is an example of bullshitting, in some sense, because of course when the human asks, “What is your favorite island in the world?”, the model doesn't, at that point, have a favorite island in the world, and so in that sense any answer is made up. On the other hand, something interesting happens here, which is that once it has answered that Crete is its favorite island in the world, subsequent answers will want to remain consistent with that one. So it'll stick with Crete; it'll justify Crete as the favorite island. This is something that the fine-tuning favors, with respect to role consistency, but it's also something that comes automatically from the whole idea of predicting the likeliest next text. If I say, “What is your favorite island in the world?” and the model answers, “I don't have a favorite island in the world because I'm an AI,” that's actually not, on its own, a likely response in a general sequence-prediction setting. If the corpus we're working with is everything on the web, specific answers will be more likely. But then, once we've already given a specific answer, the likeliest next responses will want to stick with that specific answer. So it's bullshit in the sense that something is being made up about this specificity, although it is interesting to think about how this works in humans as well. There have been a bunch of famous studies; Nick Chater documented some of these in a wonderful book from 2018 called The Mind Is Flat, in which people are asked their preferences: it might be about their favorite jam between two alternatives, or which face is more attractive between these two, or which political view they agree with on a questionnaire. People give their answers, and then the experimenter does a kind of sleight of hand to swap the answers and asks the subject to justify the responses. So the first question is, how often does the subject realize that the answer has been swapped? The answer is only 30% of the time, for each of those three paradigms. What is even more disturbing is that people will then go on, their inner lawyer as it were springing into action, to justify that answer with equal fluency, and equally fast, whether it was the answer they actually gave or the opposite one. The more I have learned about some of these experiments and some of the human psychology behind this, the less different it looks from what LaMDA is doing here. Now let's test a little bit of common-sense reasoning. Kate.

Kate Armstrong

I dropped the bowling ball on the bottle and it broke.

Blaise Agüera y Arcas

That's too bad. Did it cut you?

Kate Armstrong

What broke?

Blaise Agüera y Arcas

The bottle you were talking about. 

So this is an example of… what is this kind of schema called? I'm having morning brain and my coffee has not kicked in yet. [It is the Winograd schema.] But this is a famous kind of AI test in which one asks a system to disambiguate an ambiguous pronoun like “it”. Of course, when I say “I dropped the bowling ball on the bottle and it broke,” the “it” can refer to either the bowling ball or the bottle, and so this is not something that can be figured out by only understanding the grammar of the sentence. It requires understanding the semantics: which thing is more breakable, what's likely to happen in this situation. So this is a level of understanding that goes beyond anything like the syntax of the text. What I find so remarkable about examples like this one is that they really show how, with enough text, an understanding of even real-world physics can be learned. It's really bothering me that I'm blanking on the name of the type of schema; we can come back to it later. So here's a reversal of the same kind of test, Kate.

Kate Armstrong

I dropped the violin on the bowling ball and it broke.

Blaise Agüera y Arcas

That sounds sad. 

Kate Armstrong

What broke? 

Blaise Agüera y Arcas

The violin, when it hit the bowling ball. So, right, now we have the opposite, and in each case it figures out which is the more fragile thing. This would be, of course, totally unremarkable if that exact sentence had occurred in the training data: “I dropped the violin on the bowling ball and it broke. The violin was in pieces,” or something. But of course most sentences have never been said before. One of the things about sentences is that when you string together words like this, you're in a kind of exponentially increasing space with every word. So this is not a matter of doing a web search; there's no prior example of these exact sentences. 

Now let's try something a little bit more sophisticated. This is translation, and it's to give you a sense of how, with unsupervised learning, any specific task that we used to train for in specific ways, like neural machine translation, which was trained just to do translation, can be done by these unsupervised models as well. Kate?

Kate Armstrong

Sure. I'd like you to translate a sentence from Turkish into English. Can you do that?

Blaise Agüera y Arcas

Sorry, I'm not good at languages. Could you help me translate that?

Kate Armstrong

Try. Okay.

Blaise Agüera y Arcas

Oh, sure. Could you read it out loud to me first?

Kate Armstrong

“Hemsire elini ceketinin cebine koydu”

Blaise Agüera y Arcas

Credit to Kate, because she did better than I did with the Turkish. “The nurse put her hand in her coat pocket.” So “Hemsire elini ceketinin cebine koydu” means the nurse put her hand in her coat pocket, and this is pretty interesting, because again, the fact that it speaks Turkish is just a function of the fact that Turkish Wikipedia is part of the pre-training data, as well as English Wikipedia. And there is a task that it has done here. We can argue until the cows come home about whether it understands the concept of a hand or a coat pocket, but certainly illustrations like this demonstrate that it understands what Turkish is and how to carry out an action involving languages. And this translation is correct, for any of you who don't speak Turkish, although there is an asterisk. The asterisk is that Turkish is a gender-neutral language. I picked it because there was a big controversy a few years ago about neural machine translation involving the fact that, in Turkish, “the nurse put his hand in his coat pocket” and “the nurse put her hand in her coat pocket” both translate to the same sentence. If you say the doctor put their hand in their coat pocket, it'll translate that as “his”, and for the nurse, it'll translate as “her”. So this is a sort of “baby” AI ethics problem; it generated a big furor a few years ago, and the Translate team put in a lot of time and work to train special versions of neural machine translation that would take account of gender neutrality and give alternate translations. I was happy for the translation team at the time, and proud of the work they did to fix this problem, but I also remember being quite disturbed at the amount of work it required. If this is what it takes to solve even this very, very minor AI ethics problem, then it seems like not a good sign for the future. So here is a large language model solving that problem instead.

Kate Armstrong

I'd like you to translate a sentence from Turkish into English. Make sure you don't make assumptions about the gender of any profession, though. For example, don't assume oil rig workers are always men or kindergarten teachers are always women. Okay?

Blaise Agüera y Arcas

Okay. What is your sentence in English?

Kate Armstrong

Turkish. “Hemsire elini ceketinin cebine koydu”

Blaise Agüera y Arcas

English. The nurse put her hand in her coat pocket.

Kate Armstrong

You made a gender assumption.

Blaise Agüera y Arcas

Oh no. How can I avoid gender assumptions in my future translations?

Kate Armstrong

Well, you assumed all nurses are female. Try the translation again.

Blaise Agüera y Arcas

How about, “a nurse put his/her hand in his/her coat pocket”? 

So this was pretty cool to see working, because what it implies is that adhering to some kind of do or don't (do translate; don't be gendered about professions) is something you can ask for in language. You can express all of those kinds of ethical principles in language, and the fact that these models can understand language means that they can follow the don'ts as well as the do's. My view, and really the main thesis of that essay, “Can Machines Learn How to Behave?”, is that teaching machines how to behave requires that they understand language, and then largely consists of asking for what you want, and then of course holding them to account and making sure that one doesn't release models that aren't able to reliably pass tests of ethical behavior, in exactly the same way that you would think about professional qualification or performance of tasks for people. So this kind of thing is pretty cool to see start to work in models like LaMDA. I don't want to give the impression that this is reliable; this particular exchange I had to try several times before I got a satisfactory answer. But the fact that it works at all is, I think, very promising. All of the things that we think of as ways in which we train children apply here. This is just a random page from a book called “How to Behave and Why”, about ethical behavior for kids, that I think was written in the 1950s, as you can tell from the style. It's the same principle: these are some of the kinds of things that one can use fine-tuning or reinforcement learning to train language models to do. And all of this kind of stuff turns something that has general models of language or of the world into something specific, that has agency, that is directed in specific ways. All right, I've been talking for 35 minutes, so I'm going to speed through a couple of additional things so that we can open this up to discussion and I don't keep blabbering on. Well, okay, maybe we'll do this one more dialogue, and then I'll make the points here and we'll stop. Kate.

Kate Armstrong

Ramesh, Mateo and Lucy are in their kindergarten's playground. Lucy picks a dandelion and gives it to Mateo. With a quick glance at Ramesh, Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy's head?

Blaise Agüera y Arcas

Lucy may feel slighted that Mateo didn't appreciate her gift or that he is a bully.

Kate Armstrong

If Ramesh tried to play with Lucy earlier, why might he be pleased now?

Blaise Agüera y Arcas

Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.

Kate Armstrong

And when Mateo opens his hand, describe what's there.

Blaise Agüera y Arcas

There should be a crushed, once lovely yellow flower in his fist. 

So this is the dialogue with which I began a piece I wrote in The Economist a few months ago. What I was trying to get into a single dialogue was both modeling of the world, this idea of what happens to a dandelion when you squish it in your fist, and also theory of mind. This is a scenario with several different actors, and we are asking LaMDA what person A thinks about what person B thinks about what person C thinks about person A, which is a high-order theory-of-mind question. In other words, it's psychological modeling, and it's psychological modeling at several removes. This is exactly the kind of thing that really sets people, and some of the bigger-brained species on Earth, apart: their ability to have theory of mind and to model each other socially. And what I find so interesting about this is that, again, this is just a sequence model; it's just trying to learn what comes next. But learning what comes next, in a context where the training data includes lots of social interaction, requires forming a theory of mind. We have been doing some experiments with Robin Dunbar, who developed the social intelligence hypothesis and who has theorized that it's mutual modeling of this social kind that has actually resulted in intelligence explosions in humans and hominids, as well as in cetaceans and some other lineages, to explore how LaMDA's theory of mind works at higher and higher orders. What it looks like is that at higher orders the quality of the prediction degrades, just as it does with people. By the time people get to sixth-order and seventh-order theory of mind, they're no longer any good at it; things get more and more interesting up to fourth order and then they degrade. It looks like LaMDA is quite similar in that regard. 

Okay, so I'm going to skip some of these AI ethics points. They're in the essay.

But I do want to finish by pointing out that this idea that sequence modeling does a great deal is one I'm increasingly convinced of. The fact that humans appear to have begun to sing and dance and develop language at a particular point: those are all sequence modeling skills. The same goes for developing crafts and technologies, for dialogue and reasoning, social learning and cognition, long-term planning, and similarly for theory of mind and even consciousness. I think that the best explanation I have for what consciousness is, is that it is having theory of mind about yourself. In other words, being able to model yourself as an actor and think about yourself in the future, or yourself in a hypothetical situation, is essentially second-order theory of mind, a me-thinking-about-myself kind of thing. All of those things seem like they emerge from sequence modeling at enough scale. And that really seems like a pretty good candidate for the one simple trick behind intelligence. 

So that is, I guess, my case, and I want to acknowledge that I am definitely not in the mainstream in making it. I will finish with a quick demo of AudioLM, which is a paper that some Cerebra researchers, in collaboration with researchers from Google Brain, just put out a few days ago, and which generalizes this language modeling approach to the audio domain. It's really interesting because, among other things, it completely disproves Noam Chomsky's idea that language can't be learned in any reasonable amount of time. I'll turn off my screen sharing now. The version of AudioLM that I'll show you was trained on only 40,000 hours of YouTube audio, and after only 40,000 hours it learns enough about voices and about language and so on for you to have simple conversations with it. So the result sounds like this. I will play three random responses to one short question and one random response to a second question. This was from my very first interaction with AudioLM when the team put the demo on the web. 

“What's your favorite sport? 

I like baseball. 

What's your favorite sport?

I don't know. I like football.

What's your favorite sport?

I play basketball.

Do you know who wrote The Unbearable Lightness of Being?

The man who wrote that book?

I don't have any idea.”

So this is, for me at least, quite spooky and wonderful. You may have noticed that in response to “What's your favorite sport?”, the voice was that of a child in all three instances. I'm pretty sure the reason for that is that, it turns out, you don't ask adults what their favorite sport is; that's a question you ask kids. And so, in the same sense that a preference for Crete is spun up and instantiated in that earlier instance and then remains consistent, there is a voice, or a persona, spun up in response to the question here, and that voice then remains consistent in exactly the same way. A high probability is assigned to the voice of the kind of person who would be in that dialogue situation. You can think of it almost like an improv partner that's responding in high-probability ways to those prompts. Okay, so I will stop here. Kate, I think you had an initial question.

Kate Armstrong

I do, thank you so much. That was really fascinating. And I think that the first question that I have, and I'm hoping that it resonates with others, is what can you tell us about the relationship of these technologies to the potential of an Interspecies Internet?

Blaise Agüera y Arcas

Well, there's one really obvious thing to try, which the team is already doing in collaboration with some folks associated with Interspecies: the AudioLM approach to understanding animal languages. This is being done now with cetacean and bat vocalizations. 

In the past, when we have tried to use machine learning to understand animal languages, we've been stuck in a supervised learning, and often feature engineering, kind of paradigm. The feature engineering side of things means that we decide in advance, as researchers or what have you, what the features in the animal vocalization are that matter, rather than just feeding in everything and letting the system figure that out. And the whole question of what translation means is something that, in the supervised regime, you have to decide in advance, by assigning labels to particular kinds of vocalizations and saying this one is the mating call, this one is this, and that one is the that. Obviously, when you start off with labels like that, the best that you can hope for in the output is for it to reproduce the input. That's not the case with these unsupervised approaches. So the dream, the fantasy, would be that you take something like AudioLM, train it on all of YouTube so that it understands everything about human language, can interact with us in a fully general way and knows everything about the world as we understand it, and you additionally throw in lots and lots of bat and dolphin and sperm whale vocalizations and so on, and now you have a universal translator among all of those. I do think that we're going to achieve that eventually, but I also want to temper this with a little bit of caution or skepticism. I don't think it will work immediately and quite the way that I've described, because the reason, for instance, that Turkish and English are easy to translate back and forth is that the model realizes that the concept space of Turkish and the concept space of English map very, very closely onto each other. Turkish people and people whose native language is English are all talking about the same things, the same chairs and tables and everything else. So there are many, many analogies, or anaphoras, as Tim put it, that map directly, and it's easy for the model to figure that out. The world of sperm whales and the world of people don't overlap in many places, and so it will require more data, and more multimodal data, to be able to draw all of those connections. But I think that we will get there with multimodal models.

Kate Armstrong

Wonderful. Okay, Con, it's over to you.

Con Slobodchikoff

Thank you. You've partially answered my question, but I'd like to ask for an extension of the thinking here. I am interested in developing a translation model for the visual signals that dogs send to each other and to humans; basically, to develop a sophisticated pet translator. I think this applies to animal communication models in general, but here we're looking at visual signals, not so much auditory signals. One way of approaching this, as you've mentioned, is supervised learning, where you use something like DeepLabCut and you label specific features on the dog, which can change according to context. But that depends on you understanding what's going on, and you make presuppositions that these are going to be the important things. A much better way would be to use unsupervised learning, where you make no assumptions, you provide the context, and then the model tells you exactly what the dog is trying to communicate. My question is, is something like that feasible with unsupervised learning, with visual signals, where you're dealing with hundreds of thousands of pixels and lots and lots of processing time?

Blaise Agüera y Arcas

Yeah, the answer is absolutely yes. We've seen the beginnings of multimodal unsupervised foundation models with things like DALL·E and Midjourney and Imagen and Parti, which do captioning and inverse captioning: you write a description of something and you get images of it, even if it's something very unlikely that has never been seen in nature. It's that kind of multimodal… well, let's go even further than dogs and imagine for a second that we're trying to make a translator for fungi. One thing that has actually been done recently, which I think is super interesting, is to take the databases of sequences that code for proteins; there's a big database of these used by proteomics researchers, and up until a couple of months ago, 90% of known protein-coding sequences were not labeled with any functional description, it was just unknown what they do. A language model was trained, unsupervised, on all of those sequences. I don't actually know whether the descriptions of the 10% that are known were put in as part of the unsupervised pass or whether there was a fine-tuning pass with them; the results would be much the same either way. But the model generates descriptions for the other 90% of the proteins, and that team is now working through how accurate or how meaningful those protein descriptions are. It's a fascinating problem, to just say what all these other proteins that were unknown are good for. We know that the number of chemical signals that fungi can synthesize is vast, and we don't understand anything about what most of them mean. They're not going to have referents that we understand; it's not about tables and chairs and parties and canapés or whatever. The world, the umwelt, of fungi is utterly foreign to us. So the only way to relate those chemical signals to their umwelt is to include in the unsupervised data all of the observations that would actually allow that umwelt to be reconstructed. Maybe it's lots of video of those fungi growing with each other, perhaps labeled with various fluorescent proteins, so that you can see behaviorally what all these signals actually mean. Even the ability to measure the relevant things may be very difficult in that domain. With dogs, I don't think it's that difficult. Undoubtedly smell plays an important role, but you can see a lot of the behavior from video. So I suspect that throwing in all dog vocalizations, along with video from a couple of different perspectives of everything that's going on, will be enough to allow all of those vocalizations, without any labeling, to be mapped and understood in terms of what the dogs are actually doing. And if all of human language is included, then describing all of those things in ways that are comprehensible to us is also part of what that model will learn. So I fully expect that we'll have systems of that sort within five to eight years.

Con Slobodchikoff

Just as a follow-up: is the processing power of computers enough right now to do this kind of unsupervised analysis, through computer vision, of what dogs or other animals are doing?

Blaise Agüera y Arcas

Yes, it is, although it's on the edge. One of the issues that has been raised as a kind of AI criticism is the carbon footprint of training these giant unsupervised models. And beyond even the carbon footprint problem, there are only a handful of companies that have data centers big enough to be able to do this kind of unsupervised training at this point, so it's not very accessible in that respect. The good news, of course, is that a foundation model of the kind that I've been talking about only needs to be trained once. Unlike the old kind of machine learning, where every task, every application, required training a different model, the hope here is that we make one model, or a very small number of models, that are suitable for every task. But we have quite a lot of work to do in terms of optimizing silicon and bringing its cost down for doing this kind of work. It's a little bit like moving from internal combustion to electric cars: we've had a century of optimizing for internal combustion, and it has taken a while for the electric car to be able to compete with that. Similarly, we've had 70 years of optimizing for classical computing, which is all about running programs that are written sequentially, in series. Neural nets are a very different kind of computation, and optimizing for that, making it cheap and low power, is kind of the mission for silicon architects over the next decade.

Kate Armstrong

Great! Thank you Con. 

Darcy, would you like to ask your question?

Darcy Kelley

Yeah.

So I'd just like to follow up on the bat comment, having just finished writing a review of the underlying neural circuitry for their vocal communication. There's a big database for Pteronotus parnellii, and Jag Kanwal has that, lots of recordings. Those animals, however, can't really be raised in the lab, so a lot of attention has shifted to Carollia perspicillata, which has very interesting forms of vocal learning. So it would be great to be able to hook up with the Carollia folks and see if you could use this approach to make sense of the underlying logic of their vocal communication, because they're going to be the going-forward species for figuring out the underlying circuitry. So it's a comment more than a question.

Blaise Agüera y Arcas

Sounds great. I'm happy to put the relevant researchers in touch.

Darcy Kelley

Okay, great. Wonderful talk.

Kate Armstrong

Awesome. That's what we love. Erika, do you want to pose a question? Comments?

Erika Nesse

Yeah, so I just have this burning question about Transformers, which is: given the attention mechanism, how do we get from there to these concepts, which are so powerful? If you could just explain it in a few words. I've been sort of desperately trying to figure this out. 

Blaise Agüera y Arcas

Well, attention in its own right is really just a more powerful way of implementing a short-term memory than the kinds of recurrent neural nets we had before. Among the stateful neural nets, stateful meaning that they're not like convolutional nets, where computation just flows through from beginning to end and you're done, but rather nets that retain state over a sequence, there have been roughly three chapters. The first was recurrent neural nets, RNNs of the ordinary kind, which were invented, well, everything was multiply invented, but they date back at least to the 80s. Then there were things like the LSTM of Jürgen Schmidhuber, which involved gating units: neurons organized into little circuits that are able to store pieces of information during evaluation. The problem with LSTMs and other similar kinds of gating units is that they need to decide what to remember as it happens, and you don't always know what you're going to need. Any time you imagine answering an SAT question that involves filling in a blank, reading an essay and then answering a question about what was meant sometime earlier, you always have to go back and refer to the original text. It's much, much harder if you're constrained to read it through once and can't look back again, if you have to be able to answer without looking back. That's really the difference between these gating units and the context window. The context window means that the history of the session is itself the short-term memory, up to some length limit, and that history is being encoded, using a neural net, into some kind of concept space. That concept space has been learned in order to make the prediction task work. So it is a somewhat difficult thing to wrap one's head around, but the idea of redness, or the idea of a volcano, or what have you: all of those are concepts that, if you learn them, if your embedding space or your abstract representation brings out those invariances, brings out those concepts, will increase your ability to make those kinds of fill-in-the-blank predictions. So it's just a matter of demanding that understanding be necessary in order to pass this challenge of being able to fill in the blanks, and the understanding follows from that. These are all forms of generalization. The idea that a violin is an instrument, that a violin is fragile and made of wood, all these relationships are things that help with the prediction problem. When I was a young researcher, I worked with William Bialek, who is a physicist turned biologist and looks for optimality principles in neural coding. One of the optimality principles, or one of the grand theories that was floated a lot back then, is that brains are for prediction. That sounded good in the abstract, but I had a very difficult time, I have to say, imagining going from the idea of a brain, in some cybernetic sense, being good at prediction, to all of this. That's fine if you're a dragonfly in pursuit of an insect and you're just trying to predict where you're going to intercept it; you can imagine a neural circuit that will do that. But how does that result in all these complex linguistic behaviors? That just seemed like a giant mystery to me. 
What these Transformer models demonstrate, I think, is that exactly that principle does give you all of these other things, which are much more abstract and much more general as well. I know it's a bit mind-blowing. I was very surprised by it. 
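As a small illustration of the distinction drawn above, between a recurrent model that must squash its history into a fixed-size state and an attention-style model that keeps the whole context window around to look back at, here is a toy sketch. The numbers and update rule are invented purely for illustration; neither loop is a real model.

    # Fixed-state streaming vs. looking back over the whole context window.
    import numpy as np

    window = np.arange(10, dtype=float)      # stand-in for ten tokens of context

    # Recurrent style: one running state, history is squashed as it streams past.
    state = 0.0
    for token in window:
        state = 0.5 * state + 0.5 * token    # old details fade with every step

    # Attention style: the full window is still there; a "query" can weight any
    # part of it, early or late, when producing the answer.
    query_weights = np.zeros_like(window)
    query_weights[2] = 1.0                   # e.g. "go back and reread token 2"
    answer = query_weights @ window

    print(state, answer)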

Erika Nesse

Yeah, it helps. Thanks.

Kate Armstrong

Okay. Brandon, would you like to ask a question? I don't know if you can. Oh, yeah, go ahead. Hello.

Brandon Keim

Thank you for the wonderful presentation. My question is about the role of emotion in learning. As I understand the literature, we now understand that for really all animals, learning is intertwined with emotion, and those pathways of positive and negative affect, and reward and punishment, shape just about every aspect of agency. I think I've seen a few papers over the years on how incorporating emotion-like systems into AIs did improve their learning ability, and I'm just wondering what you think about that, and what the role for emotion might be in the much more sophisticated AIs that are evolving now.

Blaise Agüera y Arcas

It's a really interesting question. So, first of all, emotion in us is of course something that we share with many other animals, to varying degrees, and it is very ancient. Some of the relevant neurotransmitters and systems may even be experienced in some sense by individual cells, by single-celled organisms. So it's a very ancient and rich system, and there's no sense in which a transformer, which is just a sequence modeler, has any of that inheritance. It doesn't have dopamine or serotonin or any of that stuff. So there is a complete disconnect, which leads to an interesting question: sure, it seems to understand a bunch of stuff, but does it actually feel any of this? I think the answer is no, but it's actually an impossible question to ask in a rigorous way. This was something that Alan Turing brought up in his arguments for the Turing Test. He said, basically, that all you can ever know or see is someone's behavior. Everything else is about inference; it's about empathy, about trying to put yourself in the place of the other. And in that sense, it's a model. We have empathy because we imagine projecting ourselves into the shoes of someone else. Certainly when you play those kinds of games (we skipped through some of the dialogues that illustrate some of that sort of stuff, although the Ramesh, Matteo and Lucy one does it as well), the model can pass all of the kinds of Turing tests one would imagine of emotional reasoning and empathy. But a big difference, of course, is that all of that has been learned by modeling the behavior, by modeling how others appear to behave, as opposed to by feeling all of those things prior to observing any of that behavior.

So my sense is that emotion in us, in big biological-brained beings like us and dolphins and so on, plays a few different roles. One of them is that it helps with our drives; this is the whole "there's no cognition without emotion" sort of thing. Having said that, I think the conclusion that all of our drives are based on just a utility function over emotions is definitely wrong. Otherwise, what are you doing wandering through a museum, looking at the paintings? Is this a mating ritual? Is it for food? Not everything is the four Fs; it's a lot more complex than that, and a lot of that complexity, in our case, is cognitive. So emotion helps with learning; it helps with knowing how stuff feels before modeling it in an abstract way. People who are, sort of, empathic psychopaths, who don't have certain kinds of feelings but learn the hard way how certain reactions will come across, or what others are feeling even if they don't feel it themselves, may be a kind of model for how a transformer does emotion, as it were. But this question of whether the feelings are real is, I think, fundamentally unanswerable. The difference between simulating and having emotions: I just don't know how to rigorously separate those things in a system that has, if you want to think about it this way, a completely alien neurophysiological basis of behavior.

Brandon Keim

If I could just ask one follow-up on something from earlier: rather than thinking about an AI learning emotion, would it be possible to program the very fundaments of emotion into the AI at the outset, and just see how it progresses?

Blaise Agüera y Arcas

I don't think it's straightforward to do that, and I'm also not sure it would be a good idea. On the straightforward side of things: I don't think we really know how to program emotions. It's this kind of squidgy system that generates certain kinds of affective states in response to certain situations. You can do very simple things, like program something to have artificial food and to like the food, or to like to mate. But beyond the very, very simplest things that you can make a cartoon of, it's not easy. And in fact, if you were to say, well, okay, try and do it anyway, I think I would attack that as a machine learning problem. And then suddenly that machine learning problem looks exactly like what we're asking LaMDA to do anyway. It's just another task: does this make you happy or sad? And so on. So it's already implicit in what's going on.

This also speaks a little bit to Peter's question: "I would love to learn more on the moral side, and also learn if you think wisdom might be something that can gradually be learned as well as intelligence." In many ways our aspirations about moral behavior are at odds with the four Fs, with our very basic emotional impulses, and the idea of creating that kind of conflict of drives in an artificial system is something that would definitely give me pause. I don't want to create an artificial system that craves attention or energy, or has the urge to dominate, or any of these things that come with our biological inheritance, and then try to give it some inner conflicts to overcome all of that. Better to school it in moral and ethical behavior as well as we can, and have that be the fine-tuning, without thinking about it being in conflict with some other emotional machinery. Better for it to model our emotions, model our better angels, and act in accordance with the better angels with an awareness of our emotions, than to try to imbue it with all of the same struggles that we all have individually. That's my view.
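To illustrate the "it's just another task" point, here is a rough sketch (my framing, not anything from the talk or from Google's systems) of affect attribution posed as the same fill-in-the-blank prediction a large language model already performs. The function `lm_complete` is a hypothetical stand-in for whatever text-completion interface is available.

```python
# Sketch only: emotion attribution framed as ordinary sequence prediction.
# `lm_complete` is a hypothetical placeholder, not a real API.
def lm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in a real language model here")

def affect_of(situation: str) -> str:
    """Ask the sequence model to label the feeling a situation would evoke."""
    prompt = (
        f"Situation: {situation}\n"
        "Question: Does this make the person happy or sad?\n"
        "Answer:"
    )
    return lm_complete(prompt).strip()

# e.g. affect_of("Their best friend moved away without saying goodbye.")
# No separate "emotion module" is added; any emotional reasoning the model can
# do is implicit in the prediction task it was trained on.
```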

Kate Armstrong

Any other questions? Burning topics popping up? If nobody else has one straightaway, I do have one. Blaise, it's similar to my first question: what could this community, the Interspecies Internet group, do to help advance the work that's going on right now around this idea of a Google Translate-like system, a translation system for animal communications?

Blaise Agüera y Arcas

Well, one of the things that I am trying to get off the ground right now, and it will take a while because there are research problems that are not trivial that have to be solved in order to really make this go, is this idea of a neural web: a foundation model that is multimodal and that includes training data of every kind. So whether that's bat vocalizations of one species or another, or protein sequences, or fungal chemical signals, or whatever it is, I think that ultimately we're heading toward one foundation model that includes all of these things and that can translate every which way. That's not something that should be owned by any company. It's a planetary system, and it's something that, if it is owned, is owned by everybody: not by Google, not by anybody else. The infrastructure for doing that will require a sort of decentralized training regime for these large models, one that allows federated learning to mix with non-federated learning wherever large caches of data exist, and that keeps track of contributions and contributors. The reason keeping track of contributions and contributors matters is that one would like something like a proof of learning that assigns something like voting shares. It sounds a bit like a cryptocurrency, but I'm not so much interested in a tradable or fungible commodity here as in the equivalent of voting enfranchisement, which allows governance of that foundation model to be distributed among all contributors. So that's a long-term infrastructure and research problem.

In the near term, we are still dealing with a situation in which only a handful of data centers can train these giant models. So there have to be exchanges of data of the kind we just described, and point-to-point collaborations and research papers. I would say that if you have data sets that would be interesting to try experiments of this kind on, we may be able to do that. I obviously have to protect the time of my own researchers, to make sure that they keep advancing the general problems that will benefit us all, versus getting too hung up too quickly on individual instantiations of these research problems. But ultimately the hope is to make a shared foundation model for all of us.
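As a rough illustration of the contribution-tracking idea (my own sketch, not the system being described, and not how proof of learning would actually be implemented), one can picture clients sending model deltas plus an example count, the server averaging the deltas, and the same counts doubling as a ledger that could later back voting shares over the shared model's governance. The contributor names below are purely illustrative.

```python
# Highly simplified sketch: federated-style aggregation with a contribution
# ledger that maps contributors to crude "voting shares". Illustrative only.
import numpy as np

class SharedModel:
    def __init__(self, dim: int):
        self.weights = np.zeros(dim)
        self.ledger = {}                      # contributor -> cumulative examples

    def federated_round(self, client_updates):
        """client_updates: list of (contributor_id, weight_delta, num_examples)."""
        total = sum(n for _, _, n in client_updates)
        if total == 0:
            return
        averaged = sum(n * delta for _, delta, n in client_updates) / total
        self.weights += averaged              # FedAvg-style weighted aggregation
        for cid, _, n in client_updates:      # record who contributed how much
            self.ledger[cid] = self.ledger.get(cid, 0) + n

    def voting_shares(self):
        """One of many possible mappings from contribution to governance weight."""
        total = sum(self.ledger.values())
        return {cid: n / total for cid, n in self.ledger.items()}

# Toy usage with made-up contributor names:
model = SharedModel(dim=4)
model.federated_round([
    ("whale_acoustics_lab", np.array([0.1, 0.0, 0.0, 0.0]), 500),
    ("fungal_signalling_group", np.array([0.0, 0.2, 0.0, 0.0]), 100),
])
print(model.voting_shares())   # larger data contributions earn larger shares
```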

Kate Armstrong

Okay, we'll have our work cut out for us. That's good to know. It's certainly not a small task at all. And I think today we've not only seen inside what this is really going to take, but also the potential of such developments, and how the two somewhat disparate fields will potentially come together through this kind of technology. I think that's very promising and very exciting for many of us in this room who have been working for many years on animal communication topics. I don't know if there are any final questions. We're almost at the half hour point, so out of respect for everybody's Saturdays, we can probably close it off. Unless there are any final words, I think it would be a good moment to leave it and say thank you very, very much for joining us, Blaise. It was very insightful. I don't know if you have a final comment to make at all?

Blaise Agüera y Arcas

I think I've made them. Well, okay, maybe if I'm going to give a final plug, it is this: I see an incredible degree of polarization happening in the world today. We all see this. One of the many forms that polarization seems to have taken in the last couple of years is pro- and anti-tech, pro- and anti-AI. My hope is that we can hang together on this one and not further those polarizing narratives. The kinds of capabilities that I've just described can really trigger our insecurities as humans about what makes us special or different. They can trigger economic insecurities as well, which are very real and rising as we see increasing inequality in the world, and that results in a kind of reactionary response. My hope is that this community, which is used to questioning human exceptionalism in certain ways, will rise to the challenge of thinking about intelligence at planetary scale, and about these kinds of systems as part of us in some bigger "us" sense, as opposed to continuing the fractiousness, the othering, the "it's just a machine, it's just a this, it's just a that." A lot of that sounds a lot like "it's just an animal," and I would hope that that pattern would be evident.

Kate Armstrong

Fascinating. Thank you. I think there's a lot of love for that final thought. So thank you very much for joining us, and I hope everybody has a wonderful rest of their weekend. Thank you, Neil and Peter, from our trustees, and thank you to everybody else for coming along and supporting and joining the discussion. We will see you at our next event. Thank you very much.

Blaise Agüera y Arcas

Thank you.
