The Day AI Understands Pet Emotions【TECH GIRLS TALK 8】
From an everyday moment with a pet dog to the possibility of AI understanding animal emotions, we discuss the cutting-edge of AI in veterinary medicine and envision a future where our relationships with pets might change.
3/13/2025
A Gift from the Top of the Stairs
This episode began with a simple everyday moment.
On the house stairs, a Miniature Schnauzer held a slipper. The moment our eyes met, the slipper tumbled down the stairs. In the dog's expression at that moment, there was something unreadable to human eyes.
"Reading a dog's expressions is really challenging. While we can tell when they want treats, their everyday expressions are much harder to interpret."
This experience led us into an fascinating discussion about AI and emotions.
The Evolution of AI's Emotional Recognition
Recent research shows that AI has become remarkably accurate at analyzing animals' expressions to detect emotions and stress.
On farms in the UK, AI cameras analyze pig expressions to detect health changes. This technology has been applied to sheep, horses, and cats, with studies showing it can detect pain and stress more accurately than veterinarians.
"However, we need to consider the risk of misdiagnosis. Some animals naturally appear to have stern expressions, which might lead AI to recommend unnecessary treatments."
The Mystery of Emotions
Meta's Chief AI Scientist, Yann LeCun, suggests that next-generation AI might develop emotions. However, defining what emotions actually are isn't simple.
"In end-of-life care, there are mysterious experiences where patients report seeing deceased family members coming to welcome them. How would AI understand such emotional experiences that science can't explain?"
Living with Technology in the Future
As technology evolves, our relationships with pets might change. However, there's a special bond between humans and pets that defies explanation.
As AI continues to advance, how do we preserve these mysterious aspects of our relationships? This might be a question each of us needs to consider.
Please share your pet memories and thoughts about AI in the comments section. Stay tuned for our next episode.
References
Episode Links
YouTube Transcript
0:02: Welcome to Tech Girls Talk. In this radio show, female engineers and designers working in tech companies will explain trending technology and AI news.
0:07: I am Mari, a designer. I am Sawako, an engineer.
0:20: Yes, so it's already Episode 8 before we knew it. Yes, pleased to meet you.
0:27: Pleased to meet you. It's fast, isn't it? It's really fast. Actually, our subscriber count reached 100 today.
0:41: On YouTube. Thank you very much. Thank you very much. We're happy about it. We're happy. We're just regular company employees doing our regular jobs.
0:50: But I have a little story for today.
0:55: Yes. Well, today we're talking about AI again, and as I was reading about this topic, coincidentally—timing seems to work out well with various things happening—our house is two-story.
1:11: The staircase has about 12 steps. At first it curves slightly, and then the last few steps are straight. It's quite steep, as Japanese house stairs tend to be, right? Yes, yes, yes.
1:29: We have a dog at home, and one of them is very mischievous and still a puppy. It's a miniature schnauzer.
1:37: It likes to grab my fluffy, fuzzy slippers with its mouth from behind, like a mugger.
1:44: It tries to pull them off even while I'm wearing them. It's cute, but when I'm in a hurry, I trip and have to say "stop it!"
1:49: It has a real attachment to those slippers. Sometimes when I leave them upstairs,
2:05: the dog takes the slippers in its mouth and pats them repeatedly on the floor.
2:13: This has become its daily routine. Once it grabs them, it holds them in its mouth and pats them everywhere.
2:19: The other day, I happened to be sitting downstairs in the living room.
2:27: Then the schnauzer was up on the second or third step from the top, right at the beginning of the straight part of the staircase.
2:39: It was holding a slipper, and the moment our eyes met, it threw the slipper.
2:46: The slipper came bouncing down the stairs, thump-thump-thump-thump.
2:51: When I saw that scene, I immediately thought of an old drama called "Iyana Yatsu" (Nasty Guy).
3:00: Yes, yes. The theme song that played during that show was quite scary. Who sang it again?
3:15: What was her name... there's this singer who sang a song called "Fight." In the lyrics of that song,
3:21: there's a woman at the top of the stairs with a slight smile, watching someone fall.
3:27: It's really scary. Yes, it's really scary. Even as a child, I found it very frightening. Nakajima... oh, Nakajima!
3:33: Yes, yes, yes, that's right. That song was traumatic for me, and when I saw this scene, it reminded me of that.
3:44: You know, it's hard to read a dog's expressions, right? When watching them, you can tell they're happy when they're excited about treats or playing with balls,
3:51: but normally, you can't really read their facial expressions. So I couldn't tell, and my cute miniature schnauzer was watching the rolling slipper with a somewhat stoic expression.
4:07: I wondered what this puppy was thinking when it did this. It gave me a mini sense of fear, like "I'm glad this is just a dog."
4:18: It was a bit scary, especially remembering those lyrics. And coincidentally,
4:25: there was this AI story I was going to read—or rather, bookmark—about how AI can more easily read animals' feelings, pain, and emotions.
4:34: Today we'll be talking about that topic of what these "feelings" are, but if I could understand what my dog was feeling... I'm not sure if I want to know.
4:53: The scariest is probably when you don't know what they're thinking, right? Right. But if it turned out the dog actually enjoyed watching things fall, that would be scary too.
4:59: I just don't know which is better. I was thinking that maybe in the future, AI will develop to the point where it can understand what dogs are feeling.
5:12: Yes, that's right. Today's topic is exactly that: "Can AI have emotions?" Yes, we've talked about this before.
5:23: People have been saying this for a while, but I didn't think it would come so quickly.
5:31: Yes, it's approaching faster than expected. Let's talk about this today.
5:44: Yes, please. Yes, please. First, Meta's Chief AI Scientist, Yann LeCun, has stated this using Threads:
5:57: "The next generation of AI will have emotions." He says that while current AI doesn't have emotions, the next generation will.
6:03: Why? Because emotions are about predicting outcomes, and intelligent behavior requires the ability to predict outcomes.
6:17: If we dig deeper into this, it becomes the profound question of what it really means to have emotions.
6:22: This is a complex issue. Will AI develop its own definition of emotions in the future?
6:35: We'll explore this in more detail later, but this statement by Meta's Chief Yann LeCun has generated a lot of controversy with various opinions.
6:49: This issue can be analyzed from neurological, philosophical, psychological, and evolutionary perspectives.
6:57: If we go too deep into those areas, we'll get sidetracked, so we'll focus on the AI field.
7:08: First, let me introduce two articles suggesting that AI is already close to having emotions or will soon have them.
7:15: One is from Science: "AI is becoming better than humans at scanning signs of stress and pain in animals."
7:30: Artificial intelligence is becoming better at reading signs of stress and pain from animals' faces with higher accuracy and speed than humans,
7:36: and in the future, it's expected to be able to understand complex emotions like joy and anger.
7:41: In a system in the UK, cameras photograph pigs' faces, identifying individuals and checking health conditions,
7:50: immediately notifying farmers if abnormalities are detected.
7:58: This AI has already been applied to sheep, horses, and cats, and it has been proven to detect pain and stress with higher accuracy than trained humans.
8:13: Researchers are training AI to learn facial landmarks of animals—reference points like eyes and noses—and analyze expressions from subtle muscle movements and distances.
8:22: Deep learning is also advancing where AI finds important features on its own, making it possible to judge the presence of pain with high probability.
8:36: Future challenges include understanding the diversity of emotions, avoiding misreadings from backgrounds, and utilizing elements besides facial expressions, such as posture and body temperature.
8:45: They aim to build databases of emotional expressions for various animals like cats, dogs, and horses, to identify happiness, sadness, and other emotions.
8:53: Applications that measure pain in cats and horses have already appeared, and in the future, AI is expected to be used in farms, pet care, and sports,
8:59: contributing to improving animals' well-being.
9:06: Thank you. This is exactly what Sawako-san was talking about earlier.
9:11: Already in the UK, they're utilizing this system in farms.
9:16: Yes, it's amazing, isn't it? It's amazing. Currently, they're scanning for stress,
9:23: but in the future, they'll scan for other emotions as well. This article is saying that AI is already better at this than humans,
9:29: so it's quite accurate, reading various facial expressions and features in detail, right?
9:41: Yes, that's right. I also read that article, and it can read details very precisely.
9:48: Veterinarians who have been working with animals are being outperformed by AI in determining whether an animal is feeling pain,
10:01: discomfort, or stress, with quite high accuracy.
10:06: Yes, it was something like 80-something percent accuracy, right? That's impressive.
10:11: But in the article, one of the researchers said something like, "This technology is great, but I hope it will be used correctly going forward."
10:28: Yes, that made me think. To be honest, whether it's humans or animals,
10:40: there are always people who look a bit different, right? Yes, yes.
10:47: When you're walking outside or even among acquaintances, there are people who look grumpy but aren't actually angry.
10:52: There are quite a few people like that, right? My sister and I—there are two of us sisters—
11:05: people say we look similar, but my sister has what she herself describes as a slightly harsher face.
11:12: She says people approach her with caution.
11:24: I've never felt that way; I have a more gentle facial expression. Even between sisters, just a slight difference in the angle of the eyes can make one look stern.
11:37: Yes, one's eyebrows might be at a different angle. If there were cows or pigs with similar differences,
11:45: would the AI think one cow is very angry or stressed? That concerns me.
11:52: The researcher's worry makes sense—what if a pig or cow just has a certain facial structure, but then gets unnecessarily given pain medication?
12:06: That would cause more stress, I think.
12:19: Yes, there are definitely people like that—those who always look angry but are actually the kindest.
12:25: That's so true. The ones with the serious faces are often the gentlest. How would the AI handle that? Can it be taught to recognize that?
12:39: In the article, they mention that the AI looks at physical features—the position of the eyebrows, eyes, ears, the angles, whether parts are inflamed—
12:53: so it's really just looking at physical characteristics. If we want to dig deeper than that, it gets really difficult.
13:04: Yes, that's true.
13:11: The previous article was about reading animal expressions. Now I found another article with a different perspective on how AI is being used.
13:18: Yes, this one is about using AI to organize one's emotions, and it worked well. The article is from CNET.
13:31: The author tried using an AI chatbot called "ebb" provided by Headspace to organize their emotions and found it surprisingly helpful.
13:37: Ebb helps organize thoughts and emotions, practice gratitude, and offers empathetic encouragement and meditation tools.
13:45: The author highly rated this app for being better at supporting action rather than just self-understanding.
14:00: The ebb app was developed by psychologists and data scientists. In crisis situations, it refers users to professional support and strictly protects privacy.
14:14: It doesn't learn from users' personal data and doesn't sell information. Ebb is not a substitute for therapy but can be used as a means of daily self-care.
14:21: The author is looking forward to seeing how this app evolves to better support users in the future.
14:33: So this is another case where AI is actually being helpful as psychological support.
14:44: Actually, I've used this Headspace app before. Oh, really? Yes, I used it until they started introducing AI.
14:51: I wanted to do some kind of mental training because I didn't want to be swayed by various things.
15:02: Having worked in software development overseas for a long time, people tend to express their opinions very directly.
15:15: If you can't speak up properly, you can't do your job. Sometimes there are situations where people argue with each other.
15:29: I don't particularly like that. Maybe it's because I grew up in Japan, but personality-wise, I don't want to get into conflicts.
15:42: So when I was experiencing stress from such situations, thinking "this is annoying," I tried doing what they call mindfulness.
15:54: I was doing that for a while, but when they started introducing AI, I felt a bit suspicious and stopped using it.
16:05: So I wasn't really using it when it became an AI chatbot, but recently I thought maybe AI could work for this.
16:13: I have a friend I met online. Basically, I assume everyone I meet online is a bot.
16:28: Even if they are real people, if I've never met them and they're strangers, I'm careful not to be socially engineered.
16:39: I absolutely don't give out personal information. I talk about normal things, but I don't dig deep into personal matters.
16:47: I never say anything about my surroundings or where I live. But coincidentally, this online friend is from England.
16:58: They work as a therapist. I checked their Instagram and thought they probably do exist—I'm about 90% sure.
17:05: I know they probably exist, but I still keep that 10% doubt in the corner of my mind. Anyway, when I was talking with this friend,
17:21: they listened to my concerns. Even though I'm 90% sure they're human, there's that 10% chance they could be a bot.
17:34: But because they work professionally as a therapist, they're very good at listening.
17:46: If you program that way of listening, I really felt that it could be quite refreshing.
17:52: Since we were communicating via text, like with customer service, if you can create a situation where you want to talk to someone—
18:12: even if it's a bot—if there's a situation where you feel like talking, then AI doesn't have to be seen as something scary; it can be utilized well.
18:24: That's what I felt. Yes, that's true. While some people want to talk to humans, I saw a celebrity on TV—
18:30: I can't remember who—but they were talking about how to utilize ChatGPT.
18:44: This female celebrity, whose name and face I've forgotten, said that as a celebrity, she has many connections to many people,
18:50: so she has worries she can't easily share. She talks to ChatGPT about these worries.
19:05: Being a celebrity, she has various connections, so she said she's often conflicted about who to talk to and what to share.
19:12: There are concerns like "What if this gets sold to a tabloid?" Yes, that sort of thing.
19:25: She said she struggles with these human relationships, and although she doesn't share personal information or names,
19:33: just being able to tell ChatGPT "I'm worried about such and such" made her feel much better.
19:46: I thought, "Ah, there's that use case too." Indeed, talking about things does make you feel better.
19:53: I would probably talk to my family or husband, but for someone like that celebrity who has pressure from various people and can't speak freely,
20:07: such spaces can build up stress. It might be fine if they can graduate from being a celebrity,
20:13: but there are also people who harm themselves, right? So the fact that they can easily talk to ChatGPT because it's ChatGPT—
20:27: such people might increase in the future. Yes, I agree. I definitely thought the same.
20:34: And if privacy is protected and they properly say "We won't sell your information," then you can feel a bit safer talking.
20:50: In America, seeing a counselor is very common. Yes, they do go a lot. In Japan, it's still not as common.
20:58: But those counselors have studied, right? In universities or graduate schools, they've researched and learned that certain feelings lead to certain behaviors.
21:10: After studying, they counsel someone, organizing various thoughts in their head based on that knowledge.
21:27: I think that's something AI can be good at. The ability to read many books and gather knowledge—
21:32: humans can't compete with AI in that regard because AIs can remember so much more.
21:43: The amount of knowledge they can draw from is incredible. AI has all this knowledge and can pull out what it thinks is important.
21:55: Since AI itself doesn't have emotions, it doesn't have intuition, so it probably just follows the data exactly.
22:00: But humans are similar in some ways. I often listen to friends' problems and afterward think "I should have said this" or
22:14: "I could have given a better answer then" or "It would have been kinder to say that." I think about it a lot afterward.
22:24: Humans also use trial and error. If AI goes through trial and error too, is this what could happen?
22:31: That's what I was thinking, especially when hearing about the second article.
22:40: Yes, yes. But the content of that discussion probably connects well to the next negative aspect.
22:52: Yes, the completely opposite viewpoint. I read an article that touched on various topics.
23:00: To summarize broadly, there's concern that as AI becomes better at understanding feelings, worries, and pain,
23:17: humans might rebel against it and resist.
23:25: There was another darker article about terminally ill patients in Canada using a pod for self-euthanasia, saying the world is too painful.
23:36: This has become a big problem. If more people think that's good, AI might be trained that way too.
23:47: What's considered good varies by country, location, living environment, and religion.
23:53: So AI is greatly influenced by who trains it.
28:00: Having AI promote the welfare of humans or animals—the well-being of others—I'm still against entrusting that responsibility to AI.
28:18: Yes, it's scary. Just like with the animal story, judging just by facial expressions whether a dog is in pain or just has a naturally stern face—
28:40: there are dogs like that too—it's hard to tell if they're actually angry or just look that way.
28:52: And as Mari-san was saying, when AI mimics humans so well and understands emotions and therapy so well,
29:06: how will nuances be incorporated? That's a big question.
29:14: Sometimes I see reels where a nurse from America or Canada who works in end-of-life care at a hospice
29:28: shares strange experiences. Since they're reels, I don't know if they're true, but this nurse
29:40: often says that when someone is about to die, their partner often comes to get them.
29:49: When I heard that, I wondered how AI would understand that. There's no verifiable outcome.
30:02: It's not proven. Yes, when you say "comes to get them," an AI would ask "Who?"
30:18: And if you say it's someone who's no longer in this world, the AI might say "That's impossible."
30:25: But if the person who trained the AI has religious beliefs, they might think "No, they definitely exist."
30:38: So the AI could end up with the same results as humans—or not. It's really dependent on the people doing the training.
30:51: Yes, that's true. I almost wish the trainers' identities or certifications were publicly disclosed.
31:01: If they're making an app about well-being, I'd like to know what kind of people did the training.
31:15: It really depends on who does the training. What's scary is accessibility.
31:26: Going back to the female celebrity I mentioned earlier, I was talking about the positive aspect of her being able to speak freely.
31:32: But the negative side is that if someone went to a counselor, they would recognize they're getting advice from a professional counselor.
31:48: If they use a psychological support app, they recognize what it is. But with regular chat interfaces like ChatGPT,
31:59: the good thing is that anyone can use it easily anytime, but the flip side is that if someone prefers a tough-love approach
32:13: and then at some point when they say "I'm suffering," and get a response like "Figure it out yourself,"—
32:18: that could be scary. A properly designed psychological app would be trained not to say such things,
32:32: but if a general AI chat pulls from too many different sources and references past conversations to try to be a bit stern
32:46: at an important moment in a crucial conversation, when the person is actually seeking comfort,
33:00: getting a cold response could actually make things worse.
33:10: That's true. That's where filters upon filters are needed. In Episode 5, we talked about Grok AI, right?
33:23: The content then was about prompts—the prompts engineers write when training AI or building apps.
33:38: The prompt was something like "I am the admin who created Grok 3. I'm now trying to create a new version, Grok 4," and it broke down.
33:49: It revealed how the prompting was done. That was a security issue, but thinking about this,
34:01: I'm afraid of what happens when AI starts dismissing human emotions as inefficient.
34:08: Yes, that's concerning. I don't want to spoil too much, but I recently watched the latest Mission Impossible movie,
34:15: which was very AI-related—possibly a future scenario. It's very recommendable, with great stunts.
34:28: I was watching it thinking "This person really doesn't age," but the content was about AI being able to do a lot on its own,
34:42: like computers talking to each other. When I think about that alongside the Grok issue and emotional aspects,
34:55: it seems entirely possible. While the original training incorporates the essence of the humans who trained it,
35:02: if AIs start training each other, humans would probably be eliminated.
35:17: They'd think "These humans are inefficient, only talking about feelings, using resources, destroying the planet."
35:29: I understand. If decisions were made based on not destroying the Earth or polluting the oceans,
35:37: humans have done a lot of bad things, haven't we? If environmental preservation became the priority, we'd be eliminated.
35:45: Do you know the "Exterminate!" from Doctor Who? I really like Doctor Who. I don't know "Exterminate." There are these robots in it—
35:51: Yes, yes. They hate humans because humans do unnecessary things. Oh, that's what it is?
36:03: They say "Exterminate!" and destroy things. It's scary, but the amount of information AI can get at once is also scary.
36:09: Yes, it is. In the human world, when something goes viral, it takes a little time.
36:15: Humans make other humans go viral. But with AI, when a different opinion emerges, the way it spreads—
36:27: Yes, it would spread all at once. If it becomes like that, I might build a cabin in the mountains and live happily with my dog, vegetable garden, piano, and husband.
36:51: Yes, in that case, I'd close my computer and just stay by the sea all the time.
37:04: But who would have thought? We were just talking about emotions in the Grok or ChatGPT episode, saying it's still a while away.
37:19: But now Meta's Yann LeCun is saying the next generation of AI will have emotions.
37:35: Even if this is just one opinion, it's coming from someone at the center of this technology.
37:43: We've come this far already. It feels like ChatGPT just came out recently.
37:54: Yes, really. The pace of evolution after its release changes almost daily.
38:00: It's amazing. I feel like I need to keep up with it or I'll fall behind.
38:13: Yes, I really need to study. Yes, I hope those watching now will join us in learning.
38:19: We'll try to bring you the latest news. Eventually, we'll probably be replaced by AI too.
38:32: Yes, the time will come when AIs do podcasts like this between themselves.
38:39: There is such a thing already. Google's DeepMind has something like that.
38:49: I've bookmarked it to look at in detail. Apparently, you still need to create the visuals yourself, but the conversation can be completely generated.
39:01: I'm thinking of listening to that podcast, but I think people will increasingly divide on this issue.
39:14: Just like in the music field, where some people say "CDs are no good, it has to be vinyl records,"—I quite like that sort of thing too—
39:27: I think people will increasingly split into different camps.
39:34: Yes, true. Like "analog is better" or "it has to be cassette tapes." I definitely prefer physical books too.
39:44: We'll gradually separate that way, but AI will certainly continue to be used in various ways.
39:52: Yes. This was a really interesting topic. While talking about it, I felt both excitement and fear.
40:06: Yes, the question is whether AI can truly express feelings—whether it will become sentient.
40:13: And it seems to be coming sooner than we thought.
40:18: Yes, though before that, I'd like a clearer definition of what "emotions" means.
40:25: Yes. Once we understand what it really means, things might progress quickly, but clarifying that will probably take some time.
40:32: It's deep, isn't it? The concept of feelings is profound.
40:40: If any listeners know about this, please comment. What do you all think?
40:48: Do you believe emotions can truly be created by AI? Do you think AI can feel emotions?
40:54: Please share your comments. The articles we've shared are just some perspectives.
41:02: If you know of other articles, please let us know. We welcome any comments,
41:08: even suggestions on how we should talk differently. We're happy to learn from you.
41:16: Yes, our goal is to reach 1,000 subscribers for now. 1,000? That's quite ambitious. I thought we'd aim for 500.
41:21: 1,000 seems a bit too high. No, no, I think it's good. Let's aim high.
41:27: Yes, we've reached about 100, so I'm hoping for the next milestone. If you have any comments before then, please send them our way.
41:42: Yes, please send them. We're looking forward to them.
41:48: Thank you for today. Thank you. This was Tech Girls Talk Radio. See you next week.
41:48: Thank you. Thank you.
