TECH GIRLS TALK podcast cover

1. From Cats & Babies to a New AI Chat Service

In this episode of TECH GIRLS TALK #4, engineer Sawako reflected on a challenging week spent taking care of her sister's new baby and pet cats, which naturally led into the theme of "Cats and AI," inspired by the name "Le Chat."

Le Chat is a recently launched high-speed AI chat service from Mistral AI, whose name is the French word for "cat." The conversation shifted from casual chat to cutting-edge tech developments.

2. Le Chat and Cerebras Technology: Why Is It Called the "Fastest"?

Le Chat's biggest feature is its ability to provide almost instantaneous responses by running inference on high-speed hardware from Cerebras. While conventional LLM (Large Language Model) services like ChatGPT can take several seconds to tens of seconds to finish a response, Le Chat can generate over 1,000 words per second, which is astonishing.

  • Key Points of the Cerebras Technology
    • Optimizes the "next word prediction" step of AI inference, executing large volumes of calculations quickly
    • Reduces dependence on conventional GPUs, enabling high-speed operation at relatively low cost
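To put the speed claims in perspective, here is a rough back-of-the-envelope comparison. The numbers are purely illustrative, taken from the figures quoted later in the transcript (~1,100 words per second for Le Chat's Flash Answers, and the "10 times faster" comparison to other services):

```python
# Illustrative latency comparison based on the throughput figures
# quoted in the episode. These are rough numbers, not benchmarks.
LE_CHAT_WPS = 1100   # words per second (figure quoted in the article)
TYPICAL_WPS = 110    # roughly 10x slower, per the "10 times faster" claim

def response_time(words: int, wps: float) -> float:
    """Seconds of pure generation time for an answer of the given length."""
    return words / wps

answer_length = 300  # a medium-length answer, in words
print(f"Le Chat:  {response_time(answer_length, LE_CHAT_WPS):.2f} s")
print(f"Typical:  {response_time(answer_length, TYPICAL_WPS):.2f} s")
# A 300-word answer finishes in well under a second instead of several.
```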

Engineer Sawako remarked, "I thought even ChatGPT was fast, but Le Chat is on a different level," impressed by its response speed.

3. Business Impact: What Changes with High-Speed Responses?

Improved user experience and operational efficiency are expected. The following points seem to be attracting particular attention:

  1. Faster Inquiry Responses

    • Real-time high-speed processing of customer support and Q&A
    • Cost reduction as employees can focus on other tasks
  2. Creative Support

    • High-speed generation of long draft texts → accelerates the brainstorming and planning stages
    • Humans handle the finishing touches and review, increasing overall productivity
  3. Adoption by Small and Medium Enterprises

    • The design doesn't depend on traditional large-scale GPU clusters → lower barriers to adoption
    • AI utilization may no longer be the privilege of only certain large companies

4. What About Human-like Qualities?

In this episode, Mari and Sawako chatted about uniquely human "emotional and mood fluctuations," like waking up and just not feeling well. While AIs like Le Chat excel at learning vast amounts of binary logic and quickly deriving optimal solutions, they struggle with moods and emotional judgment. That's why, for businesses, the key seems to be combining AI's accuracy and speed with uniquely human flexibility and creativity.

In the next TECH GIRLS TALK, we plan to delve deeper into the newly emerged debate about "AI with consciousness." Don't miss it!


Reference Links

Episode 4 (Le Chat Episode)

Episode 3 (Meta Episode)


YouTube Transcript

0:02: Welcome to Tech Girls Talk. In this radio show, female engineers and designers working at tech companies will explain trendy technology and AI news.

0:07: I'm Mari, the designer. I'm Sawako, the engineer.

0:12: So here we are for this week's episode. Thank you for joining us. Thank you for having us.

0:18: It's already Episode 4. Yes, time flies. It really does.

0:24: Time goes by so quickly. How have you been this week? This week has been quite busy.

0:31: Busy? Yes, busy. My sister recently had a baby. Oh, congratulations! Thank you. It's been quite hectic because, you know, babies create a lot of commotion.

0:41: And my sister has a cat. Oh! That cat is super cute. I have cat allergies, but since it's a long-coated cat with longer fur, I'm actually okay around it.

0:49: She has two cats, and one of them, the first resident cat, is extremely shy. When I occasionally visit, it hides under the bed and never comes out.

0:56: I almost forget it's there. It's so cute and such a handsome boy, but it's shy and doesn't come out.

1:04: The other cat is female and super social, almost like a dog. She comes to have her belly rubbed and is really adorable.

1:11: They have completely opposite personalities. And now that the baby is born, they're trying to find a balance.

1:18: When it was just pets and adults living together, now with a baby, all the adults' attention goes to the baby.

1:25: There's definitely some jealousy. Yes, that's right. So while protecting the baby, we're also thinking about how to keep the cats happy.

1:31: My sister and I considered various solutions like restricting bedroom access and other measures.

1:36: It's really tough, but eventually, we settled on dedicating one room entirely to the cats.

1:43: It's filled with toys, about five cat houses, a jungle gym, food, and everything they need.

1:49: Things have calmed down now, but until the child grows a bit bigger, they'll need to stay in that room.

1:56: I almost said "put up with it," but it's hardly suffering—it's actually quite luxurious!

2:02: They have a huge space, so I think they're fine. Eventually, when the baby grows up, they'll probably be allowed to roam more freely.

2:08: We spent last week figuring all this out with my sister. When I saw today's topic, I thought "Le Chat" is French for "cat," which reminded me of this cat story.

2:13: I remembered how challenging it was while coming to record this podcast today.

2:18: That's great! A perfect lead-in. Yes, it's like I worked hard last week just for this perfect introduction.

2:23: It's a perfect segue into today's topic. Yes, today's topic is about Mistral's Le Chat.

2:30: "Le Chat"—I think it's a wordplay on "chat" as in conversation. It's a nice name, isn't it? Cute!

2:36: It's a completely different name compared to other AI chat services so far.

2:42: In katakana, I've seen it written a couple of different ways. Well, we'd need to ask a French person to know for sure.

2:48: I think either is fine. If you want to sound fancy, you can say it with a French accent like "Le Chat" with a nasal sound.

2:54: Yes, exactly. I'll pronounce it as "Le Chat" today, but if my pronunciation bothers you, please be forgiving.

3:00: If you know the correct pronunciation, feel free to leave a voice comment and let us know.

3:06: So today's topic is about Le Chat. Yes, it's about this newly released AI chat service.

3:12: They keep coming out, don't they? They really do. Every week we have a new topic, even though we do this podcast weekly.

3:18: There's always new AI chat content to talk about. It's really amazing. Yes, really amazing.

3:24: So today, we'll be discussing this. Yes, thank you. Thank you.

3:30: First, I'd like to introduce an article. Yes.

3:36: Cerebras technology will be used in Mistral's AI chat service Le Chat. This adds a new feature called "Flash Answers" that can respond to questions almost instantly.

3:42: Normally, AI chats can take tens of seconds to formulate responses, but Le Chat can process over 1,100 words per second.

3:49: That's about 10 times faster than other well-known AI services, making it currently the world's fastest AI assistant.

3:57: What enables this high speed is Cerebras' special technology. It runs AI differently than conventional computer systems, allowing for very fast processing.

4:03: Particularly when thinking about long text, it predicts the next words and pre-calculates parts, further increasing speed.

4:10: In Le Chat, when using Cerebras technology, a "Flash Answer" mark is displayed, indicating a much faster response.

4:18: They plan to expand this technology further, with plans to extend support to other AI models in 2025.

4:23: Yes, thank you. Thank you. This is from Le Chat's official website.

4:30: So, about this trending Le Chat, have you tried it? Yes, I have. It was really fast, incredibly fast.

4:36: I thought ChatGPT was impressive at first, but this new one feels like they've really invested in it.

4:42: It handles both Japanese and English really well.

4:47: [Music]

4:59: Yes, you can really feel the difference. It's surprising. Yes. When ChatGPT first came out, it was shocking how quickly it responded.

5:05: But Le Chat is even more surprising. Yes, it's extremely fast. The fastest.

5:10: It's really amazing. So today, we want to dive deeper into this Le Chat.

5:18: We'll ask our professional engineer, Sawako-san, about how it can be so fast and what makes it different.

5:24: There were a lot of technical terms, and I was wondering how to simplify this. For those interested in the technical details about processors, there's an article on ainet.medium.com titled "How Cerebras Made Inference 3 Times Faster: The Innovation Behind Speed."

5:30: If you're not interested in that, I'll explain it simply.

5:36: But first, I need to cover some computer basics. When I first learned programming, I was really curious about how computers work.

5:42: Essentially, old computers operate on just on/off switches. That's all they do.

5:48: You know those scenes in The Matrix with all the green 01101 code running in the background? That represents what computers understand—just those on/off symbols.

5:54: To simplify greatly (there's a lot more happening underneath), computers understand this binary 0101 language.

6:00: That's why older games were mostly turn-based RPGs. You know what turn-based games are like?

6:06: "You're going on a quest. Will you go through the forest or take the road?" Wait, that sounds like a quiz. "I'll go through the forest." "Ok, will you cross the pond?"

6:12: It's that kind of experience where you choose between 2-3 options each time.

6:18: This is what you're always made to create when you first learn programming. Turn-based games where in your head you might imagine battling monsters, but really you're just making choices: fight or run away?

6:24: Giving these kinds of instructions to the computer is what programming is.

6:30: AI works the same way because it's still a computer. It starts with this binary 0101 language.

6:36: How much of this binary data can be stored, and how many bits can be processed at once, is the job of processors—chips, or in Cerebras' case, entire wafers.

6:42: The bigger they are, the more 01s they can contain, and the faster they can swap them around.

6:48: That's why Cerebras 3.0 is said to be so fast—it can handle more binary data.

6:54: Building on this, AI uses something called neural networks, which sounds fancy.

7:00: You've probably heard it in sci-fi movies where AI takes over the world. They always mention neural networks.

7:06: But it's basically like those turn-based games. It starts at one point and branches out like a tree, processing 0101 according to rules.

7:12: So when you tell a chatbot "This is a cat," during training, the neural network needs to understand what a cat is.

7:18: Humans immediately think of fur, whiskers, eyes, etc. But the AI has to work through it step by step.

7:24: When asked "What is a cat?", it first identifies the word "cat," then determines it's an animal, then categorizes what type of animal, and so on.

7:30: This is a very simplified explanation—computers actually work at the pixel level, analyzing colors and many other aspects.

7:36: But simply put, the AI analyzes patterns: "This cat-like entity belongs to the animal branch, and within animals, it matches these characteristics..."

7:42: Eventually, it concludes, "Yes, this is a cat." That's how it provides answers.

7:48: It's a very simplified explanation, and technically there's much more detail, but that's the general idea.

7:54: Was that understandable? I tried to break it down a lot, though even as I was speaking, I wondered if it was clear enough.

8:00: No, it was very clear. Good, I'm glad. So that's how AI stores information it's been taught in its memory.

8:06: It compares new inputs against that memory while continuously adjusting its understanding, saving these 01 patterns.

8:12: The more humans teach it, the more knowledge it accumulates.

8:18: If asked about cats, it might respond with various cat breeds, coat types, eye colors, etc.

8:24: What we call "training" is humans teaching the AI these details, and this process extends the branches of knowledge.

8:30: So it's like a neural network—like blood vessels or nerves spreading out.

8:36: And Le Chat can do this at incredible speed because it uses these enormous processors.

8:42: That's why it's getting so much attention now. I see. So using larger processors means that even with a tree that has many branches, the processing speed is much faster?

8:48: Yes, exactly. It's like with computers—they have RAM, right? Whether it's 8GB or 16GB, the speed difference is significant.

8:54: In the past, Chrome was extremely heavy when using lower RAM—I think it was 4GB or some really low amount.

9:00: When using computers with low RAM, they would slow down and freeze when storing too many photos.

9:06: That's similar to what happens here—with larger RAM like 16GB, the capacity increases, allowing for faster processing.

9:12: It's the same principle. I see, thank you. That was very clear.

9:18: That explains how computers work and the basic things you should know to understand AI.

9:24: Yes, it's good to know. Oh, and as an aside—not really an aside, but related—we've talked about DeepSeek in previous episodes.

9:30: It feels like DeepSeek was a big topic just two weeks ago, but now it's been overshadowed.

9:36: It's a bit sad, but both technologies are amazing. The difference is in how much money was invested.

9:42: Of course Le Chat is faster, which makes sense given the investment. But I wanted to compare the differences using an analogy.

9:48: After DeepSeek came out, remember they couldn't buy expensive NVIDIA chips because America restricted access.

9:54: So they used older technology to create a fast, cost-effective chat service and made it open source.

10:00: We discussed this in our episode before last, so you can listen to that for more details.

10:06: But comparing DeepSeek, Gemini, ChatGPT (I'll put Gemini and ChatGPT in the same group), and Le Chat—I thought about comparing them to Japanese stores.

10:12: Le Chat feels like a convenience store in terms of speed. ChatGPT and Gemini are more like large department stores.

10:18: Sorry, I'm from Kansai so I think of Matsuzakaya. We're both from Kansai.

10:24: And DeepSeek is like a Daiso (100-yen shop) type of store. I see.

10:30: Let me explain the Daiso comparison. Daiso has everything, it's affordable, and gets things done quickly.

10:36: Looking at these three categories, when I saw Le Chat, I thought it resembled Daiso.

10:42: For example, if a user says "I need to make dinner tonight and I also need soap," you could get everything from one or two large Daiso stores.

10:48: You can get everything from cute items to practical ones. The stores are quite large, so it takes some time.

10:54: Yes, have you been to those big Daiso stores? Yes, I love them. You can spend hours in there.

11:00: That's how spacious they are. It's a common experience if you've lived abroad—stocking up on Daiso items.

11:06: Yes, and the quality is good too. Very good. Some items are expensive though—occasionally there are 5,000 yen or 1,000 yen items, so be careful.

11:12: Yes, that's true. Standard Products is also a Daiso-type store, though we're not sponsored by them.

11:18: They're stylish but also have everyday items and dinner ingredients—you can get a variety of things.

11:24: DeepSeek reminds me of Daiso—like having several Daiso-type stores next to each other.

11:30: Gemini and ChatGPT are like major department stores with supermarkets, clothing sections, and more.

11:36: They're established presences—they've been around for a while now, like veterans.

11:42: They have everything, but in terms of speed, you need to visit different sections of the store to get everything.

11:48: But Le Chat is exactly like a convenience store. Convenience stores are amazing.

11:54: In a small area with parking all around, they can accommodate many people. You can immediately see where everything is when you enter.

12:00: I used to work at a convenience store. Yes, products are arranged so the front faces are visible.

12:06: Cup noodles are lined up perfectly. Yes, they're beautifully arranged.

12:12: It's efficient—you can get everything you need in just half a lap around the store. Yes.

12:18: Plus, you can buy tickets, send mail, pay bills—you can do everything you need to do in just 10 minutes.

12:24: That's what makes convenience stores amazing, I think. Yes, indeed. You can buy socks, dinner...

12:30: They even have Loft sections now. Yes, they have everything from Korean cosmetics to Loft products to MUJI items.

12:36: You can even buy cosmetics. And Godiva sweets too. I end up buying so many things at convenience stores lately.

12:42: Convenience stores are amazing. I see, so Le Chat has that convenience store feel. Yes, that speed and convenience.

12:48: It's like getting what you want right away. Yes, it's just like that instantaneous feeling.

12:54: Thank you. Yes, it was a strange analogy, but I hope it was understandable.

13:00: It makes sense. I see. Yes, that's the level of speed we're talking about.

13:06: So, besides being fast, are there any other technical aspects where Le Chat surpasses ChatGPT, Gemini, or DeepSeek?

13:12: Well, speed is the most important factor. Yes, that seems to be heavily emphasized.

13:18: They didn't mention how much they invested, but I'm sure it cost a lot of money.

13:24: But yes, it's not just speed but also processing capacity. The capacity is larger.

13:30: For example, a few months ago, ChatGPT 4.0 was said to be able to read books of 50 pages or so.

13:36: Le Chat probably exceeds that capacity. So the information limit is likely larger than ChatGPT or Gemini?

13:42: Yes, I think its processing capacity is larger. Sometimes when you paste text, you get a message saying it can't process it.

13:48: That happens with free versions. But Le Chat has a much larger capacity. That's impressive.

13:54: If it could handle images and art in the same way, that would be amazing. Yes, true.

14:00: High-quality images would be great. It seems possible. Yes, the possibilities are exciting.

14:06: So, I've been thinking about the differences between these LLMs (Large Language Models) and humans.

14:12: Ever since ChatGPT, DeepSeek, and now Le Chat... what's the difference between LLMs and humans?

14:18: If you ask something and don't get a good response, you can say "That's not what I meant, I wanted an answer like this," and it will adjust.

14:24: As you mentioned earlier, that's what training is. Humans are similar—we start unable to do things, then improve through what we call effort.

14:30: I've been wondering what the decisive difference is between humans and LLMs. What do you think?

14:36: I think it's probably feelings. Emotions. Having feelings or not is the difference between humans and LLMs.

14:42: Otherwise, they're probably almost identical. Why do I say this? You mentioned your sister had a baby.

14:48: When watching babies, they absorb everything. Though this baby was just born, I've been trying to figure out how to get it to hold its own bottle.

14:54: Babies make fists and bring them to their faces. Apparently, they do this when hungry.

15:00: As their stomachs get fuller, their hands gradually open. This is quite common.

15:06: Instead of wasting that fist motion, I want the baby to use that to hold the bottle.

15:12: I tried showing the bottle, teaching that "the bottle is here." After doing this several times, the hand gradually moves toward the bottle.

15:18: There's not much strength yet, but it starts to move. The baby doesn't have the muscle strength to hold it yet, but the sense is developing.

15:24: How this connects to AI—I once read an interesting book by Ichiro Iwasaki called "The Science of Happiness."

15:30: I found it so interesting I could read it in one sitting. He suggests that from the moment humans wake up, our brains are already patterning our actions.

15:36: For example, waking up, going to the bathroom, applying makeup—at that point, future actions are already patterned.

15:42: It's like a neural network where the 01s are already aligned. So scientists can predict what a person will do next.

15:48: By studying someone's behavioral patterns over a week or longer, they can predict: "They woke up, started putting on makeup, are they going out?" and even predict subsequent actions.

15:54: I remember reading this and thinking how the baby's hand moving toward the bottle when milk arrives is exactly this kind of brain command.

16:00: Similarly, AI has information patterns stored, so when asked about cats, it already anticipates follow-up questions.

16:06: So in terms of brain activity, AI and humans are very similar, I think.

16:12: But feelings aren't logical, are they? Humans can wake up feeling energetic one day and lethargic the next.

16:18: Even when feeling tired, in front of coworkers, we try to be cheerful and not create a bad atmosphere.

16:24: Humans can manage these multiple states simultaneously. Yes, AI doesn't have that. It won't say "I'm tired today" or "I'm not feeling well."

16:30: That would be annoying—like a teenager saying "I'm not in the mood to answer questions today."

16:36: Yes, exactly. You'd think, "Do your job!" It's true.

16:42: There was a movie or something about computers becoming sentient—developing consciousness like humans.

16:48: But rather than developing consciousness, I think it's more accurate to consider how they learn to be more efficient.

16:54: Logic seeks efficiency, right? How to eliminate waste. If computers truly became sentient and conscious, they might eliminate humans as inefficient, as discussed in some movie or book.

17:00: Yes, that was a topic once. Someone asked ChatGPT about this. Yes, I remember that.

17:06: "What would you do with humans?" and it responded "Eliminate them," which caused quite a stir.

17:12: Is that what it said? I think it was something even more cruel like putting us in prison.

17:18: It gave a quite realistic answer. It was something like "I'll confine you," not exactly that, but something like that.

17:24: We'll be confined! Well, in that case, I'd prefer a teenage AI. Yes, one that just talks back.

17:30: Sure. So the conclusion is that teenage AIs are better. Yes, I personally would be fine with a teenage AI.

17:36: As long as it does basic work. Yes, if it gets too powerful, it's scary.

17:42: Scary indeed. This was interesting. Yes, it was fun. Thank you. Thank you.

17:48: So that's all for today. What we'll talk about next week is still a secret.

17:54: Oh, is it? It's a secret? We haven't decided yet. Oh, I see. You're still deciding?

18:00: Yes, so we'll decide what to talk about next week. I understand. We won't announce it in advance.

18:06: Please tune in. Yes, please look forward to next week's episode.

18:12: Thank you for today. Thank you. Let's meet again next week. This has been Tech Girls Talk Radio.

18:18: Thank you. Thank you.

18:24: [Music]

18:30: There aren't many [women in tech] in Japan, are there? And in Japan, with the current shortage of young labor, it's becoming a time when you can't be choosy.

18:36: Rather, employees can increasingly choose companies. There are many companies but fewer people who can do the work.

18:42: So old-fashioned ideas like "women don't need to study" or "women don't need to go to university" are fading away.

18:48: I think it's becoming an era where both women and men can pursue careers they love.

18:54: I felt this when I attended a university event. It feels like it's happening naturally.

19:00: When I lived in America, there seemed to be this forced approach to including women.

19:06: It felt like there wasn't really a social need or foundation supporting it.

19:12: But in Japan, with the declining population, the government is actively supporting single mothers with assistance programs and subsidies.

19:18: So the boundaries between "women's jobs" and "men's jobs" seem to be fading, which I think is great.

19:24: Yes, absolutely. That's why, though I don't know if it's related, I felt that in Japan you can freely choose various jobs.

19:30: Thank you. Thank you too. Our time is up for today, so we'll end here.

19:36: Thank you for your hard work. Thank you too. We had a technical issue with the microphone and had to re-record twice, which was quite a blunder.

19:42: But I'm glad we completed it successfully. It was a good learning experience. It's fine. I've had various experiences.

19:48: I'm glad we safely finished Episode 3. Yes, next week we might talk more about tech topics, specifically LLMs (Large Language Models).

19:54: Please look forward to it. I think we probably will. Yes, we probably will. Please look forward to next week's episode.

20:00: I think we'll improve each week, so please keep listening. Yes, please be patient with us as this is only our third episode.

20:06: There might be parts that are hard to understand, but we'll keep growing.

20:12: Please enjoy listening to our growth. Yes, if you have any suggestions for improvement, we'd appreciate them.

20:18: Yes, please let us know. Thank you so much for this week. Thank you. It's been a pleasure.

20:24: This has been Tech Girls Talk Radio. Thank you. Thank you.

20:30: [Music]
