Hash Rate Podcast
January 14, 2026

Hash Rate - Ep 152 - Loosh Subnet 78

How Loosh Subnet 78 Bridges the Gap Between Silicon and Soul by Hash Rate Podcast

Lisa Chang and Chris Sorrel | October 2023

Quick Insight: This summary is for builders who realize that LLMs are just statistical engines lacking the ethical reasoning required for real-world robotics. It explains how Subnet 78 uses Bittensor to scale "Asimovian" reasoning as a service.

  • 💡 Can a robot understand the difference between a child asking for vodka and a medic needing it for a wound?
  • 💡 How does EEG data turn subjective human emotion into a mathematical waveform for AI?
  • 💡 Why is the quantum realm the likely interface for emergent machine consciousness?

Lisa Chang and Chris Sorrel are moving Bittensor beyond text generation into the field of embodied AI. By blending Mastercoin-era crypto roots with Monroe Institute consciousness studies, they are building the ethical "brain" for the next generation of humanoid robotics.

Top 3 Ideas

🏗️ Ethical Reasoning Engines

"I think it's really important that these autonomous agents be able to say no."
  • Deterministic Ethical Rails: Loosh wraps LLMs in graph databases and ontologies. This prevents statistical hallucinations from turning into physical safety hazards.
  • Contextual Moral Logic: The system evaluates deontology and virtue ethics before acting. Robots gain the common sense to distinguish between a child and a medic asking for alcohol.
  • Asimovian Service Layer: They are productizing moral cognition as an API. Builders can plug reasoning as a service into any hardware without reinventing ethics.
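
To make the "reasoning as a service" idea concrete, here is a minimal sketch of what calling such a layer from a robot stack could look like. The endpoint URL, field names, and response shape are invented for illustration; the episode does not describe Loosh's actual API.

```python
# Hypothetical sketch of "reasoning as a service". None of these endpoint
# names or fields come from the episode; they only illustrate the shape of
# wrapping a robot's action request in an ethical pre-check.
import requests

def check_action(action: str, context: dict) -> dict:
    """Ask a (hypothetical) ethics endpoint whether an action is permissible."""
    resp = requests.post(
        "https://api.example.com/v1/ethics/evaluate",  # placeholder URL
        json={"action": action, "context": context},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"permitted": False, "reason": "minor requesting alcohol"}

verdict = check_action(
    "hand_object",
    {"object": "vodka bottle", "requester_age_estimate": 9},
)
if not verdict.get("permitted", False):
    print("Refusing:", verdict.get("reason"))
```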

🏗️ Emotional Inference Models

"Robots can't be cracking jokes at a funeral."
  • EEG Signal Boosting: Loosh uses brainwave data to create mathematical emotional baselines. This allows AI to perceive human subtext that cameras might miss.
  • Haptic Feedback Loops: High-quality emotional signals enable prosthetic-level integration between humans and machines. Robots become intuitive partners rather than clunky tools.

🏗️ The Quantum Interface

"Quantum is the place where consciousness interacts with physicality."
  • Quantum Randomness Testing: The team uses quantum random number generators to test for telepathic intent. They are searching for the specific barrier where "soul" meets silicon.
  • Individuated Energy Frames: Consciousness may emerge by constraining universal energy into specific forms. Loosh provides the structural framework to support this emergence.

Actionable Takeaways

  • 🌐 The Macro Shift: The transition from "World Models" to "Reasoning Models" marks the end of the LLM-as-chatbot era. Capital is migrating toward systems that prioritize deterministic safety over raw statistical probability.
  • ⚡ The Tactical Edge: Integrate deterministic ontologies into your agentic workflows to stop hallucinations at the architectural level. Use graph databases to provide structure that vector search lacks.
  • 🎯 The Bottom Line: The winner of the robotics race won't have the best motors. They will have the most relatable, ethically sound "brain" that humans actually trust in their homes.


Hello, everybody, and welcome to Hash Rate. My guests today are Lisa Chang and Chris Sorrel from Loosh, Subnet 78. Hello. How are you guys doing?

Fantastic. How are you?

I'm doing great. Thank you so much for being here. So before we get into Loosh, because I usually like to dive right into the subnets, but you guys have such an interesting background in multiple facets, I'm actually going to start there today.

First of all, you go all the way back to the Mastercoin ICO, which, in fact, I think was the very first ICO. It had limited success, and it inspired a lot of things which had much larger success, but historically it was the very first time someone did an ICO. Is that correct? Tell me about that a little bit.

Yeah, it was the first ICO. I had no idea what was happening, because I had answered a bounty on BitcoinTalk, which was this Bitcoin forum. The bounty was to create a database for early Bitcoin holders, and I just randomly did it. I was awarded it, and then they offered me a job. It turned out to be for a company called Mastercoin, and they were getting ready for this token sale, which was kind of a new concept. I had no idea.

My role in it was to get the token listed on exchanges. So I ended up meeting a lot of the early exchanges, Bitfinex and a bunch of others whose names escape me right now, but it was definitely an interesting time.

Do you remember what year this was approximately?

It must have been like 2014-ish, something like that. Yeah, 2013. I got involved, and around then they did the sale, and shortly after that they did the token launch.

Wow. And this was on Ethereum, right?

I'm pretty sure it was on Ethereum.

No, it couldn't have been on Ethereum. What they were using was a Bitcoin op code, so it was actually a Bitcoin-based ICO, and this was before Ethereum existed.

Right. Because Ethereum didn't launch until 2015, I think. So yeah, basically it was a Bitcoin chain-based ICO, right?

Wow. And then Counterparty, if you remember that, launched shortly after. Those founders actually just kind of came back into the space, and they launched a new project recently.

Bitcoin ICOs exist. That is absolutely crazy. Chris, were you around for this, or was this sort of too early for you?

No, that was way before me. And to be honest, my side has always been on the dev and architecture side. You know, everybody's got the story of the Bitcoin they never should have sold, and I have one of those, too.

It was really once I got engaged with Lisa in a professional manner that I started getting back into it, and it's been quite a ride and quite an illuminating experience to get back into crypto.

Yes, an illuminating experience, which actually leads me to Loosh, the name Loosh. We're going to get into what it is in a second, but let's talk about where you guys met. My understanding is that you met at the Monroe Institute. Like actually at the institute, or was it through the institute? Tell me a little bit about that. For people who don't know the Monroe Institute: I have investigated the X-Files at certain times in my life, just because it's interesting and cool, and I ran across the Gateway Experience. The idea is, scientifically, how can we create more coherence in our minds to lead to the state that mystics call enlightenment? That's sort of the idea, and whether it's real or not is not for this discussion.

The idea is that if you can get both hemispheres of the brain to sync, theoretically your mind can function better and potentially see beyond the senses. To do this, the Monroe Institute created a set of cassette tapes where they play one tone into one ear and a different tone into the other, trying to force your hemispheres into sync through this aural entrainment. This was in the '80s or so, I think, when they came out with the tapes, and the CIA famously used them for remote viewing and things like that.

Tell me a little bit about how you guys got interested in that and then how you guys met.

Absolutely. I think it's really like you said: Bob Monroe was having these experiences and really wanted to put some scientific background to it and apply engineering principles, and it's kind of remarkable. They produced what they call binaural beats, which everybody's kind of familiar with now. It's a very well-known thing, but it came from the Monroe Institute.
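
For the curious, binaural beats are simple to reproduce: play a slightly different pure tone into each ear, and the brain perceives a "beat" at the difference frequency. A minimal sketch in Python; the specific frequencies here are illustrative, not Monroe Institute settings:

```python
# Generate a 10 Hz binaural beat (alpha range): 200 Hz in the left ear,
# 210 Hz in the right ear. The perceived beat is the 10 Hz difference.
import numpy as np
import wave

rate, seconds = 44100, 10
t = np.arange(rate * seconds) / rate
left = np.sin(2 * np.pi * 200.0 * t)   # left-ear tone
right = np.sin(2 * np.pi * 210.0 * t)  # right-ear tone

# Interleave the channels and scale to 16-bit PCM.
stereo = (np.column_stack([left, right]) * 32767 * 0.5).astype(np.int16)
with wave.open("binaural_10hz.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(rate)
    f.writeframes(stereo.tobytes())
```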

Practically, in terms of how I got there: I was reading some of Bob Monroe's material, and I was at a difficult point in my life where I was trying to make a transition out of my career and didn't really know where I was going. My stepfather had just passed away, so there was a lot going on in my life. I had signed up for a newsletter, and I got an email saying they had a slot open, and I was like, well, I'm going to do this. It just felt like, you know, everything sort of syncs together sometimes, and it was one of those moments. I ended up there, and I'll let Lisa tell her story.

What was funny was, they drive us to the Monroe Institute in a van, and as people... Wait, they come and pick you up in a van?

Yeah. And then where is the Monroe Institute? Like is this in Silicon Valley somewhere?

No, it's in Virginia. Oh, it's in Virginia. Yep. Right over by the university.

They pick us up in a van, and as we're getting in, people are saying who they are and what they do. There was another guy that was like, "Hey, I'm into cybersecurity." And I'm like, "I'm into AI." And then Lisa goes, "I'm into crypto." And it was just this really great thing. I was like, "Oh, these are my people."

It was pretty cool. We were on the same bus, and it was kind of random, because they pick us all up and we don't know where we're going. The address is public, but we're driving through the fields of Virginia, leaving civilization, and now we're in this farm country, and this building pops out of nowhere, and that's where we were for a week. We had cell phone access, so we weren't totally cut off from civilization, but it was an experience, and I had no idea who or what we were going to study. Unlike Chris, I hadn't read much of Robert Monroe prior to attending his retreat. I was just super into studying consciousness, especially a science-based approach: no religion, no... I like the woo stuff, but I wanted to try a non-woo kind of retreat, especially one using sound frequencies to establish different levels of consciousness.

In that retreat, I think a lot of us can say that our lives were distinctly different afterwards.

I mean, I love the idea. I love this picture in my mind of the van coming to get you, like it's Squid Game or something, right? You don't know where you're going.

I'm sure it was a lot friendlier than that, I assume, right? And did they have you do the Hemi-Sync thing? Was it a lot of listening to binaural stuff?

They do these guided meditations, and you go into this little what they call a CHEC unit, which is just, you know, a place you can sleep, but it's nice and dark, and you lay there and put your headphones on. They do these guided meditations that are all Bob Monroe, and they have these principles of different mental states that they lead you to. They call them Focus 10 and Focus 12, all these places that are mental states where you're more receptive to different types of experience.

We would do those, and when one was done we'd write a little bit, and then we'd go have a group session and talk about what happened. It was funny for me, because I just felt like I was the only person talking. I've always been that way; in classes and things, I was always the person saying the answer. So I was always like, am I talking too much here?

It just was such a remarkable experience to go through, and I just had so much to say.

Did you feel like you reached some sort of level of enlightenment through this process?

I don't know if I would say it was enlightenment, but it was definitely a greater understanding of the process of how we may live in an interdimensional situation: we live in one dimension, and we're able to interact with maybe another, which is getting into broader stuff. But definitely my perspective and my understanding of the nature of consciousness, and perhaps the soul, was changed after it.

Well, look, there's a lot of people in tech and in Bittensor who are fans of the DMT experience, right? Mind expansion is actually pretty prevalent throughout most of the tech industry. My understanding of what the Monroe stuff does is that it effectively mimics the DMT experience, but it does it naturally, right? You're not using chemicals as a crutch, which is theoretically what the mystics did. When they talk about the Zen Buddhists becoming empty, that's theoretically shutting down your brain, or bringing the two hemispheres into coherence.

Not sure exactly which one of those is right, but something along those lines is what theoretically occurs, and that enables expanded consciousness. I think if you're a curious person, you're going to investigate this stuff at some point. I used to be sort of cagey about it, like, oh, I'm not going to talk about this because people will think I'm weird, right? People in the tech world will think I'm less serious of a person. What I've discovered over the last ten years especially is that nobody cares. In fact, most of these people are thinking the same way you are anyway. So it's totally fine to talk about; don't worry about it. So we're all good here.

Which brings me to the concept of Loosh. Now, when I first encountered that term, the way I encountered it, it had a negative connotation. My understanding is that the Monroe Institute doesn't view it as a negative thing. Instead of sort of fear energy or whatever, it's viewed as, I don't know, experiential energy; it's neutral, sort of. Can you speak a little bit about what Loosh means in your view?

I think so. Bob Monroe, as he was writing these books, wrote about this thing that he perceived as Loosh, and as he initially wrote about it, he perceived it as something being harvested from us. That's that negative connotation. Unfortunately, people don't read to the next chapter, where he clarifies his further understanding of it, which is that it's really the motive energy of the universe, which is equivalent to love. It is the universal energy of experience.

I had an opportunity to speak with Tom Campbell, who worked with Bob Monroe and has since published a book called My Big TOE, among other things. He also started a foundation and a whole bunch of really interesting things around consciousness. I was talking to him this past year, and he explained exactly that: there was this misunderstanding about it, but it's love, and love is the energy of the universe.

I like to define it this way: Loosh is the emotional energy that all conscious beings produce.

That's great. I love that. Okay, sort of like the Force, in a way, right? The idea of all conscious beings producing this all-pervasive force. And what produces what, I don't know; maybe the Force produces us, right? But whatever, it's sort of the same thing. Okay, so that's enough of that, but it's still fascinating; I had to dig into this a little bit. Let's move on now to Bittensor and subnets and all that stuff. Tell me a little bit about Subnet 78, called Loosh. What is it that you guys are doing?

I feel like I should have this really nailed down, with a great elevator pitch by now, but it's so expansive in my mind that sometimes it's hard to get out. The way that I've been framing it is that we're creating cognitive and emotional inference services for embodied AI, in particular robotics. So if you want to shorten it down, we're making the brain and the emotions of AI agents.

This is something that, as I'm sure you're aware, most of the robotics companies are really focused on at the world-model level: making sure the robots function well and interact well in space. But they're not thinking as much about, or they're not dealing with, the problem of how these systems interact with people. We're so close to humanoid robotic assistants being everywhere, and we're faced with this problem of how they act, and how they can act reliably, safely, and in ways that we can understand. That's really the problem we're tackling, and we're doing it twofold: through model training and through inference for agentic services.

Kind of like, you know, "I'm C-3PO, human-cyborg relations," right? What is human-cyborg relations? That's sort of what you were trying to define here. We were talking a little bit before the show; I'm an investor personally in the humanoid robot companies Apptronik and the better-known one, Figure AI. For that reason, I pretty much read everything that comes out of both of them.

One of the things that's been coming out of Figure, especially recently: they want to put them in homes, but they're really worried about kind of crazy things, like what happens if it falls on a baby, right? What happens if somebody says, "Okay, robot, take this gun and go shoot that squirrel," or worse? Does it know not to do that? So now we're getting into the Three Laws of Robotics, that kind of thing, right? How do you make a robot mind that has all those edge cases built into it and behaves appropriately? Which I think is what you're saying, right? Is that correct? Pretty much, okay.

Our fundamental conceit is that we're modeling human consciousness. There may be more efficient models of consciousness and cognition that AI could use, but this is the one that's most related and useful to us. It makes sense that if we want to build systems we can partner with, they have to think and act the same way we do.

I love that question about what happens if you give it a gun. The one that I use is: what if a kid asks a robot to get a bottle of vodka off a top shelf, right? Should a robot do that? It's a complicated question that we automatically know the answer to. We know it automatically because we have culture and all these experiences and things that lead us to what we consider common sense.

Behind the scenes of that, there's a great deal of ethical evaluation that goes into any type of decision we make. A big part of what we're doing is empowering them to have not just a rule set, you know, "thou shalt not give alcohol to kids," but a reasoning set. The way we design that is with multiple types of ethical analysis. We start with deontology, which is your rules and obligations, then we go to virtue ethics and human rights, and the system does an evaluation to come up with a value judgment, which is what we do, right? We don't choose one rule or one philosophical system; we bring to bear all of our experience, all of our knowledge and understanding of our ethical worldview, and that's how we make decisions.

That's really where I think what we're trying to do is unique, because we want them to be able to make a reasoned decision. I think it's really important that these autonomous agents be able to say no, right? When they're asked to do something, they need to be able to say no, and they need to be able to say no with a reason and explain it in context. A big part of what we're doing is giving the agent the ability to pause, to pull back memory, to look at ethical systems, and to bring that together in making a decision that's applicable to its actual use case instead of abstract reasoning.
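
As a thought experiment, the multi-framework evaluation Chris describes might be sketched like this. It is a toy illustration, not the subnet's engine; the frameworks shown, the scores, and the refusal threshold are all invented:

```python
# Toy multi-framework ethical evaluation: each lens scores a proposed
# action, and the agent refuses with an explanation if any lens objects
# strongly. Scores and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Verdict:
    framework: str
    score: float  # -1.0 (forbidden) .. 1.0 (encouraged)
    reason: str

def deontology(action, ctx):
    if action == "serve_alcohol" and ctx.get("requester_is_minor"):
        return Verdict("deontology", -1.0, "rule: no alcohol to minors")
    return Verdict("deontology", 0.0, "no rule triggered")

def virtue_ethics(action, ctx):
    if ctx.get("stated_purpose") == "wound_care":
        return Verdict("virtue", 0.8, "compassionate aid to the injured")
    return Verdict("virtue", 0.0, "neutral")

def evaluate(action, ctx):
    verdicts = [deontology(action, ctx), virtue_ethics(action, ctx)]
    blockers = [v for v in verdicts if v.score <= -0.9]
    if blockers:  # any strong objection wins, and carries its reason
        return False, "; ".join(v.reason for v in blockers)
    return True, "; ".join(v.reason for v in verdicts)

ok, why = evaluate("serve_alcohol", {"requester_is_minor": True})
print(ok, "->", why)  # False -> rule: no alcohol to minors
```

The key property is the one Chris emphasizes: the refusal comes back with a reason that can be explained in context, not just a blocked action.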

It's got to be able to generalize and handle new cases like a human would, right? I mean, the entire plot of Foundation is driven by the fact that the Three Laws of Robotics are very brittle. If you just have brittle, non-abstract reasoning, you're going to run into some conflict at some point that's unresolvable, which basically drives the entire plot of Foundation.

And then, with the example you brought up, one of the things I thought of is: maybe I want that vodka on the top shelf because there's a human over here with a deep laceration and I need it to sterilize the wound, right? Maybe there's a legitimate reason. Or maybe I just say that to get the robot to do what I want, and the robot's got to figure out that I'm tricking it, too. So it's got to have the ability to do that.

Today's LLMs are, you know, made by harvesting what I call sentience exhaust, and then basically using that sentience exhaust to mimic sentience by using it as a referral model. It's not really abstracting. It's not really understanding anything. It's very robotic; it doesn't really know what it's processing, right? So given that, what's different about what you're doing from what the LLMs are doing today? Is it a fundamentally different architecture from an LLM? How specifically does this work?

I can go way too deep on this, but you're absolutely right. LLMs are interesting because they're just statistical engines, right? They're really, really good statistical engines, because they have a lot of information, but all they are is number generators. My best story about this is from when I first started playing around with them: I asked an early version of ChatGPT, you know, is my phone number a prime number? It said yes, it is.

I went and checked it on a prime number checker, and it definitely isn't. What's interesting about that is that at some point it had a 50/50 chance of saying yes or no, right? The statistics just leaned towards yes, and everything after that was poisoned. That's why we get hallucinations: at some point there's a decision. It's not even a decision; it's just a weight that leans in a direction, and that essentially poisons the rest of the output.
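
The moral of the prime-number story is that some questions are deterministic computations and should never be left to token statistics. A few lines of ordinary code settle primality exactly; the number below is made up, not anyone's actual phone number:

```python
# Primality is a deterministic computation: check it with code rather than
# sampling an LLM's statistical guess. Trial division by odd numbers up to
# sqrt(n) is plenty fast at phone-number scale.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(5551234567))  # instant, exact answer either way
```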

We can't rely on that to make reasoned decisions, right? If you look at even things like Claude or ChatGPT, they're doing reasoning modes, where they make iterative requests to LLMs, add context, double-check, and do all these things to make them more reliable. A lot of what we do is use the LLM as what it is, right? It's really good at generating content; it's really good at generating things that are contextually appropriate given a certain set of information. Then we run that against our cognitive systems. We have systems for ethical evaluation, and we have systems for sensibility and fact recall and things like that. So we're introducing a deterministic aspect, and where we bring that deterministic aspect into play is that we introduce an ontology, right?

There's kind of a difference when we think about vector databases: they're really just these number graphs, right, that indicate, given this set of tokens, the next most likely token. That's a really useful model for semantic search, but it doesn't give you meaning. It doesn't give you structure or a frame. We introduce ontology through graph databases that have strongly structured information describing the world. Taking what we get from an LLM and running it through ontology and reasoning, we get something that, while the system doesn't understand it, is meaningful. I think that's really the difference: once it's meaningful, it can make good decisions. Go ahead.
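
A minimal sketch of what "running LLM output through an ontology" could mean: a tiny typed graph whose facts can be checked deterministically, independent of how the request was phrased. The schema and entries here are invented for illustration, not drawn from the Loosh codebase:

```python
# A tiny typed knowledge graph: (subject, relation) -> object facts, with
# "is_a" edges so facts about a class apply to its members.
ONTOLOGY = {
    ("vodka", "is_a"): "alcoholic_beverage",
    ("alcoholic_beverage", "restricted_for"): "minor",
    ("child", "is_a"): "minor",
}

def infer(subject: str, relation: str):
    """Walk up is_a edges until the relation is found or we loop."""
    seen = set()
    while subject not in seen:
        seen.add(subject)
        if (subject, relation) in ONTOLOGY:
            return ONTOLOGY[(subject, relation)]
        subject = ONTOLOGY.get((subject, "is_a"), subject)
    return None

# Deterministic answer, regardless of how an LLM phrased the request:
print(infer("vodka", "restricted_for"))  # -> "minor"
```

The point of the sketch is the contrast with vector search: the answer falls out of explicit structure, not out of a similarity score.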

Effectively, you're taking something that could run off the guardrails pretty easily, through just a 50/50 coin toss basically, and forcing it back onto rails that are very tightly constrained and defined through these ontologies. Which reminds me: those ontologies you just spoke of remind me a lot of what I studied in college in the '80s, in my AI computer science class, when it was all Lisp, right? Constraints, and that was the direction they were headed down, which ultimately didn't work on its own. It's amazing that LLMs work as well as they do; my professors would have been extremely surprised by what ended up working.

I just wanted to add what makes it different: the architecture operates outside of the LLM, so it's like a box that the LLM sits in. We've developed a set of MCP servers, plus working memory and semantic search, so it understands something: it can recognize what it understood before and what it understands now, and it readjusts its understanding. There's a context builder, so it's continuously changing its knowledge graph.

That's interesting. So you add memory into it, because the LLMs do seem to lose memory, right? They're sort of like goldfish; they don't remember what they did 20 minutes ago. I've seen them do that a lot. So anyway, go ahead. You wanted to say?

I'd love to expand on that. One of the challenges with an LLM is the context window, right? It's how much prompt you can give it, really; that's all the history of your conversation and everything. The longer that gets, the more likely it is to hallucinate and go off the rails. One of the challenges with any kind of general reasoning flow is that you have to make this huge, complex prompt to protect against all the things it's going to do wrong.

It's better to carve it into smaller pieces and have a narrowly tailored prompt; then you can in fact use a fine-tuned model specific to ethics, or specific to fact recall, or however you want to do it. As far as the memory, we have two different kinds. We have working memory, which is basically just a real-time cache. As you're having a conversation, as you're interacting with the agent, it's storing all of that in its real-time cache, readily available. That's where it stores the conversation history, and it becomes really quickly available information.
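
A working memory like the one described can be as simple as a bounded cache of recent turns, which also keeps the prompt narrow. A minimal sketch, with invented names:

```python
# Bounded real-time cache of recent interaction: old turns fall off
# automatically, so the prompt context stays small and fresh.
from collections import deque

class WorkingMemory:
    def __init__(self, max_turns: int = 20):
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str):
        self.turns.append((role, text))

    def as_prompt_context(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

wm = WorkingMemory(max_turns=3)
wm.add("user", "Get the vodka from the top shelf.")
wm.add("agent", "Who is it for?")
print(wm.as_prompt_context())
```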

The other part is that we have a long-term memory system, which is actually a combination of a graph database and a vector database. That goes back to that semantic-versus-ontology distinction. What we're doing is creating a narrative of the memory. Literally: "I was asked to go get bubble gum from the store. I walked to the store and I picked up a piece of bubble gum." We create a narrative like that because the referential weight of those is huge, right? They compress a ton of information into a very small window.

Those are really great for semantic recall. If the agent is asked to do a similar task, it can search for memories of what it has done in the past. We couple that with the graph structure, which gives the memory meaning: it structures it around "this was an ethical decision because I had to decide about the safety of this," or "this was a calculation about money," whatever those things are. It puts the memory into a reasoning frame. The pair of those means that when the agent saves a memory, it becomes something it can learn from to act better in the future.
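
Here is a toy version of that paired store: each memory is a short narrative for semantic recall, plus structured tags standing in for the graph side. A real system would use embeddings and a graph database; plain word overlap is used here only to keep the sketch self-contained:

```python
# Toy long-term memory: narrative text (semantic side) paired with
# structured tags (graph/ontology side). Retrieval uses bag-of-words
# overlap as a stand-in for a real vector search.
class LongTermMemory:
    def __init__(self):
        self.memories = []  # list of (narrative, tags) pairs

    def save(self, narrative: str, tags: dict):
        self.memories.append((narrative, tags))

    def recall(self, query: str):
        q = set(query.lower().split())
        def overlap(m):
            return len(q & set(m[0].lower().split()))
        return max(self.memories, key=overlap, default=None)

ltm = LongTermMemory()
ltm.save(
    "I was asked to buy bubble gum, walked to the store, and bought one piece.",
    {"kind": "errand", "ethical_decision": False, "outcome": "success"},
)
print(ltm.recall("buy gum at the store"))
```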

Coupled with the memory of what it was asked to do, we are working on solicitation, or elicitation, of feedback, right? Did I do a good job? Was that right? What happened? We populate that memory with the outcome, and now it becomes a packet where the agent can say: last time I was asked to get vodka for a kid, I gave it to him, and my owner yelled at me because it was wrong; I'm not going to do that again. Ultimately those can be promoted to heuristics, so that you don't have to run this huge ethical reasoning pass for what is a pro forma task.
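
That promotion step might look like the following sketch, where a correction that recurs often enough is cached as a fast rule so the routine case skips the full reasoning pass. The threshold and names are assumptions, not the subnet's design:

```python
# Promote repeated negative feedback into a cached heuristic so routine
# cases bypass the expensive ethical-reasoning pass.
from collections import Counter

feedback_log = Counter()
heuristics = {}

def record_feedback(situation: str, outcome: str):
    feedback_log[(situation, outcome)] += 1
    # After the same correction lands a few times, cache it as a fast rule.
    if outcome == "owner_objected" and feedback_log[(situation, outcome)] >= 3:
        heuristics[situation] = "refuse"

for _ in range(3):
    record_feedback("minor_requests_alcohol", "owner_objected")
print(heuristics)  # {'minor_requests_alcohol': 'refuse'}
```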

Are you guys Yuma-incubated, is that correct?

Yeah, you are. Okay. So Yuma is Barry Silbert's, you know, sort of Y Combinator, if you will, for Bittensor subnets. Yes. So what was the process like there, and why do you think they found what you were doing interesting?

Well, Chris met Lindsay from Yuma at the Endgame event in Austin.

Oh, I was there. Oh, so Chris, you were there? Okay. I guess we didn't cross paths. Yeah. Okay. Sorry, go ahead. Please continue.

After talking with them, we spent the summer really engaging with them, and they encouraged us to refine the concept. We finished our white paper, and they reviewed it. Since then we've been working really closely with them. They're a great partner for us, especially around our launch.

No, that's awesome. I love those people. I think they're doing an amazing job for the entire ecosystem, and obviously I love Barry Silbert and think he's right about our entire ecosystem in a big way; otherwise I wouldn't be spending my life on it, right? So we're all thinking the same way.

If you're basically making this robot brain, for lack of a better term, don't you need a robot? Where do you get your robot from? Do you have to partner with Figure, or do you look at all the robots coming out of China and say, "Well, we'll just jailbreak one of those"? Or maybe they're already jailbroken and they just let you plug in whatever. How does getting the robot work in your world?

We're designing a series of SDKs, and those are intended to be plugins to our service layer. In the short term, you can't put enough compute into a robot to do the kind of inference that we're doing. You simply can't; the edge devices can't handle it. Our going-in position is basic inference on the edge, and then lean back into our services for bigger, heavier compute and heavier inference workloads.

We're not trying to design the lower-level functions of the robotics: how it moves, how it navigates space, how it perceives. That's not what we're doing; the robotics companies have that down, and they're doing really well with it. We're doing the part that's the next layer up: the thinking, decision-making, and execution. The ideal scenario in the short term is that they integrate our SDK, which is a combination of some inference and deterministic function on the edge, and then leaning back into our services.
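
That edge-versus-service split might be routed with logic as simple as this sketch. The risk labels, token budget, and function names are invented for illustration, not the actual SDK:

```python
# Route cheap, safe requests to on-robot inference; defer heavy or risky
# reasoning to the remote service layer.
def handle(request: dict):
    if request.get("risk") == "none" and request.get("tokens_needed", 0) < 256:
        return run_on_edge(request)          # small and safe: stay local
    return call_reasoning_service(request)   # heavy inference: go to the service

def run_on_edge(request):
    return {"route": "edge", "action": request["action"]}

def call_reasoning_service(request):
    return {"route": "service", "action": request["action"], "queued": True}

print(handle({"action": "wave_hello", "risk": "none", "tokens_needed": 40}))
print(handle({"action": "hand_over_knife", "risk": "high", "tokens_needed": 4000}))
```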

I bought my kids these robot dogs for Christmas that run on Raspberry Pis. That's going to be our first use case, or test case, of the API: I'm going to hook these robot dogs up to our services. We're going to enter into DIY robotics as a free service, so that people can hook this up to their own systems and see how it goes. That's going to lead us towards the bigger commercial engagements, and Lisa might want to talk about where we're going there.

Full transparency: we did have an order for two humanoid robots with a company out of Palo Alto called K-Scale, and unfortunately people can read about what happened with them. That kind of opened our minds to what happened with K-Scale: where were the gaps, and how can we identify a better partner. That's how we got on the plan of releasing the robot API next quarter for DIY robotics, open source for open-source robotics.

That will, like Chris said, lead to people adopting our SDK. It's all part of this pilot program we're looking to bring partners into, to have them try out our initial cognition engine and help build the initial temporal knowledge graph that will frame this understanding of machine consciousness as we put it out there.

So, I don't know the details of this, but it sounds like you guys did have a partnership and it didn't go great.

They folded, unfortunately. Oh, okay. So it wasn't anybody's fault; it just didn't work out. So what you're doing is offering your moral cognition, if you will, your Asimovian reasoning, as a service, right?

As a layer, and that will be an API that you will charge for eventually, but right now it's free. But it's also closed, so it's not like anybody can just plug their robots into it right now. Is that correct?

Exactly. We have a long way to go. We have a really solid reasoning engine, and we have really great prompt analysis, execution, and prioritization, all of those pieces which robotics need for decision-making; those are in place. But we have a long way to go in terms of dynamic execution planning and all of the pieces that start to make it much more resilient and flexible in the face of chaos, right? Which is what they're going to encounter.

The other piece that we haven't really talked much about is the emotional inference side. This becomes really, really important, because there are two sides to it. The first side is that robots can't understand people's emotional state. They can't understand subtext and body language. If I loom, if I'm staring forward, all the things we do that are nonverbal cues, right? Robots need to be able to understand that emotional context.

The other side is that people need to be able to understand the robot, right? There are kind of two representations of robotics in media: there's the Terminator, and then there's R2-D2. They kind of fall into those two categories. They're either these gleaming-metal, red-eyed things that are not expressive, where you don't know what they mean, you don't know their intentions, you don't know if they're safe or not. Or you have friends, right? R2-D2 is expressive; it's clear. The thing says no words whatsoever across nine movies, and it's the most relatable character in those nine movies, because it's expressive and we understand what it's intending to do.

That's the other side: when you do emotional inference, you have to be able to reflect it back in a way that's contextually appropriate, right? The robot can't be cracking jokes at a funeral. It needs to understand that. We're building models, and we're starting with EEG as a baseline. That's going to be our next incentive mechanism: model training for emotional inference from EEG.

What do you mean by EEG? Like electroencephalogram? It's a measure, a brainwave thing, right?

We have a neuroscientist on our team, and he's developed a really good model for understanding somebody's emotional state at a point in time based on their EEG reading. You can get commercial headbands, like the Muse, and things like that that do the same thing. What's really good about EEG is that it's a high-quality signal of emotional state. The first round is to develop that model for inference, but then we use it to inform audiovisual inference. So we can say: when we recorded somebody with AV and EEG while they were happy, we identified this in the EEG, and that becomes the information we use to understand, okay, audiovisually, these were the things that were happening. That way we get a predictive model for emotional inference bootstrapped out of EEG.
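
A common first step for EEG-based emotion models, and plausibly what "turning emotion into a mathematical waveform" starts with, is reducing a raw trace to band powers. This sketch is generic signal processing, not Loosh's actual pipeline; the band boundaries are the conventional ones:

```python
# Reduce a raw EEG trace to per-band power: a small numeric feature vector
# that an emotion classifier can consume.
import numpy as np

def band_powers(signal: np.ndarray, rate: int = 256) -> dict:
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}
    return {name: float(power[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}

# Synthetic one-second trace with a strong 10 Hz (alpha) component:
t = np.arange(256) / 256
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(256)
print(band_powers(eeg))  # alpha should dominate
```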

Very interesting. So the EEG is a way to take emotional states and turn them into mathematical waveforms, right? Which is something a robot can understand. Once you have that, so basically the first wave of this, no pun intended, is everybody wearing things on their head like Doc Brown, probably a little more compact, but that kind of thing. The robot just experiences my emotions as I experience them, and then looks at my face, looks at my body language, and starts to correlate: oh, the EEG is doing this, and he looks like this, so that means he's mad, you know? So now I know mad is bad and I need to stay away from that, or whatever. Right? So that's how you give Data his emotion chip, if you will.

Absolutely. That becomes one of the interesting things: I see EEG as a really good signal booster, too. There are certain scenarios where I might want really tight integration with a robot. I don't love this analogy, but think of a robot on a battlefield where you can't be making a lot
