How Wearable Tech & AI Read Your Mind (and What to Do About It) | Nita A. Farahany

Nita Farahany

In this fascinating conversation, Jonathan Fields sits down with Nita A. Farahany, one of the world’s leading experts on the ethical and societal implications of cutting-edge neurotechnologies. Farahany, a distinguished professor at Duke University and author of the eye-opening book “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology,” pulls back the curtain on the remarkable brain-computer interfaces emerging from companies like Meta, Apple, and others.

You’ll discover how ordinary devices like headphones, watches, and AR/VR headsets are being enhanced with sensors that can decode your brain activity, thoughts, emotions, and more. Farahany explores the incredible potential benefits for treating neurological and mental health conditions, enhancing focus and productivity, and revolutionizing how we interact with technology.

But she also dives into the risks to our privacy, cognitive liberty, and freedom of thought as corporations and governments potentially gain access to this most personal data. Could this open the door to manipulation, discrimination, or punishment based on our private inner experiences?

Farahany provides a captivating glimpse into the miraculous brain tech heading our way while sounding the alarm about the urgent need to secure legal protections for mental privacy. You’ll walk away seeing the future of neurotechnology differently and understanding why securing our “cognitive liberty” may be one of the great human rights frontiers of our time.

You can find Nita at: Website | LinkedIn | Episode Transcript

If you LOVED this episode:

  • You’ll also love the conversations we had with Adam Grant about rethinking.

Check out our offerings & partners:

_____________________________________________________________________________________________________

Episode Transcript:

Nita A. Farahany: [00:00:00] This is like all of the sci-fi kind of scenarios that I’ve played out are about to come true.

Jonathan Fields: [00:00:06] Today we’re joined by Nita Farahany, a world-renowned scholar at the forefront of exploring how cutting edge neurotechnology is reshaping our brains, our minds, our laws and our lives. From her work as a distinguished professor at Duke University to her groundbreaking book, The Battle for Your Brain, she’s tackling the big questions about how emerging brain technologies are influencing not just our mental health, but our fundamental freedoms. This is a conversation you don’t want to miss.

Nita A. Farahany: [00:00:36] We haven’t hit a like button, but our brain lights up like a like button when we see something. And for companies and employers and governments to have access to that information and the closed loop environment that has been created, where not only do they have access to that information, but they can use it to shape the environment that we’re interacting with.

Jonathan Fields: [00:00:56] It’s exciting and terrifying simultaneously. You know.

Nita A. Farahany: [00:00:59] It’s not hard to start to see how this becomes dystopian pretty quickly if there aren’t any limitations on the kinds of data each of these different entities can have access to. In 2018, I was at a conference where one of the co-founders of CTRL-labs stood up and he was showcasing the technology. And, you know, he had it in the form of basically a watch on his wrist. And he said, why are we humans such clumsy output devices? We’re incredibly good at taking in information, but we’re really bad at getting it out of our brains. And what if, instead of, you know, using these sledgehammer-like devices on the ends of our arms, we just think about typing or swiping instead? And that was the aha moment, where both the form factor had been solved, by integrating brain sensors into everyday devices like a watch, and the functionality was being addressed, in that it was an interface to all of the rest of our technology rather than just a limited application. And I thought, like, that’s it. That’s the pivotal acquisition. I’m going to watch that product, because as soon as one of the major tech companies like Apple integrates it into the Apple Watch, all the things I’ve been following forever are going to go mainstream. And, you know, sure enough, that was the pivotal acquisition. It just happened to be Meta who acquired them a year later.

Jonathan Fields: [00:02:23] Yeah, I mean, this is so interesting, right? I remember back, um, and, um, unfortunately, I joined you in the Chronic Migraine Aura Club. Um, so it’s been a part of my story.

Nita A. Farahany: [00:02:32] Sorry about that, yeah.

Jonathan Fields: [00:02:33] As long as I can remember. You know, like so many people, I have tried so many different things, and there have been all sorts of tech and gadgets and things you stuck on your forehead and all the different stuff. Um, but it’s interesting, I think, what you’re referencing also in terms of, you know, like, one of the really early devices around meditation was this device, I think it was called Muse. I don’t know if it’s still around or not, where you,

Nita A. Farahany: [00:02:53] Yeah, it is.

Jonathan Fields: [00:02:54] Put it on. It was sort of like a, like a very simplified neurofeedback type of thing, which would try and kind of tell you when you’re in, when you’re in the zone or not, and help you try and get back there. Um, but it’s fascinating that you looked at the world and you said, like, okay, we’re not there yet, but I can see where this is going. And when we hit that tipping point, where technology actually is able to do what we want it to do in a much more sort of commoditized and public and ease-filled and accessible way, it’s going to be game on, right?

Nita A. Farahany: [00:03:26] That’s exactly right. And, you know, and I think because I’ve been watching it so long, to your point, like, I could see what was necessary for the tipping point, and then to see the technology finally come to fruition, um, to be like, wow, this is like all of the sci-fi kind of scenarios that I’ve played out are about to come true. Um, and, you know, the urgency of it then just became really clear to me.

Jonathan Fields: [00:03:51] So maybe let’s do a little bit of defining here also, because I’m sure we’ll have folks joining us who are kind of saying, what are you talking about?

Nita A. Farahany: [00:03:59] Like, of course.

Jonathan Fields: [00:04:00] What actually is this? And like we’ve used the phrase neurotechnology a couple of different times here. Um, if you’re explaining this to somebody who’s never met you before at dinner and, you know, how would you actually break down? How would you walk somebody through understanding? What is neurotechnology, the way you talk about it?

Nita A. Farahany: [00:04:17] Yeah. I mean, so the easiest entry point for people is at this point to say, like, how many of you are wearing a smart device, like a smart watch, like an Apple Watch or a smart ring or a Fitbit or any of these other devices that have a sensor in them that are tracking some aspect of your bodily functions? Right. It could be the number of steps you take per day. It could be your heart rate. Um, and, you know, at least half the people at your dinner table probably have at least one of those devices on, or, you know, have one in their life that they’ve acquired. Um, and neurotechnology, at least the way I’m talking about it, isn’t what Elon Musk is doing, although we can get into that. It’s really taking many of those same devices, whether it’s, you know, AirPods or a watch, and adding another sensor. And this is a sensor that tracks your brain activity. And, you know, it’s surprising that those aren’t already in our everyday lives, because people are so used to quantifying so many other aspects of their bodily functions. But all this is, is tracking the brain functions. And most people will say, well, how can you do that? And, you know, most of these are tracking, um, the electrical activity in your brain. So if you’re thinking, if you’re listening to this podcast right now, neurons are firing in your brain, giving off tiny electrical discharges. Now, that’s happening with hundreds of thousands of neurons at the same time, and they give off characteristic patterns that can now be decoded thanks to advances in AI. So you have these different sensors that can pick up that electrical activity. Then you have AI that can interpret what those signals mean, and they can interpret increasingly more, thanks to advances in AI and thanks to massive training of these models on this electrical activity. There are other sensors that are coming too, but the dominant form that most of these devices are integrating is EEG, or electroencephalography, sensors.
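To make the decoding idea concrete, here is a minimal, hypothetical Python sketch of the kind of signal processing Farahany is describing: a window of raw EEG samples is turned into a coarse brain-state label by comparing power in the classic frequency bands. The sampling rate, thresholds, and labels are illustrative assumptions, not any specific product’s algorithm.

```python
# Illustrative sketch only: a toy version of turning raw EEG voltages into a
# coarse "brain state" label. Device, channel count, and thresholds are hypothetical.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz) of a consumer EEG earbud or headband

def band_power(eeg: np.ndarray, low: float, high: float) -> float:
    """Average power of the signal in a frequency band, via Welch's method."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs < high)
    return float(np.trapz(psd[mask], freqs[mask]))

def classify_state(eeg_window: np.ndarray) -> str:
    """Very coarse state label from the alpha/beta/theta balance of one channel."""
    alpha = band_power(eeg_window, 8.0, 12.0)   # relaxed, eyes-closed rhythm
    beta = band_power(eeg_window, 13.0, 30.0)   # active concentration
    theta = band_power(eeg_window, 4.0, 8.0)    # drowsiness / meditation
    if alpha > beta and alpha > theta:
        return "relaxed"
    if beta > alpha:
        return "focused"
    return "drowsy"

if __name__ == "__main__":
    # Stand-in for a real device read: 10 s of noise plus a 10 Hz alpha-like rhythm.
    t = np.arange(0, 10, 1 / FS)
    fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    print(classify_state(fake_eeg))  # likely "relaxed", given the alpha component
```

Real consumer devices layer trained AI models on top of features like these, but the basic pipeline, sensors, features, then a classifier, is the same shape.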

Jonathan Fields: [00:06:13] My sense is that if we were having this conversation a decade ago, this is the type of stuff where you would need to actually have a cap on your head with, like, electrodes all over and wires coming out, and/or potentially going into fMRI or any of the other sort of, like, brain-oriented scans to get information. And generally it was only available either through scientific research, if you’re part of a study, or through medical diagnostics and treatment. Right. But now it’s just, it’s part of our consumer products. Yeah.

Nita A. Farahany: [00:06:44] So definitely, a decade ago, if you wanted any quality of signal, you would need to have a cap that had, you know, 100-some electrodes on it, and that was applied with gel to your scalp. And it would be a messy process, and you would just track a short period of time while you’re in a doctor’s office, or you’d go, to your point, into, like, a giant functional magnetic resonance imaging scanner, and you might be in there for an hour doing an activity, but you’d get a snapshot in time. Those are still going to give you far better signal than any of the consumer devices are, because you’re talking about the ability to look more deeply into the brain or to have, you know, more electrodes covering more of the scalp than these devices. But now, a decade later, you know, you can buy, from countless companies now, headphones that have, you know, sensors, maybe eight in each of the cups, the soft cups that go around your ear, or earbuds that might just have, you know, four electrodes in each ear and can detect through the ear what’s happening in the electrical activity in your brain, or pick up peripheral nervous system activity.

Nita A. Farahany: [00:07:49] That is, like, as the signals go from your brain, down your arm to your wrist, to pick up motor neuron activity at the muscular junctions, through something like a watch that has an EMG, electromyography, sensor in it. And these are just consumer-based devices. And the major players on the market, like Meta, who acquired that company, CTRL-labs, have started to market these products to, you know, do things like integrate with their Orion augmented reality glasses, or, you know, the Apple Vision Pro, which uses eye tracking to make inferences about brain and mental states. You know, they have a patent to put sensors, brain sensors, into their AirPods, and likewise to put brain sensors into the forehead band of their virtual reality devices to pick up that electrical activity. And so it’s no longer something that’s confined to the medical arena. And then, of course, the algorithms have gotten much, much better at being able to extract signal over the noisiness of the signal that it’s otherwise getting. So the quality has improved for what can actually be detected from brain activity.

Jonathan Fields: [00:08:53] I mean, it’s just incredible. You know, and it feels like AI, over the last couple of years, is probably what allows the interpretation of, like, whatever simplified data we’re getting from our devices, in a way that, you know, probably not too long ago we would have had really interesting data, but the AI is letting us actually see the utility in it. Like, how do we actually use this to live better and differently?

Nita A. Farahany: [00:09:14] That’s right. And much faster, I think, than anybody expected. You know, when ChatGPT first came out, I reached out to some of the leading neuroscientists and said, like, how are you integrating this already into, you know, language decoding? And they’re like, we’re figuring it out. You know, we don’t totally know yet. It was only a few months later that one of the researchers out of UT Austin published a paper applying GPT-1 to being able to decode information from functional magnetic resonance imaging scanning sessions, being able to do things that nobody would have expected, like continuous language, entire paragraphs of what people were thinking, being decoded from that thanks to advances in GPT-1. And when you listen to the conversations at Meta that they’ve done at Meta Connect, they talk about the power of being able to have, like, a large language model on a device. One of the limitations of shipping out these products to a mass market has been that everybody’s brain signal is a little bit different, and when you have a consumer product, when it comes right out of the box, it needs to work. And so what they’ve been able to do, thanks to generative AI and having these on-device large language models, is to have basic functionality work right out of the box. Like, you can use it to go up, down, left, right, but then it learns you, right, and co-evolves with you and gets better and better at decoding your brain activity by having an on-device decoder and classifier. That’s incredible, right? That wouldn’t have been possible five years ago. And the fact that now these devices can co-evolve to get better and better at decoding brain activity enables a mass-market product that couldn’t happen before these advances.
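To give a rough sense of what an out-of-the-box-then-personalizing decoder could look like, here is a small, hypothetical Python sketch: a generic linear classifier maps signal features to a few commands and takes a small learning step each time the wearer confirms what they intended. The features, labels, and update rule are illustrative assumptions, not Meta’s or any vendor’s actual implementation.

```python
# Illustrative sketch only: a decoder that works immediately with generic
# "factory" weights, then adapts on-device to one user's signals over time.
import numpy as np

class AdaptiveGestureDecoder:
    """Tiny linear classifier over wrist/ear signal features for up/down/left/right."""
    LABELS = ["up", "down", "left", "right"]

    def __init__(self, n_features: int = 16, learning_rate: float = 0.05):
        rng = np.random.default_rng(0)
        # Generic weights stand in for a population-trained starting model.
        self.weights = rng.normal(scale=0.1, size=(len(self.LABELS), n_features))
        self.lr = learning_rate

    def predict(self, features: np.ndarray) -> str:
        scores = self.weights @ features
        return self.LABELS[int(np.argmax(scores))]

    def adapt(self, features: np.ndarray, confirmed_label: str) -> None:
        """Personalize on-device each time the user confirms what they intended."""
        target = np.zeros(len(self.LABELS))
        target[self.LABELS.index(confirmed_label)] = 1.0
        scores = self.weights @ features
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # One softmax-regression gradient step toward the confirmed intent.
        self.weights += self.lr * np.outer(target - probs, features)

# Usage: the decoder guesses right away, then improves as the wearer keeps using it.
decoder = AdaptiveGestureDecoder()
sample = np.random.default_rng(1).normal(size=16)  # stand-in for real features
guess = decoder.predict(sample)
decoder.adapt(sample, confirmed_label="left")      # user corrects the guess
```

The point of the design is the one made above: basic control works generically, and accuracy climbs as the model quietly co-evolves with one person’s brain and body.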

Jonathan Fields: [00:10:57] Break down what you mean by decoding a little bit more here. You know, are we talking about literally, like, wearing a pair of glasses or a watch or earbuds or headphones that can pick up electrical signals in your brain and then literally start to say, oh, this is what you’re thinking, this is what you’re saying, this is what you’re... Or, like, maybe we’re not there yet, but like.

Nita A. Farahany: [00:11:16] No.

Jonathan Fields: [00:11:17] I mean, walk us through what decoding actually is.

Nita A. Farahany: [00:11:18] But, I mean, so, you know, what’s tricky is how much it can decode, right? And what it means, what you mean by thinking. So intentional thought or intentional communication is different than passive thought that you have in your brain. Like, if I say I want to send a text message and here’s the sentence I want to send, that’s intentional communication of speech in your brain. And the devices are getting much better at that, being able to pick up intentional communication of speech and literally what you’re thinking. Right. So if I want to send a text message and type on a virtual keyboard from what I’m thinking, these models already are starting to be able to do that with pretty amazing accuracy. And then, you know, just think about what a large language model does. It predicts the next word in a sequence. And so, you know, what am I most likely to say? Like, “open the...” you know, is it open the window, open the door, open the wine bottle? You can pick up context to figure out what it is that I’m most likely to say next. And so these models make decoding happen much faster and much more efficiently. You know, just a few years ago, before the models came onto the market, it was already possible to classify major brain states. So you talked about meditation. You know, Muse has been trying to classify, like, here’s roughly what this brain wave means when you’re, you know, alpha-wave dominant or beta-wave dominant; different patterns of electrical activity in the brain most likely mean you are meditating or you’re focusing or you’re happy or sad.

Nita A. Farahany: [00:12:45] So those big brain states, you know, are already possible to decode with a decent degree of accuracy using these consumer devices. But it’s the ability to decode intentional speech that has gotten remarkably better. It’s even possible to decode passive thinking, though. So that’s, I think, where things start to get a little bit scarier for people: the idea of, like, can it decode just a story that you’re imagining but that you don’t intend to communicate to another person? That’s what that first study I was telling you about, that came out of UT Austin, showed. You know, it was just stories that people were imagining, not what they were trying to actually communicate. And that, like, they could decode with really high degrees of accuracy using GPT-1. We’re already on, you know, GPT-4, whatever it is, you know, approaching five very soon, which has only gotten better at being able to do so. And so I think what we’re seeing is a move from what had been just general brain state decoding to being able to decode intentional speech, you know, emotions at a much more granular level, and then even just what you’re passively thinking rather than actively thinking. And I’ll just put in one more caveat, which is, you know, that that same researcher, I was having an interesting conversation with him about, like, is this mind reading yet? Um, and, you know, he...

Jonathan Fields: [00:14:14] Said, of course, that’s what I’m thinking right now.

Nita A. Farahany: [00:14:16] Of course. Right. I mean, I saw that, which is why, of course, I brought it up. I was able to decode what you were thinking. Right. So, you know, he said, well, it’s interesting, because it doesn’t, you know... like, it’s decoding what I’m imagining, but not what I feel about what I’m imagining. Like the story, right. And he said, like, if you really think about what mind reading is, it’s more complex than, like, one stream. It’s just, like, the images that you’re, you know, evoking in your mind at the same time as you’re thinking about a story, or the words of a story, and then it’s how you feel about it; like, there’s layers of thought in how you think about it. And I don’t think we’re at that yet, either. Like, I don’t even think that we have an adequate theory of mind to be able to decode all of the complexities of human thought. So it’s still sort of a narrow peering into the brain rather than a complete peering into the brain, but it’s a much more accurate peering into the brain than I think most people think or thought was possible.
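The next-word-prediction idea Farahany describes a moment earlier can be made concrete with a toy example: a noisy neural decoder’s word scores are combined with a language model’s probabilities for what plausibly follows “open the...”, and the fused score picks the winner. All of the numbers and word lists below are made up purely for illustration.

```python
# Illustrative sketch only: why a next-word language model speeds up and
# sharpens decoding. Both score tables are hypothetical stand-ins.
candidates = ["window", "door", "wine", "oven"]

# Scores from a (pretend) neural decoder: ambiguous on its own.
decoder_scores = {"window": 0.28, "door": 0.26, "wine": 0.24, "oven": 0.22}

# Probabilities from a (pretend) language model for the word after "open the ..."
lm_probs = {"window": 0.30, "door": 0.55, "wine": 0.10, "oven": 0.05}

def fuse(decoder_scores: dict, lm_probs: dict) -> str:
    """Combine evidence: posterior proportional to decoder score times LM prior."""
    fused = {w: decoder_scores[w] * lm_probs[w] for w in decoder_scores}
    total = sum(fused.values())
    ranked = sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
    for word, score in ranked:
        print(f"{word:>7}: {score / total:.2f}")
    return ranked[0][0]

best = fuse(decoder_scores, lm_probs)  # "door" wins once context is factored in
```

This is the same trick at toy scale: the brain signal alone is noisy, but a strong prior over likely language resolves much of the ambiguity.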

Jonathan Fields: [00:15:15] Yeah. I mean, it’s so wild. You know, if I understand it correctly, there’s sort of, like, the active thought process and the passive thought process and the active... I mean, you know, imagine if you’re wearing a pair of glasses where, you know, by your ears, you’ve got some sensors picking up what’s going on in your brain, but you’ve also, you know, like, like smart glasses, you’ve got, you know, some cameras on the front of the glasses and microphones to pick up the environment, right, so that it can use that to provide context. So it’s picking up what’s happening inside your brain, but then it’s got these external sensors, like vision and auditory, to pick up environmental cues, to help probably filter into an AI to then interpret and give context, like, what is most likely happening here, fill in the blanks. And I mean, that’s going to get more and more accurate as we go.

Nita A. Farahany: [00:16:04] And then I think you raise a great point, right? Which is, we should never really think about these in isolation, right? I mean, if I’m wearing a VR headset, it is packed with sensors. It’s got cameras that are on my face and on my eyes and on the external environment. You know, it has potentially heart rate sensors that are integrated with the Apple Watch that I’m using. It has EEG sensors. And all of that’s being used at the same time with complex algorithms to interpret what you’re thinking and feeling. And that gives a much more accurate picture, right? It’s where you move in the virtual environment. It’s the email that you brought up and how you’re responding to it. And the kind of contextual ability to take all of that biometric information and all of these other contextual clues means that the power and the accuracy of decoding what you’re thinking and feeling goes up exponentially.

Jonathan Fields: [00:16:52] Yeah. I mean, that’s where I was going with this, right? Because if you add a watch to that, if you add a wearable, now you’ve got, like, galvanic skin sensors and temperature and heart rate and, you know, pulse ox and these things. Those are all, like, indicators of emotional states. Now, maybe it’s not going to be able to distinguish, are you anxious or excited, because a lot of times... I guess maybe, like, it’ll tease it out, and maybe it can actually overlay that with what you’re thinking to actually then determine, is this physiological response more likely to be anxiety or excitement? Um, it’s wild.

Nita A. Farahany: [00:17:30] Mhm. Yeah. But, I mean, more importantly, it’s not even just, are you anxious or excited, it’s what are you anxious or excited about. Right. And, um, and then, how... like, what are you envisioning and what are you visualizing at the same time? And, um, you know, and then, you know, even more provocatively, if whoever or whatever entity is monitoring the fact that you’re anxious or excited, is it possible to change how you feel, rather than to just, you know, allow you to continue to be anxious or excited?
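As a rough sketch of the sensor fusion being discussed, the hypothetical Python below combines peripheral arousal signals (heart rate, skin conductance) with a brain-derived valence cue and situational context to separate “anxious” from “excited.” Every feature name and threshold here is an assumption for illustration, not a validated model.

```python
# Illustrative sketch only: multi-sensor fusion where peripheral signals say
# "high arousal" and brain/context features break the anxious-vs-excited tie.
from dataclasses import dataclass

@dataclass
class Readings:
    heart_rate_bpm: float        # from a watch
    skin_conductance_uS: float   # galvanic skin response, microsiemens
    frontal_asymmetry: float     # hypothetical EEG feature; >0 treated as positive valence
    context: str                 # e.g., "roller_coaster", "work_email"

def infer_emotion(r: Readings) -> str:
    aroused = r.heart_rate_bpm > 100 or r.skin_conductance_uS > 8.0
    if not aroused:
        return "calm"
    # Arousal alone is ambiguous; valence cues decide between anxiety and excitement.
    positive_context = r.context in {"roller_coaster", "concert", "game"}
    if r.frontal_asymmetry > 0 or positive_context:
        return "excited"
    return "anxious"

print(infer_emotion(Readings(118, 9.2, -0.4, "work_email")))     # anxious
print(infer_emotion(Readings(118, 9.2, 0.6, "roller_coaster")))  # excited
```

The design choice mirrors the conversation: the watch supplies arousal, the brain and context supply valence, and only the combination yields a plausible emotional label.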

Jonathan Fields: [00:18:02] And we’ll be right back after a word from our sponsors. I also want to explore, you know, what are the big benefits of being able to do this on a personal level? On a societal level, you know, like what comes to mind immediately is, you know, is there are there benefits on a mental health level of being able to actually harness these tools?

Nita A. Farahany: [00:18:22] Well, I mean, to your point, you’ve tried every device, just like I’ve tried every device with respect to migraine, right? And if you are a chronic migraine sufferer, you understand that it is debilitating, it is painful, and it’s frustrating, because, you know, every treatment is inadequate in some way and has some side effect to it. And if there’s just some magical thing that I could do to just, you know, stimulate my brain and make the pain go away or to stop migraine in its tracks, I absolutely would. That’s true for a lot of mental health conditions and for neurological conditions. And it’s just an incredibly undertreated area and poorly understood area. So what’s happening, you know, in the space of neurological disease and suffering, um, up until now, has been incredibly poorly characterized and undertreated. So, you know, consider the fact that, like, already, for people with epileptic seizures, um, especially those who are treatment-resistant, one of the big breakthroughs of monitoring for brain activity has been the ability to use algorithms to predict minutes to up to an hour before a person suffers from an epileptic seizure. And for somebody who’s treatment-resistant, they could take just-in-time medication or get a just-in-time alert to be in a place of safety. That’s, you know, a game changer.

Nita A. Farahany: [00:19:37] I would like to have that same kind of notification of when I’m going to have a migraine and be able to, you know, kind of adjust my life accordingly. I don’t know if you get aura or visual disturbances, but I do, and it can be frightening while you’re driving suddenly to have, you know, stars running across your visual space. Likewise, you know, in areas like Alzheimer’s disease and Parkinson’s disease, it’s possible to diagnose a lot earlier using neurotechnology. One of the really exciting studies that I saw looked at glioblastoma, which is, you know, one of the most frightening brain cancers for people when they are diagnosed with it, because it’s such a pervasive kind of brain cancer; by the time it’s been diagnosed, it’s almost always, you know, spread throughout the brain, and the tangles of it make it incredibly difficult to fully resect. And so it’s a really lethal diagnosis for many people. There are early, early electrical changes that happen in the brain that are possible to detect and track with continuous use. And so, just like people are tracking heart rate and seeing if they, you know, are suffering from abnormal, like, arrhythmias or, you know, having an atrial fibrillation or something, and then using that for information, being able to track your brain health in the same way, whether that’s, you know, the earliest electrical changes, be it, you know, suggesting something like glioblastoma, or disturbances in sleep that need to be addressed.

Nita A. Farahany: [00:21:02] And sleep is so important to mental health and wellbeing and, you know, staving off all kinds of diseases. To even things like... we live in such a distracted world now, right? Like, our technology is designed to make it so that we can’t have sustained attention. Being able to use technology to see that, to visualize it, and then to be able to use it to try to improve your focus and attention, you know, there’s a lot of promise here. So I’d say it goes from the most serious conditions for neurological disease and suffering, or, you know, with implanted neurotechnology, the reports of, you know, one patient who had their, like, brain and spine reconnected through a device, or, you know, patients who have lost the ability to communicate verbally being able to, with implanted neurotechnology, do so, and in the future, with wearable neurotechnology, do so. I’m really bullish. You can tell there’s so many promising health cases and so many promising just wellbeing cases for it. Um, so for me, I’m, you know, net optimistic, if we can get it right.

Jonathan Fields: [00:22:09] What about on the mental health side? You know, because what I’m wondering is, you know, we have such an epidemic of depression and anxiety, um, and it’s still just so poorly treated and understood. I mean, there are great advances, there’s amazing things happening. But for sure, you know, the prevalence of this and the misery that it causes, the level of suffering that it causes, is so pervasive and so deep now. Um, and I’m wondering whether you see any sort of, like, form of neurotech out there that might help in that context.

Nita A. Farahany: [00:22:42] Yeah. So I know less on the anxiety front for really good neurotech devices. Um, I know more on the anxiety side on basic biofeedback. Like, there’s some really cool platforms like Mightier, which are designed for kids to, um, you know, follow heart rate and to be able to see when their heart rate is getting elevated, and then learn ways, in playing the game, to be able to bring their heart rate down, and techniques for being able to decrease the anxiety. I suspect that those platforms will get better and better with neurofeedback. But on the depression side, I know more advances that have been really promising in this regard. So for both, and for any mental health issue, I’d say again, they’re poorly characterized. We, you know, we group a bunch of things together symptomatically, even though the underlying neural mechanisms might be quite different. And the more data that we have of people using everyday devices in everyday settings, rather than a snapshot in a doctor’s office, will give us the ability to learn a lot more. But there’s, you know, two examples already in depression where treatment has improved thanks to neurotechnology. One of them is there was a company that was using neural stimulation, and they’d been using it for performance enhancement for a while and didn’t have a big market in it. They ended up partnering with a company that has since run it through clinical trials and is marketing it primarily in Europe, called Flow Neuroscience, where it’s been shown to substantially improve depression symptoms by this neuromodulation, which is, I think, pretty cool. And then the second is there’s a company that already has an FDA-approved device for Parkinson’s and for essential tremor.

Nita A. Farahany: [00:24:28] What they do is they track neural signals as they go from the brain down the arm to the wrist. And if you think about tremor, where, like, oftentimes it’s one-sided and they don’t have a lot of other symptoms yet, in early, you know, Parkinson’s or an essential tremor, it’s just a one-sided tremor. It picks up that neural signature, and then it sends back an inhibitory response and stops the tremor. And that same company has been investigating, for depression and other mental health conditions, whether they can decode, again, the precise neural patterns and then send feedback that would interfere with that. We’ve seen in implanted neurotechnology that being truly transformative for patients who suffer from depression, especially intractable depression, and to be able to do it with a wearable device, I think, is incredibly exciting. So they have some of those in clinical trials right now for mental health conditions, including depression. And I’m really excited about that, because, you know, with essential tremor, part of the reason that drugs don’t work is because they’re incredibly nonspecific. It’s like bathing the entire brain to try to get at one problem, right? Whereas you can pick up a precise neural pattern and send a response that interferes with it. And with depression and with other mental health conditions, the more you can precisely track what’s happening and then more precisely target it, rather than bathing the entire brain with a whole bunch of side effects... you know, these are really promising and exciting advances for those areas.
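The closed-loop pattern she describes, detect a signature, then respond to it, can be sketched in a few lines. This is a generic, hypothetical illustration (the signal source, frequency band, and threshold are assumed), not the algorithm of any particular company’s device.

```python
# Illustrative sketch only: the general shape of a closed-loop wearable that
# watches for a tremor-like signature and fires a counteracting stimulation.
import numpy as np
from scipy.signal import welch

FS = 500  # assumed sample rate of a wrist sensor (Hz)

def tremor_power(signal: np.ndarray) -> float:
    """Power in the 4-12 Hz band where pathological tremor typically sits."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS)
    band = (freqs >= 4) & (freqs <= 12)
    return float(np.trapz(psd[band], freqs[band]))

def closed_loop_step(signal: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the device should deliver an inhibitory/counter stimulation."""
    return tremor_power(signal) > threshold

# One simulated second of wrist signal with an 8 Hz tremor component.
t = np.arange(0, 1, 1 / FS)
window = 1.5 * np.sin(2 * np.pi * 8 * t) + 0.2 * np.random.randn(t.size)
if closed_loop_step(window):
    print("tremor signature detected -> deliver stimulation pulse")
```

The contrast with drugs is the point made above: instead of “bathing the entire brain,” a closed loop only acts when, and on what, the detector actually sees.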

Jonathan Fields: [00:25:57] Yeah, I mean, it sounds incredible. I’m curious, would you consider something like TMS to fall under this umbrella? I know there was, like, the first wave of it, and then Stanford, I guess a couple of years ago, came out with a newer protocol, a different approach to this, where you can literally use electromagnetic stimulation, um, to have some pretty profound effects on depression.

Nita A. Farahany: [00:26:18] Yeah, absolutely. I don’t know if you ever tried the TMS for migraine. I did. So they had this, like, big helmet you could get at home, and you’d get, like, a certain number of pulses that you could use. And it was really cumbersome. I was unable to do it. So yes to TMS. You know, I classify under neurotechnology really any device that, like, interfaces directly with the central or peripheral nervous system. It’s a very broad set of technologies, but TMS, transcranial direct current stimulation, um, you know, ECT, the electric shock kind of treatments, um, those are all, in many ways, first-generation technologies in this space that are getting better and better as we study and understand... I shouldn’t say we, I’m not one of the neuroscientists, I’m studying the neuroscientists... but as neuroscientists really study and understand, um, what the effects of those are and then make them increasingly more precise. Because, you know, one of the things that was exciting about transcranial magnetic stimulation is you could look into the brain and figure out a, you know, specific spot and then direct the pulse to it. But for the most part, that had to happen in, like, a clinic. And, you know, they’re not, like, sustained treatments; you know, every time you’re experiencing it, you go in and, you know, get a pulse that happens. Um, and so, you know, the promise of a lot of these neurotechnologies is being able to move from the clinic, from the hospital, from the doctor’s office into everyday settings, to have portable, wearable technology that offers much more precise and targeted treatment.

Jonathan Fields: [00:27:57] Yeah. I mean, it’s so exciting. Um, one of the things that’s popping into my head around this also, though, is... so here’s the analogy. Um, over the last couple of years, whole-body MRIs have become sort of, like, this topic of conversation. Anyone can show up, you don’t need a script from your doctor, you pay a couple thousand dollars, and you, you know, spend an hour in an MRI and they’ll give you this report. And what a lot of people are doing with them is they’re trying to actually see if they can pick up really early-stage cancer in their bodies. And there is a contingent in the medical world that is pushing back aggressively against this, saying you’re going to get a whole bunch of false positives, and a lot of what may get picked up is never going to go anywhere, nothing’s going to happen to it, so you’re going to start to flood the medical system, which is already overwhelmed. Now, people are using the technology in incredible settings, and some of them will detect things, which is incredible, and they can stop something before it actually becomes really harmful. But the concern is, um, if this becomes something that happens at scale, are you going to start to flood the medical provider system with all these people inquiring into it and running, you know, like, a metric ton of additional tests and stuff like this, um, that end up actually not being necessary or useful? So I’m curious whether you see that potential with what we’re talking about in neurotech.

Nita A. Farahany: [00:29:26] Yeah. You know, it’s funny. Doctors seem to hate every step toward personalized medicine. Like, where, you know, you have more autonomous patients.

Jonathan Fields: [00:29:35] By the way, that wasn’t me making that argument, because I don’t necessarily agree with it, but it’s an argument that I’ve heard a number of times.

Nita A. Farahany: [00:29:41] Yeah, I hear it all the time. Right. And I’ve heard it in kind of every different context. So 23andMe, when they launched the direct-to-consumer genetic testing, you know, doctors were like, now everybody’s going to come in, we don’t have any idea how to interpret any of this, and they’re all going to be convinced that they have a predisposition for X, Y, and Z, and it’s going to flood the medical system. Well, I mean, it is true that patients would go in with their, you know, 23andMe reports, and the doctor would say, I have no idea how to interpret that, and I’m not going to order a bunch of tests for you. Um, but one of the early arguments in that space was that, like, women were going to come in asking for preemptive double mastectomies based on what their 23andMe report showed. That didn’t happen. But, um, you know, if it was a well-informed patient who went in and said, hey, this showed that I have the breast cancer genes, my mom and my grandmother died of breast cancer, I’d like to run the test because we haven’t tested, you know, with medical-grade testing, and if it’s positive, I want to have a double mastectomy... that’s okay. I mean, I actually think that’s a good, informed patient. Yes, it’s an increased number of patients, but it’s essentially an increased number of patients who are saved, as opposed to just an increased number of patients. Um, and, you know, that hasn’t happened. In each of these other areas, the same arguments were being made, about the Apple Watch.

Nita A. Farahany: [00:30:56] Doctors were opposed to, you know, giving patients the capability of having the, yeah, the EKGs on the watches. And, you know, there’s going to be abnormal heart rhythms and all kinds of patients who are flooding the system. No. But, you know, there is earlier detection and more detection of heart disease. And there is, you know, a very low false-positive rate, but there is some, like, there are some false positives. And yet there are also positive cases that it’s catching, and it’s saving lives. And so, you know, I think the medical system has to be open to a changing world of technology, where patients are more empowered and getting more and more information, where they can make decisions about their own lives, and where they might be more in the driver’s seat of those decisions. The MRI case is a tricky one. And the reason it’s a tricky one is we don’t have routine scanning data for patients to know what those scans mean. Right. And so it’s the validation that’s the problem, which is, um, you know, being able to interpret a scan when there is no baseline set of healthy individuals who have MRI scans to know, like, what’s the rate at which these things progress, if they progress at all? Does that finding actually mean anything clinically significant? Um, and so, you know, it’s harder when you have unvalidated studies like those and we don’t have a basis to be able to study them. What I hope is that a lot of those early MRI scans will become part of a data set so that we...

Jonathan Fields: [00:32:26] Can track.

Nita A. Farahany: [00:32:27] Them over time, and that we more systematically start to, like, you know... you’ve got to compare the different machines that they’re on, they’ve got to be capturing the data in a way that could actually allow for longitudinal studies of them, so that in time they’ll become useful. But, you know, like, I haven’t rushed out to get one of those MRIs, largely because we just don’t know what the early scans mean in a lot of these instances. And, you know, my blood tests are fine; all the other early indicators and scans that you might do are fine. But my sister did. She went out and got the full-body MRI. And, you know, she and her partner both did it and are making lifestyle choices based on what they found.

Jonathan Fields: [00:33:08] So yeah, I mean, it’s such a fascinating moment. I actually know somebody who did it. You know, they thought they were completely healthy and nothing was going on, and they detected stage one, or even less than stage one, pancreatic cancer, and had a very fast and easy treatment, and was basically told, you’re good. And you realize that, you know, especially with cancer, so often it’s not the fact that it’s there, it’s the stage at which it’s caught.

Nita A. Farahany: [00:33:33] That’s right. And that’s why my sister did it. She was like, okay, maybe there will be a bunch of, um, you know, kind of findings of unknown significance. But if there is a significant finding that we know of and we see it, I’d rather have the scan and be able to address it. I think that’s a fair point. I think, you know, doctors are worried more about the, like, findings of unknown significance rather than the findings of known significance on those scans.

Jonathan Fields: [00:33:56] Something clear? Yeah.

Nita A. Farahany: [00:33:57] Yeah.

Jonathan Fields: [00:33:58] And we’ll be right back after a word from our sponsors. So if we think about neurotech, then, like, in all these different use cases and technologies we’re talking about... it feels like we can also split this into, like... so on the one side, there’s the sensing element of it. And then for some, it seems like there’s starting to be also sort of, like, an intervention side of it too. It’s like, so first we pick up something that’s like, ooh, this is a signal that something is a little bit off here. And then we have some technology, maybe it’s the same, maybe it’s a different tech, that can then influence the connection between the brain and the body to actually in some way intervene or help, um, with whatever it is that you’re moving through. Um, as we start to zoom the lens out a bit, you know, all of these technologies, and this is something you speak and write about regularly, um, they tend to not be the type of technologies where you buy it and then everything that gets sensed and intervened on just stays with you, right?

Nita A. Farahany: [00:34:59] Yeah.

Jonathan Fields: [00:35:00] And this is where we start to get into really murky waters. So take me into this.

Nita A. Farahany: [00:35:05] Yeah. So, I mean, you know, as we’ve talked through, there are huge benefits to individual access to this technology, or even sharing the data with your doctor. Um, you know, the upside potential is really quite enormous. But, you know, in order to capture data about what’s happening in my brain, I have to have that data go from my brain somewhere else, right? That is, something has to detect it, and then it has to communicate with something like my iPhone and an app that’s on my iPhone. And then from there, the big question is what happens next, right? Does a company suddenly have access to that data? Does my employer, if it’s a work-issued, you know, phone, have access to brain data? Does the government, in the same way that they have subpoenaed all kinds of, you know, personal data from phones and from Apple Watches and other devices, suddenly have the right, in a criminal case or in a civil case, to, you know, be able to get access to that data and to be able to use it against me in different settings? So, you know, what we’ve seen in the digital era is that all of these different technologies, which we, you know, purportedly receive for free or for really subsidized costs, aren’t free to us, that the product is us, and that the collection of the data and the use of that data, and the reselling of that data, or the use of that data to steer our behavior or to keep us addicted to devices, is primarily how the companies have monetized, you know, the products that they’re selling to us.

Nita A. Farahany: [00:36:35] And brain data, I believe, and many others do too, is uniquely sensitive. It’s the kind of behavioral information, it’s the information we haven’t shared. We haven’t hit a like button, but our brain lights up like a like button when we see something. And for companies and employers and governments to have access to that information, and the closed-loop environment that has been created, where not only do they have access to that information, but they can use it to shape the environment that we’re interacting with... it can go dystopian incredibly quickly. And so, you know, what I really have been writing about and speaking about is, this is technology that we shouldn’t be trying to ban, that’s not the answer, but that we have to steer in a way where we’re not afraid of the misuse of this, where it doesn’t become the most Orwellian technology we’ve ever introduced to society, where we get the upside potential and we mitigate against the downside risk by putting into place safeguards now, before it becomes in every pair of headphones you wear, in every AirPod and every watch and every VR and AR device that you don.

Jonathan Fields: [00:37:43] Yeah, I mean, it’s interesting also, right? Because I would imagine we’ll get to a point fairly soon where a lot of the devices that we buy just for our own convenience have this technology in them. Yeah, we didn’t buy it because of that. We may not be aware of the fact that it actually has this capability in it, because we’re not really using that for our own benefit, right? But the sensors are there. You know, it’s detecting and decoding, potentially, in the background. And the question is, like, are we okay with that? And, you know, even if we have no interest in that information, we’re not using it in a meaningful way... if that’s happening, and then that’s being passed outside of our immediate ecosystem or device, you know, like, are we aware of that? Are we okay with it? And I’m not aware of a lot of conversation happening around this.

Nita A. Farahany: [00:38:34] Yeah. No, I mean, I think that’s right. Now, I think, you know, many of the companies will start by marketing much more directly, to say, like, it has these capabilities and that’s why you’re buying it. But if the next generation of AirPods is packed with sensors, and some of those sensors are brain-sensing sensors, and you can either choose to interact with that on your health app or not, it’s still collecting the data all the same. Or, you know, you buy a virtual reality headset, and the way that you navigate, you know, through the game is by thinking about it, but you’re not thinking about what that actually means about how leaky your brain is now with respect to the data that can be gathered about it. You know, should we have much more explicit consent? Should we have much more explicit notice to people? And should there be different ways that different categories of data are treated? Right? You might be fine. And a lot of my students, when I ask them how they feel about personalized algorithms that, you know, figure out what they like, for the most part, they seem to be fine with it. They’re like, okay, you have to collect a huge amount of data from me, but you’re giving me products that I actually like, and I’m seeing fewer that I don’t? Great. My feed is more, you know, specialized to me. I’m okay with that. Um, it’s what that data enables in ways that are frightening or problematic. And, you know, some of the examples I go into in the book, like, in China, you know, there’s already educational classrooms where the students are required to wear brain-sensing headsets that track their focus and attention.

Nita A. Farahany: [00:40:06] That data is given to the teachers, it’s given to the parents, it’s given to the state, and they’ve been punished for what their brain metrics reveal. And just, you know, imagine being in an autocratic regime where you’re a child and your brain activity is being monitored, you know, the entire day that you’re in the classroom. What does that do to your ability to think freely, to, you know, develop and grow in the way that a child should be able to develop and grow? Or in the workplace, where, you know, people are increasingly used to surveillance of, you know, their productivity tools on their laptops, but suddenly your brain-sensing earbuds are also something that your employer has access to, and the kind of informational asymmetry that creates, or the increased pressure that creates to stay, you know, focused and paying attention the entire day, even if that’s not the best thing for you or for the bottom line. Or they start to see a decline in mental health, and what do they do with that information? Or if health insurance companies have access to that information, your car insurance company has access to the information, right? Like, it’s not hard to start to see how this becomes dystopian pretty quickly if there aren’t any limitations on the kinds of data each of these different entities can have access to. And nothing really prevents these companies from selling your data to all of those third parties. You know, there’s a couple of states in the US that have started to move in the direction of adopting laws that protect neural data, and there’s some international treaties and accords that are underway. But, you know, it’s largely the Wild West when it comes to brain data.

Jonathan Fields: [00:41:41] Again, it’s exciting and terrifying simultaneously. You know, it will enable so much, um, and at the same time, it just opens so many questions. You know, I remember hearing some data about, you know, the types of thoughts that a typical, just normal, everyday person, well-adjusted, great life... and yet that typical person is also going to have some pretty warped and pretty dark thoughts, like, here and there, and sometimes more here and there. They’ll never do anything about it, they’ll never act on it, you know. And they may actually feel like, well, that’s kind of weird and dark, not realizing that actually the vast majority of people have those same types of thoughts, you know, like, that pass, and will never do anything about it. It’s just sort of, like, part of the human condition, you know? But if those thoughts register, you know, like, if you’re hanging out wearing your smart glasses and just walking around, those thoughts register, and then that information can get passed on to other people... like, does that then raise red flags with, you know, potential partners, potential employers, potential... like, somebody who’s, you know, considering you as a student at a university, and they look at your, you know, your application, and then, like, they have your neural data that gets passed to them also, to try and make sure that we have a safe class? On the one hand, yes, we all want safety, and we want ease, we want the best. But at the same time, you know, if you start to just judge people on what are, quote, aberrant but everyday and very common thoughts that happen in the brain, it just, yeah, it gets really spooky.

Nita A. Farahany: [00:43:21] Yeah. I mean, you know, you raise so many interesting points within that, right? From the misclassification of neuro-atypicality, right? I mean, all of us have thoughts that, you know... like, a good example I like to give: every now and then, you know, I just think, like, okay, I’m going to strangle my husband, right? I don’t really think that, right? I’m never going to actually act on that. But I don’t always have, you know, just kind and lovely thoughts about my husband every single day and every single second of every day. And you can just imagine for yourself the thoughts that you have. You know, somebody walks by, you have an unkind thought, like, whatever it is. We’re not always proud of every thought that pops into our head, but it’s our actions that we want to be judged on, not every thought that pops into our heads. And yet we could quickly get to this world where we’re judging people based on their thoughts. And that’s not so unbelievable, right? If you already look, like, I talk about this in my chapter on your brain at work, you know, there’s already companies that are doing personality and cognitive and neural testing based on neuroscience for screening and for hiring candidates in the workplace. And, you know, the theories that they’re built on are all based on trying to, you know, typify how your brain works. And you know that, together with a device that’s put onto your head, or eye-tracking data that’s making inferences about what’s happening in your brain as you’re answering those questions...

Nita A. Farahany: [00:44:42] This is all within the realm of what’s already happening. And so, you know, given how afraid we all are... I’m a parent, you know, I worry every day when my kids go to school with all of the school shootings. Is it really so far-fetched for us to think that there are going to start to be increased screenings of people for safety, and that we start to label people in particular ways? There’s a researcher by the name of Kent Kiehl who for a long time has been studying psychopaths and has been characterizing their brain activity, and came out with some really controversial findings a few years back, where he showed that the differences he sees in brain scans of psychopaths who are in jail, he can start to see those differences as early as five years old. Now, he hasn’t followed those five-year-olds to adulthood to see if they end up in prison. But, you know, it’s not hard to imagine people taking that kind of research and applying it to say, okay, we have an incredibly competitive and selective process for this private school. You know, you have to submit this kind of data for us to see: are you a danger to our classroom community?

Jonathan Fields: [00:45:47] Or are you the, quote, type of student that would thrive in this environment? Which brings in all sorts...

Nita A. Farahany: [00:45:53] Of opportunity for...

Jonathan Fields: [00:45:54] Bias and all sorts of other stuff? Yeah, well...

Nita A. Farahany: [00:45:56] That’s the thing, right? It’s the coded bias that can happen in this space, right? It’s like, nobody’s going to tell you that the reason that they didn’t bring you in or hire you, or that they fired you, is because of what your brain metrics showed. Instead, they’re going to say you weren’t the right fit, you weren’t being as productive as you should be, we’re just having structured layoffs... like, whatever it is, even though they’re using this data to make those kinds of decisions.

Jonathan Fields: [00:46:19] And then, on the other side, you know, there may be some real value for somebody to actually be using data like this, um, for both people. Like, maybe you actually, like, figure out, like, we actually legitimately are not the type of fit for the type of opportunity that this is, just, like, the way that your brain functions.

Nita A. Farahany: [00:46:35] Or, optimistically, like, there’s all kinds of brain wellness programs that companies have implemented. So instead of using it to penalize people or to invade their mental privacy, you actually use it to provide more services that help people who are struggling with stress or struggling with mental health, um, to be able to have the resources they need to get treatment and to be able to, you know, regain the kind of self-determination of their lives that they might want.

Jonathan Fields: [00:47:00] Yeah. I mean, maybe, somebody like you’re able to actually pick up, you know, you have some young employee there who’s just pushing and working nonstop, and they’re being told, like, don’t do this, like, we don’t expect this of you. But then the script in their brain is, like, this is how I get ahead, right? And you can start to actually detect, maybe, like, this person is tipping towards mental illness or burnout or something like this, and we need to intervene, because we need to help them because they’re not helping themselves.

Nita A. Farahany: [00:47:26] Well, I mean, in fact, I get into some of those use cases in the book, where I talk about cognitive load and overload. Already, you know, the future of a more positive workplace could be one in which you can actually detect cognitive overload, which has been attributed to not just stress and mental health, you know, concerns for the individual, but also safety, right? Their own safety and the safety of the others that they’re working around. And to be able to say, like, oh, you are reaching cognitive overload. There’s a new company that’s just launched headphones that have EEG sensors, and one of the features that they’ve enabled in the app is to give you a signal when you’re reaching, like, a level of overload, when it’s time for you to take a brain break, both to be able to recover, but to get to levels of optimal productivity and to, you know, the right balance of stress versus, you know, kind of drive versus burnout. And, you know, I think those are really positive possible use cases of being able to give you the feedback that you need, that you’re not getting internally from yourself, about, hey, it’s time for a break. This is the best thing for you to actually achieve the goals that you want to achieve.
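A “time for a brain break” feature of the sort described might, at its simplest, watch a smoothed cognitive-load index derived from EEG band powers and alert only when it stays high. The index formula below (beta relative to alpha plus theta) is a common heuristic in the literature; the smoothing window, input values, and threshold are assumptions for illustration, not the algorithm of any shipping product.

```python
# Illustrative sketch only: a threshold alert over a smoothed EEG load index.
from collections import deque

def load_index(alpha: float, beta: float, theta: float) -> float:
    """Higher when fast (beta) activity dominates the slower rhythms."""
    return beta / (alpha + theta + 1e-9)

def should_take_break(history: deque, threshold: float = 1.4) -> bool:
    """Alert only when the smoothed index stays high, not on a single spike."""
    return len(history) == history.maxlen and sum(history) / len(history) > threshold

recent = deque(maxlen=5)  # rolling window of per-minute index values
for alpha, beta, theta in [(4, 7, 3), (3, 8, 2), (3, 9, 2), (2, 9, 2), (2, 10, 1)]:
    recent.append(load_index(alpha, beta, theta))
    if should_take_break(recent):
        print("Sustained cognitive load detected: time for a brain break.")
```

The smoothing is the design point: the goal is feedback on a sustained state the wearer can act on, not a nag on every momentary spike.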

Jonathan Fields: [00:48:31] So where do we go with all of this? You know... there’s a really clear acceleration in the technology and what it’s able to do. Um, there are a lot of really interesting and fascinating benefits, maybe even life-saving or life-changing benefits. There are some real big concerns about what happens with all of the data that comes out of these. Um, what’s the way forward right now? Like, if you look at the next three, five years, what would you want to see happen?

Nita A. Farahany: [00:48:59] So first and foremost, I think we have to put into place a recognition of basic rights for individuals to flip the narrative to empower people. And those are rights around what I call cognitive liberty, the right to self-determination over your brain and mental experiences. And from a human rights perspective, that looks like trying to secure to people a right to privacy, which would include a right to mental privacy, a right to self-determination, which gives you both a right to access and change your brain if you should choose to do so, and a right to freedom of thought to protect you against interception, manipulation and punishment of your thoughts. On the other side, we have to shore up the capacities for cognitive liberty, you know, being able to help people navigate an increasingly noisy world. So these are things like really starting to develop our interoceptive capabilities, our mental agility, and our relational intelligence so that we can navigate this world and be able to use the freedoms that are being protected by cognitive liberty, by having the capacities that we need to be able to navigate the world, to be able to think critically, to be able to think freely, and to engage with technologies in ways that are intentional and productive for us. So, you know, that looks like educational changes that we need to put into place. It looks like practices in our everyday lives to be able to create this better mind body connection that we’re losing because of the way technology is being developed and designed. And so I think there’s a lot that needs to happen at the individual level, and there’s a lot that needs to happen at the societal level to bridge the gap of where we are right now versus where we need to go.

Jonathan Fields: [00:50:28] Yeah. Are you bullish on the policy level?

Nita A. Farahany: [00:50:33] Not at this moment in time. You know, I think that, um, you know, I think the US is going into a period where what we’re going to see is, um, you know, an experiment with much more laissez-faire engagement with technology. Um, and, you know, what we see as early indicators of how technology companies are reacting to that is really to push all of the obligations to individuals rather than providing any protections to them. And, you know, companies up until now have been monetizing all of our data and have done so without any kind of oversight. The EU is moving in a very different direction of trying to put into place much stricter safeguards, and that could end up serving as a floor, where companies have to, um, you know, adopt certain protections to be able to not have to navigate different markets incredibly differently. Um, but, you know, what I see is kind of an unrestricted race by technology companies, without a lot of intervention by governments to try to put into place the right sticks or the right incentives to realign that technology with human flourishing. So I’m not that bullish in the exact moment that we’re talking. But, you know, I also feel like if I don’t maintain optimism, it’s hard for me to do the work that I do, which is to continue to advocate for the changes we need to put into place. The one thing that gives me a sliver of optimism is that the few states and countries that have adopted specific provisions, like specific protections around brain and mental experiences, have done so in a bipartisan fashion. They seem to recognize the exceptional nature of being able to peer into and to change our brains. And so it gives me some optimism that at least in this one space, there is a concern. Like, imagine being a politician and being able to have all of your thoughts read; that would be really bad for them, right? So they sort of get that there needs to be some protections in the space. And so that gives me a good sliver of hope to keep working on.

Jonathan Fields: [00:52:35] I mean, that makes so much sense. You know, it’s like, if you see this technology and you’re like, this would help me do X, Y, or perform at this level, or, like, just live so much better, more comfortably... um, but it also potentially exposes me, but I really want the benefit of that... then there will be an incentive to say, like, yeah, I want to be able to access this, but I also need to be able to protect against the dark side, the downside here. Um, it’s just such a fascinating moment.

Nita A. Farahany: [00:53:04] Yeah.

Jonathan Fields: [00:53:06] Um, it feels like a good place for us to come full circle in our conversation as well. Um, so in this container of Good Life Project, if I offer up the phrase “to live a good life,” what comes up?

Nita A. Farahany: [00:53:17] It’s to live a life of purpose and meaning. Um, and I think, you know, increasingly, as I write in this space, I find that, um, it’s not a destination, right? It’s about the journey itself. And the more we can do to enable people on that journey to do so freely, to be able to do so with intentionality and to be able to do so with their full capacities, I think the more likely that we can live a good life individually and collectively.

Jonathan Fields: [00:53:49] Mm. Thank you. Before you leave, if you love this episode, safe bet, you’ll also love the conversation we had with Adam Grant about rethinking things. You’ll find a link to that episode in the show notes. This episode of Good Life Project was produced by executive producers Lindsey Fox and me, Jonathan Fields. Editing help by Troy Young. Kristoffer Carter crafted our theme music and special thanks to Shelley Adelle Bliss for her research on this episode. And of course, if you haven’t already done so, please go ahead and follow Good Life Project in your favorite listening app or on YouTube too. If you found this conversation interesting or valuable and inspiring, chances are you did, because you’re still listening here. Do me a personal favor. A seven-second favor. Share it with just one person. I mean, if you want to share it with more, that’s awesome too. But just one person, even then, invite them to talk with you about what you’ve both discovered to reconnect and explore ideas that really matter. Because that’s how we all come alive together. Until next time, I’m Jonathan Fields signing off for Good Life Project.
