Thoughts, Hacked: 20 Questions We Need to Ask Before Brain-Computer Interfaces Go Mainstream
Brain-Computer Interfaces (BCIs) are advancing from labs to startups, but their ethical roadmap is foggy. Let's crowdsource the right questions.
Quick Download ✨
This article was inspired by a recent quote from Xu Minpeng, a researcher at Tianjin University (TJU), who predicts that BCIs could eventually evolve into commercial, wearable devices serving as assistive technology in daily life (think Apple Watch, but integrated with your brain).
Brain-Computer Interfaces: Not Just Sci-Fi Anymore
This isn't speculation.
Brain-Computer Interfaces (BCIs) are evolving fast, faster than most people realize.
You've probably heard of Elon Musk's Neuralink, which made headlines by helping people with quadriplegia. Last week, Chinese researchers unveiled the first two-way BCI, enabling communication in both directions in real time. We're inching toward a future where tech doesn't just read our thoughts; it responds to them.
Practical BCIs as mind-enhancement tools are still far off,
but it's time to debate their ethics.
The Rise of Thought-Driven Tech
BCIs are evolving from lab experiments to real-world products. Neurable's AI-powered headphones already track brainwaves to help users focus, and anyone can buy a pair from the company's website.
That’s today.
Tomorrow’s wearables could adjust interfaces based on our emotions or intentions.
As researcher Xu Minpeng explains:
Mutual learning allows machine systems to adjust according to user intentions, enhancing the system's flexibility and autonomy.
He predicts that, in the future, the system could evolve into wearable BCI devices serving as assistive technology in daily life.
His vision is a world where we interact with technology as effortlessly as we think. This shift would move technology from hands-free to thought-driven.
I admit, this sounds incredible. Imagine systems that know what you want before you do, that plug limitless knowledge directly into your brain. But as a PM used to mapping out product risks, I find the many unanswered questions unsettling.
From Medical Innovation to Commercial Use
BCIs were originally for medical needs. Now, the industry is shifting from research to a thriving startup ecosystem, attracting funding and delivering real products.
We humans are really good at sprinting from "Can we do this?" to "When can we launch?" without stopping to ask: "Should we?"
Don’t get me wrong, I’m not saying that we shouldn’t. Some of the brightest minds are driving these startups, and their work has real potential to improve human life.
I trust that they’re thinking about ethics and want to do good.
I just wish they were more transparent about their ethics roadmap. Openly discussing challenges builds trust.
Non-Invasive ≠ Non-Intrusive
BCIs no longer require invasive surgeries or implants.
But 'non-invasive' only refers to how the BCI physically integrates with the brain. Tracking your thoughts is still invasive, just on a different level.
If future BCIs can track our emotions and thoughts, and our thoughts are no longer fully our own, how easily could they be manipulated?
As Yuval Noah Harari says:
Once you really solve a problem like direct brain-computer interface... that's the end of history, that's the end of biology as we know it. Nobody has a clue what will happen once you solve this.
Even beginning to imagine the consequences is difficult.
And to be clear, I don't have answers, just questions. Here are my top 20:
The 20 Questions
Will we be able to tell our own thoughts from those influenced by technology?
Could BCIs use AI to subtly shape our behaviours?
Could companies use real-time emotions to drive purchases?
What's the psychological cost of having our thoughts tracked?
Would we feel free to explore controversial ideas?
Will our thoughts become hackable?
How will relationships change if thoughts are monitored or shared?
How will individual identity evolve when our thoughts are shared or influenced?
Will free will still matter?
If neural data reveals early signs of conditions like dementia or depression, could people face discrimination from insurance companies before symptoms even appear?
What happens when a BCI misinterprets our thoughts?
Will BCIs eventually replace traditional education?
How will creative industries transform if BCIs enable instant skill acquisition? What would that mean for talent?
Will BCIs have the ability to regulate emotions?
If so, could we lose the ability to process pain, grief, and other fundamental human experiences?
Could governments make BCIs mandatory for employment, education, or citizenship?
Will enhanced cognition be accessible to everyone, or only to the wealthy, dividing humanity into 'natural' and 'augmented' humans?
Who owns the data generated by our thoughts?
Could addiction to BCI-enhanced experiences lead people to withdraw from physical reality?
And most importantly:
Where will we draw the line? And whose responsibility is it to do so?
Join the Conversation
I believe in this tech's potential, especially for medical applications. But I also believe in public dialogue.
We need more open conversations. What are your questions? Drop them in the comments. Let’s build this ethics roadmap together.
This newsletter is free; if you find it valuable, consider buying me a coffee! ☕
Questions From Readers
Will there be ways to funnel our thoughts through encryption so advertisers can't use them? Like instead of a VPN, we'd have a VPNN (Virtual Private Neural Network)?
— Added by BioRat, from The Ethical Stalker
If BCIs eventually allow direct brain-to-brain communication, how might that redefine intimacy, relationships, and personal boundaries?
— Added by Karen Langston from Hey Lady, You Are Fascinating
Sources
Three companies to rival Neuralink in the BCI clinical trial landscape, Clinical Trials Arena
Nexus: A Brief History of Information Networks from the Stone Age to AI, Yuval N. Harari
Recommended Reads by Fellow Substackers
If you’re tired of product blogs that repeat the same advice, subscribe at karozieminski.substack.com for weekly insights on emerging product management and strategy, ethical AI implementation, and design & UX.
All illustrations are hand-drawn by the author in Procreate.
Curious about the creator? Visit the About page.
From the Comments
I love the concept and pretty much consider myself a borderline transhumanist, so these are definitely questions we have to think about, especially from a privacy perspective. Who DOES own all of our thoughts?
Will there be ways to funnel our thoughts through encryption so advertisers can't use them? Like instead of a VPN, we'd have a VPNN (Virtual Private Neural Network)?
I feel something like a VPNN would be very useful in those times, because it would keep our thoughts from being read and stop advertisers from continuously harvesting them. "Encrypted thoughts" is a wild concept to think about 😅.
Also, will Big Tech pretty much replace Big Pharma? I feel like anti-virus software is going to be an even bigger need than vaccines in those times. So, does anti-virus become a part of healthcare? Will updates be free?
The best thing is, if it does become part of everyday healthcare and gets too expensive, I feel people will turn to open-source ways to keep from being hacked.
Sorry for the long response, lol. I love stuff like this :D. So many things to think about.
Amazing, BioRat! That’s exactly what I hoped this post would achieve. I wouldn’t have come up with the VPNN concept on my own :) The open-sourcing angle is another fascinating path to imagine. If it does evolve that way, what could it lead to? As you wrote, there're sooooo many things to consider :)