Shidonna Raven

Your Brain Has Been Hacked

About the future computer chip in your brain and what Elon Musk’s Neuralink has to do with it


Photo by Uriel SC on Unsplash | Shidonna Raven Garden and Cook

Would you like to have a chip inside your brain? One that could increase your capacity to think, feel, and handle situations? If so, you don’t have to wait too much longer: Scientists have made significant breakthroughs in developing brain-computer interfaces. Would you sign up for a brain chip?

This August, Elon Musk presented a new iteration of the Neuralink brain implant. The goal is to give human brains a direct interface to digital devices, helping, for instance, paralyzed people control phones or computers. The chip would pick up on signals in the brain and translate them into motor controls. Neuralink’s technology is quite stunning: this tiny brain implant has more than 1,000 electrodes and may one day allow a person to transmit neuroelectrical activity to anything digital.
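To make the “translate signals into motor controls” step a bit more concrete, here is a minimal sketch in Python of the kind of decoding a brain-computer interface performs: fitting a simple linear decoder that maps activity recorded on many electrodes to a two-dimensional cursor command. Everything in it, the channel count, the simulated spike counts, and the least-squares decoder, is an illustrative assumption, not Neuralink’s actual pipeline.

```python
# Toy sketch of brain-computer interface decoding: mapping electrode activity
# to a motor command. All numbers and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_electrodes = 1024   # Neuralink-scale channel count (assumed for illustration)
n_samples = 2000      # time bins of recorded activity used for calibration

# Simulated binned spike counts per electrode (a stand-in for real recordings).
spike_counts = rng.poisson(lam=3.0, size=(n_samples, n_electrodes))

# Simulated 2D cursor velocities the user intended during calibration trials.
true_weights = rng.normal(scale=0.05, size=(n_electrodes, 2))
intended_velocity = spike_counts @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# "Training": fit a linear decoder by least squares, as classic BCI work does.
weights, *_ = np.linalg.lstsq(spike_counts, intended_velocity, rcond=None)

# "Use": translate a new burst of neural activity into a motor command.
new_activity = rng.poisson(lam=3.0, size=(1, n_electrodes))
cursor_command = new_activity @ weights
print("decoded cursor velocity (x, y):", cursor_command.round(3))
```

In a real system the decoding model, the signal processing in front of it, and the safety constraints around it would all be far more sophisticated; the point of the sketch is only that the implant’s job is to turn many channels of neural activity into a small, usable control signal.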

The underlying technology is not exactly new, but it surpasses the chip neuroscientists use as a standard today, which has 64 electrodes. While many in the field imagine using these neural interfaces to control a prosthetic limb or treat paralysis, Musk describes the overall project as aiming to “achieve symbiosis with artificial intelligence.”

What, precisely, is the premise here? The mindset behind Musk’s interest and Neuralink’s mission goes well beyond finding a cure for paralysis and is best described by Arielle Pardes in Wired: “Machines with artificial intelligence are outpacing humankind. Ergo, implant computer chips in human brains to level up the species.”

The Frightening and Exciting Possibilities of Brain Chips

The goal is to merge brains and computers deeply through a brain chip that sends and receives information. One of the challenges is keeping the chip fully functional inside the brain over a long period of time.

The mammalian brain is an unfriendly environment for anything that is not a brain. Imagine it as a massive knot of wires bathed in a fluid that corrodes most metals over time. Like all organs and human tissue, the brain fights off intruders and has mechanisms to protect its cells. The protectors of neurons in the brain are called glia: in addition to supporting and protecting neurons, these non-neuronal cells maintain homeostasis and form myelin. For the Neuralink device, this means that over time gliosis kills the electrodes’ ability to record. Neuralink scientists will therefore have to find materials that won’t set off the glial cells to go nuclear on the electrodes and won’t break down over time. If they cannot find such materials, chances are a patient with a Neuralink device will have to have it removed sooner rather than later.

Another challenge is how easily the implant can be inserted and removed. One of the main selling points of Neuralink is that it is an easy-to-implant, non-damaging, long-lived cybernetic implant. But neuroscientists aren’t exactly sure how such a device could be inserted without damaging blood vessels, or how it could remain in the brain over an extended period without causing harm.

However, for the sake of argument, let’s say scientists and engineers overcome all of these obstacles and can implant a perfect brain chip inside your brain. Think of the frightening and the exciting possibilities, the two sides to this endeavor, as described by Jeff Stibel:

“The ability to communicate with others via thought, for example, is exciting, but giving others the ability to read your mind is frightening. Controlling a light switch or driving a car with one’s mind is exciting; the potential of others controlling your mind is frightening. It might be cool to have a perfect memory, but it would be terrifying if your memory could be hacked.”

How we remember information is bound up with how our memory encodes it. Take the self-reference effect: we tend to encode information differently depending on whether it implicates us in some way. I will always remember the name of your aunt Sarah, obviously, because I am a Sarah myself. What if my memory were altered? How would I process the information given to me then?

By “process” I mean the conclusions I would draw based on an altered memory. While it is relatively easy to detect and measure signals from neurons, extracting meaning from those measurements is an entirely different matter. How do these measurements, and the data derived from them, correlate with human dreams, hopes, memories, and thoughts? Or, as Adam Rogers put it: “The electrical activity of the brain happens while you are thinking or remembering, but it may not be what you are thinking or remembering. Just being able to sense and record that activity isn’t recording actual thought. It correlates, but may not cause.”

Furthermore, there is no consistent theory of consciousness, which brings me to the heart of artificial intelligence research: we’re not sure what to aim for. We can’t fully explain what intelligence is and therefore lack a comprehensive model for algorithms to follow.

AI Doomsday and the Chip in My Head

Scientists determine whether something is possible (remember, the technology that lets people control prosthetic limbs with their minds already exists). It is an engineer’s job to figure out how those ideas become reality. But, as so often happens, the question of whether we should create this technology takes a back seat, which, I argue, is wrong.

As history unfolds, we as a world community do not determine the direction in which technology should develop: A few powerful corporations do. This means not only deciding which technology is worth developing but also what the goal of the development will be. But it cannot be in everyone’s interest that only a few decide what this goal should look like.

Let’s think about this future chip in our heads: the real driving force behind this project lies in one man’s fear of AI becoming conscious and setting out to obliterate us. It may well be that his fear is well-founded. But what if he is wrong?

My fear is a different one: I fear corrupt individuals will utilize AI to do evil. Maybe my fear has already come to fruition.

Do you think new technologies like the ones she and Elon Musk describe can be used for evil? What should be done? Have you spoken with your elected officials?

If these articles have been helpful to you and yours, give a donation to Shidonna Raven Garden and Cook Ezine today. All Rights Reserved – Shidonna Raven (c) 2025 – Garden & Cook.
