By Empa

Accept All or Only Necessary Cookies in your Brain?

Updated: Dec 7, 2022

Neuralink Corporation was founded in July 2016 by Elon Musk, Max Hodak and Paul Merolla. The company has since attracted plenty of media attention. Paradoxically, as of 2022, much of the company’s activity remains in the dark. According to Neuralink themselves, they are developing electrodes called probes that could be implanted into the brain.

The aim is to be “symbiotic with AI”.

Musk claims that we are already using human-machine interfaces through our smartphones. He also insists that their effectiveness could be vastly improved by direct brain-machine interfaces, since we currently delay the output from our brains by typing with our fingers to produce output on computers or smartphones. It all makes sense, but these are Elon Musk’s own words, and since he has over $100 million personally invested in the corporation, he certainly has reason not to disclose any ethical hiccups or risks.


One doesn’t have to study the emergence of technologies for long before Elon Musk starts coming off as a stereotype. As the founder and CEO of multiple technology companies, including Neuralink, SpaceX, Tesla and OpenAI, he is a master of building hype, fostering technological optimism and growing public interest in his businesses. This attraction of attention is most likely a conscious part of Musk’s business model, as there are links between public hype and investment. Nonetheless, with investment come expectations. Van Lente analysed this in his promise-requirement cycle: a promise about a new technology, much like the promises of what Neuralink can achieve once developed, attracts attention and investment, which in turn sets external expectations. Yet Van Lente also explains that emerging technologies rarely accomplish all they set out to do.

And according to the theory of solutionism, technological fixes more often than not create issues of their own and then require new technologies to continue operating and developing.

For instance, if Neuralink comes into play in a few years’ time, it will dramatically increase the demand for computational devices compatible with the probes. The development of one technology is thus connected to the need for new ones. Entrepreneurs will then make new promises for these new technologies, and the promise-requirement cycle loops.


Currently, Neuralink is held back by ethical animal-testing regulations and the need for FDA approval for human testing. Is this a reasonable framework for a corporation working in an ethical grey area? It may be argued that Neuralink is a company like any other, and that innovation should not be halted by excessive regulation. However, if Musk achieves what he claims his new technologies are aiming for, humanity’s way of life may be altered to an extent greater than the internet revolution.

Sheila Jasanoff, pictured by Martha Stewart

As researcher Sheila Jasanoff writes in The Ethics of Invention: “(…) technology, once it escapes from the closed worlds of labs and field tests, becomes to some extent everyone’s property.” Jasanoff makes an excellent point here, highlighting how new technologies have a way of nestling into unpredictable areas of our lives, which draws frightening yet intriguing parallels to Ray Kurzweil’s ideas in The Singularity Is Near on how human-machine interfaces may lead to mental human immortality.

In contrast, it is possible that Neuralink will become a victim of the novelty trap. The trap works like this: inventors claim that their products are revolutionary, which builds hype and public interest in their innovations, but once regulatory agencies step in with concerns about these allegedly groundbreaking technologies, the innovators instead claim that the products are not so different after all. Neuralink is less likely than other technologies to fall into this trap, as the development of the probes is a new research area. The corporation is instead more likely to encounter problems from increased regulation in response to the power its innovation may hold.


Ulrich Beck first described the risk society in 1992. He built his theories on the idea that mankind has created a world where we are subject to our own creations, and that they have become our biggest threat. Since the beginning of the 20th century, risk in society has shifted from external threats posed to humanity by nature to internal risks within humanity, or as Beck states: “Risk society begins where nature ends”. Beck’s thesis is valid and raises important questions for the 21st century, although it is important to be aware of its simplifications. In the developed world we may see technological risks as the main ones, but in other parts of the world natural hazards, diseases and parasites remain prominent risks to account for. Neuralink, however, still poses as much of a risk to society as it offers an opportunity. In the words of a saying often attributed to Voltaire, “with great power comes great responsibility”: can citizens trust entrepreneurs like Elon Musk to honour this responsibility in a capitalist world, hungry for technological development and for the endless financial gains of big data corporations?

Given these factors, Neuralink should perhaps be subject to stricter regulatory demands than other technical devices. For instance, what happens to cookie-tracking algorithms once your Neuralink probes are connected to your smartphone and internet searches? How much information will be accessible to big data companies such as Google and Apple, not to mention Neuralink itself?

Will Neuralink become the new big data company, made up not of our internet searches and social media likes, but of our thoughts and ideas?

This also raises the issue of mind control, which at first glance may sound like science fiction, but what if the Neuralink probes are susceptible to hacking? This could have catastrophic effects, for example if a person carrying the American nuclear codes had their probes hacked. Manipulation of the Neuralink probes could also have detrimental effects on privacy. A company could, in theory, gain control of its target group’s probes and implant the idea that consumers wanted to buy its products.


There are many concerns about how the input to and output from the Neuralink probes would be controlled in cyberspace. And until there is a policy framework, is it reasonable to let Neuralink keep its activity in the dark? Neuralink may unlock a reality where the internet knows more about us than we do ourselves. What would that mean for social interaction? What information would employers or university admissions offices demand? Neuralink may open doors Elon Musk never intended to open, similarly to how Mark Zuckerberg probably never could have imagined that the social media platform he developed in his college dorm would turn into one of the world’s biggest marketing firms. However, this principle of unintended consequences of technologies is a myth: the unintended consequences arise not because the technology takes control and humans lose it, but because of how humans interact with the technologies. Still, even if the unintended consequences of Neuralink stem from human control, can society in its current form handle these changes?

The truth may very well be that it will be difficult to stop the development of Neuralink if it ever reaches the market, not because of the myth of technological inevitability, but because of the Collingridge dilemma: we will not have enough information about how the Neuralink probes affect mankind, and how hard they will be to control, until they are integrated into the digital world of the 21st century. Further, if Musk manages to avoid the novelty trap, Neuralink will most likely dramatically increase efficiency in our fast-paced world. However, I hope that strict regulations on data collection and privacy will follow in Neuralink’s footsteps, because I cannot even imagine the commercial possibilities that may come from Neuralink data, but I do know that the GDPR will definitely no longer be enough.

