I don’t know about you, but as someone obsessed with tech who also loves all things spooky, one of my favorite shows to watch come Halloween time is Black Mirror. Besides being undeniably brilliant, the show presents worst-case scenarios of pervasive technology that are genuinely frightening. It may be fiction, but it roots itself in real-life fears about the ways technology could overtake our lives and, possibly, the world.
Some technological developments of late seem like something only the writers of Black Mirror could dream up. Let’s delve into five examples of the digital trends and innovations most likely to send shivers down your spine:
1. Audio spotlight systems
Picture this: You’re walking down the sidewalk, passing a bevy of storefronts, when you hear a voice that sounds like it’s right behind you beckoning you to come inside and check out the sale at the home appliance store. You whip around, wondering who delivered this information to you, but you see nothing. Have you gone crazy?
Probably not. Holosonics, a Massachusetts-based audio technology company, has introduced something they call an “audio spotlight” using their trademarked PrivateSound technology, which they claim provides “all of the sound and none of the noise.” With just a thin speaker panel, retailers, museums, exhibits and other vendors can target a message at whomever they want to hear it, much the same way you would shine a flashlight.
How is this possible? According to Holosonics, “Sound generally propagates omnidirectionally. Only by creating a sound source much larger than the wavelengths it produces can a narrow beam be created.” Essentially, the panel emits ultrasound, whose wavelengths are tiny compared to the panel itself, so the beam stays tightly focused instead of spreading in all directions. The ultrasound is inaudible on its own; as the beam travels, the air slightly distorts it, and that distortion converts it into audible sound for whoever is standing in its path. So next time you’re out shopping and think you’re hearing voices encouraging you to make a purchase, don’t freak out. You might have just fallen prey to an invisible audio spotlight.
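To see why the beam has to be ultrasonic, compare wavelengths. A quick back-of-the-envelope sketch (the frequencies and panel size below are illustrative, not Holosonics specifications):

```python
# Rough illustration of why an "audio spotlight" uses ultrasound:
# a beam stays narrow only when the emitter is much larger than the
# wavelength it produces. Values are textbook physics, not product specs.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def wavelength(frequency_hz):
    """Wavelength of a sound wave in air, in meters."""
    return SPEED_OF_SOUND / frequency_hz

audible = wavelength(1_000)      # a 1 kHz audible tone: ~0.34 m
ultrasonic = wavelength(60_000)  # a 60 kHz ultrasonic carrier: ~0.006 m

panel = 0.4  # a hypothetical 0.4 m speaker panel
print(f"1 kHz wavelength:  {audible:.3f} m ({audible / panel:.2f}x the panel)")
print(f"60 kHz wavelength: {ultrasonic:.4f} m ({ultrasonic / panel:.3f}x the panel)")
```

A 1 kHz wave is nearly as large as the panel itself, so it sprays everywhere; the ultrasonic carrier is roughly 70 times smaller than the panel, which is what makes the flashlight-like beam possible.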
2. Social media bots
In the past few years, we’ve witnessed the devastating effects bots can have on public thought and opinion. All too often, people carry the influence of these bots with them to the polls.
These sinister bots have become master manipulators, and it’s easier than you might think to be tricked. Anyone who spends time online, particularly on social media platforms, has undoubtedly seen examples of the propaganda spewed by bots. The bots attempt to use conspiracy theories, fabricated images and other falsehoods to evoke emotion and, in particular, influence political leanings. Their strategy of starting hashtags, sharing links and spamming content with comments is incredibly effective and persuasive and, as a result, incredibly dangerous.
If we’re not careful, we could lose the autonomy of our own thoughts to bots. By reducing the amount of time we spend online and fact-checking every single news article, video, photograph and piece of information we come across, we can begin to unravel the hold that these bots have gained over our minds and feelings.
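The behaviors described above (relentless posting, link spam, hashtag pushing) are also the signals that make bots detectable. Here is a toy scoring heuristic in that spirit; the thresholds and weights are made up for illustration and don’t come from any real platform:

```python
# Toy heuristic for flagging bot-like accounts, based on the behaviors
# described above. All thresholds and weights are illustrative guesses.

def bot_score(posts_per_day, link_ratio, hashtag_ratio):
    """Return a 0-1 score; higher means more bot-like.

    posts_per_day  -- average posting frequency
    link_ratio     -- fraction of posts containing links (0-1)
    hashtag_ratio  -- fraction of posts pushing trending hashtags (0-1)
    """
    score = 0.0
    if posts_per_day > 50:        # humans rarely sustain this pace
        score += 0.4
    score += 0.3 * link_ratio     # constant link sharing
    score += 0.3 * hashtag_ratio  # hashtag spam
    return min(score, 1.0)

# A chatty human vs. a spam account:
print(bot_score(posts_per_day=8, link_ratio=0.2, hashtag_ratio=0.1))    # low
print(bot_score(posts_per_day=400, link_ratio=0.9, hashtag_ratio=0.8))  # high
```

Real bot-detection systems use far richer signals (account age, network structure, posting cadence), but the idea is the same: bots give themselves away through inhuman consistency.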
3. Ransomware attacks
Cyber attacks have not faded since they first began making headlines; rather, they have only grown in scale and sophistication. This means you have to tread even more carefully when traversing the wild world of the internet.
Ransomware is exactly what it sounds like: a type of malware wherein hackers worm their way into a system, lock up the data it contains and hold that data for ransom, refusing to restore access until their demands are met. But who’s to say a hacker is the kind of person to make good on his or her word?
The demands made in these attacks are most often for payment either by cryptocurrency or credit card. Last year, a particularly menacing attack incapacitated hospitals across the United Kingdom as well as 45,000 computers across 74 countries.
How can you avoid becoming the victim of a ransomware attack? The most practical step you can take is to verify the sender of any email before opening its attachments. The perpetrators like to pose as trusted, familiar companies to lure you into their trap so they can lock you out of your device and steal your data. If an email address looks suspicious, do an online search. Conduct the proper research to see if this is just another standard email from your cable company … or something far more malicious.
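The sender check above can even be partly automated. A minimal sketch, assuming a hypothetical list of domains you already trust (the domain names below are invented examples):

```python
# Minimal sketch of the "verify the sender first" advice: pull the domain
# out of an email's From header and compare it against senders you trust.
# TRUSTED_DOMAINS is a made-up example list, not a real safelist.

from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-cable.com", "example-bank.com"}

def sender_domain(from_header):
    """Extract the domain from a header like 'Acme <billing@acme.com>'."""
    _, address = parseaddr(from_header)
    return address.rpartition("@")[2].lower()

def looks_suspicious(from_header):
    """True if the sender's domain is not one we already trust."""
    return sender_domain(from_header) not in TRUSTED_DOMAINS

print(looks_suspicious("Billing <billing@example-cable.com>"))  # False
print(looks_suspicious("Billing <billing@examp1e-cable.com>"))  # True: lookalike domain
```

Note the second address: swapping a letter for a digit is a classic phishing trick, and it sails right past a casual glance, which is exactly why an exact-match check helps.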
4. AI dreams
By examining their servers’ ability to recognize and create images from imagery they’ve previously processed, Google’s researchers have come to the realization that yes, artificial intelligence (AI) is capable of “dreaming.” However, its dreams don’t look quite like the dreams of the lifeforms it’s attempting to imitate.
Google engineers teach their artificial neural networks to recognize images by “feeding” them examples, which the network processes one layer at a time. The initial layer of mock neurons examines an image, passes along the information it absorbed to the next layer, and so on, multiple times over, until the computer has fully processed the image. But to thoroughly test the retention of this information, engineers then reverse the process: they give the AI an object and ask it to create an image of it.
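The two directions can be sketched with a toy one-layer “network” (Google’s actual systems are deep convolutional networks; this just shows the mechanic). Forward, the layer turns an input into neuron activations. In reverse, gradient ascent nudges the input itself so a chosen neuron fires harder, which is the step that produces the dream imagery:

```python
# Toy sketch of recognition vs. "dreaming". One layer of mock neurons,
# with random weights standing in for a trained network.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # an 8-pixel "image" feeds 4 mock neurons

def forward(x):
    """Recognition direction: linear weights followed by a ReLU."""
    return np.maximum(W @ x, 0.0)

def dream(x, neuron, steps=100, lr=0.1):
    """Reverse direction: ascend the gradient of one neuron's
    pre-activation (W[neuron] @ x) with respect to the input itself."""
    x = x.copy()
    for _ in range(steps):
        x += lr * W[neuron]  # d(W[neuron] @ x)/dx is just that weight row
    return x

canvas = rng.standard_normal(8) * 0.01  # a near-blank "canvas"
dreamed = dream(canvas, neuron=2)
print(forward(canvas)[2], "->", forward(dreamed)[2])  # the neuron fires far harder
```

Repeating this on a real deep network, while amplifying whatever patterns each layer already half-sees, is what fills a blank canvas with the surreal hybrids described below.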
Because this AI holds onto the identifying features of an object and not so much its other traits, like color or size, these visualizations can get pretty surreal. Now, Google is studying what AI will come up with, unprompted, when provided with a blank canvas. Imagine lots of mutated hybrid animals and images that look like something straight out of a Salvador Dali painting. Yeah. Pure nightmare fuel.
5. Virtual influencers
Human influencers have become a major boon to companies and brands in the age of social media, but they’re incredibly fallible (see here for just one of many, many examples). So how can businesses achieve the same kind of scope without the risk of becoming associated with racism or crime?
Enter virtual influencers. They act just like regular influencers: They operate Instagram accounts, take pictures with their “friends” and even create slightly cringeworthy music. The one notable difference is that, because they’re not actually human, they’re not susceptible to making mistakes that could ruin their entire career.
Take, for instance, Shudu Gram and Miquela Sousa, a digital supermodel and a virtual influencer, respectively. They have a combined following of more than 1.5 million on Instagram, despite not actually being human. “Lil Miquela” has a song on Spotify and promotes luxury brands like Chanel, and Shudu uses her feed to show off products like makeup from Fenty Beauty, Rihanna’s line of cosmetics. Sure, they look a little uncanny valley, but that’s likely to improve to the point where it will be difficult to distinguish between them and a real, flesh-and-blood influencer.
What are your thoughts? Are the hairs on the back of your neck standing up yet? Do you have any other examples of tech that might be even more unsettling? Sound off in the comments below!