In the coming years, consumers will spend a significant portion of their lives in virtual and augmented worlds. This migration into the metaverse could be a magical transformation, expanding what it means to be human. Or it could be a deeply oppressive turn that gives corporations unprecedented control over humanity. 

I don’t make this warning lightly. 

I’ve been a champion of virtual and augmented reality for over 30 years, starting as a researcher at Stanford, NASA and the United States Air Force and founding a number of VR and AR companies. Having survived multiple hype cycles, I believe we’re finally here — the metaverse will happen and will significantly impact society over the next five years. Unfortunately, the lack of regulatory protections has me deeply concerned. 

That’s because metaverse providers will have unprecedented power to profile and influence their users. While consumers are aware that social media platforms track where they click and who their friends are, metaverse platforms (virtual and augmented) will have much deeper capabilities, monitoring where users go, what they do, who they’re with, what they look at and even how long their gaze lingers. Platforms will also be able to track user posture, gait, facial expressions, vocal inflections and vital signs.

Invasive monitoring is a privacy concern, but the dangers expand greatly when we consider that targeted advertising in the metaverse will transition from flat media to immersive experiences that will soon become indistinguishable from authentic encounters.  

For these reasons, it’s important for policymakers to consider the extreme power that metaverse platforms could wield over society and work towards guaranteeing a set of basic “immersive rights.” Many safeguards are needed, but as a starting point, I propose the following three fundamental protections:

1. The right to experiential authenticity

Promotional content pervades the physical and digital worlds, but most adults can easily identify advertisements. This allows individuals to view the material in the proper context — as paid messaging — and bring healthy skepticism when considering the information. In the metaverse, advertisers could subvert our ability to contextualize messaging by subtly altering the world around us, injecting targeted promotional experiences that are indistinguishable from authentic encounters. 

For example, imagine walking down the street in a virtual or augmented world. You notice a parked car you’ve never seen before. As you pass, you overhear the owner telling a friend how much they love the car, a notion that subtly influences your thinking consciously or subconsciously. What you don’t realize is that the encounter was entirely promotional, placed there so you’d see the car and hear the interaction. It was also targeted — only you saw the exchange, chosen based on your profile and customized for maximum impact, from the color of the car to the gender, voice and clothing of the virtual spokespeople used. 

While this type of covert advertising might seem benign, merely influencing opinions about a new car, the same tools and techniques could be used to drive political propaganda, misinformation and outright lies. To protect consumers, immersive tactics such as Virtual Product Placements and Virtual Spokespeople should be regulated.  

At the very least, regulations should protect the basic right to authentic immersive experiences. This could be achieved by requiring that promotional artifacts and promotional people be overtly distinct, both visually and audibly, enabling users to perceive them in the proper context. This would protect consumers from mistaking promotionally altered experiences for authentic ones.

2. The right to emotional privacy

We humans evolved the ability to express emotions on our faces and in our voices, posture and gestures. It’s a basic form of communication that supplements verbal language. Recently, machine learning has enabled software to identify human emotions in real time from faces, voices and posture, as well as from vital signs such as respiration rate, heart rate and blood pressure. While this enables computers to engage in nonverbal communication with humans, it can easily cross the line into predatory violations of privacy.

That’s because computers can detect emotions from cues that are imperceptible to humans. A human observer cannot easily read heart rate, respiration rate or blood pressure, so those signals can reveal emotions the observed individual never intended to convey. Computers can also detect “micro-expressions” on faces, expressions too brief or subtle for humans to perceive, and can even infer emotions from subtle blood flow patterns in facial skin that people cannot see at all. In each case, the machine extracts feelings its subject did not choose to express.
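To make this concrete, consider how little it takes to recover a vital sign from ordinary video. The following is a minimal, hypothetical sketch of remote photoplethysmography (rPPG), which estimates heart rate from the faint color fluctuations that blood flow produces in facial skin. The filename and the fixed center crop are illustrative assumptions; real systems add face tracking and far more robust signal processing.

```python
# Minimal rPPG sketch: estimate heart rate from facial blood-flow color changes.
# Assumes a prerecorded clip ("face_clip.mp4", a hypothetical file) in which
# the face fills most of the frame.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

cap = cv2.VideoCapture("face_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)

# Average the green channel over a central patch of each frame; hemoglobin
# absorption makes green light the most sensitive to pulse-driven blood flow.
samples = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    patch = frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    samples.append(patch[:, :, 1].mean())  # channel 1 = green in OpenCV's BGR
cap.release()

signal = np.asarray(samples) - np.mean(samples)

# Band-pass filter to the plausible human heart-rate range (42-240 bpm).
b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
filtered = filtfilt(b, a, signal)

# The dominant frequency of the filtered signal approximates the pulse.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
print(f"Estimated heart rate: {freqs[np.argmax(spectrum)] * 60:.0f} bpm")
```

Even this toy pipeline recovers a signal no human observer can perceive, which is precisely the asymmetry a right to emotional privacy would address.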


At a minimum, consumers should have the right not to be emotionally assessed at levels that exceed natural human abilities. This means barring the use of vital signs and micro-expressions. In addition, regulators should consider a ban on emotional analysis for promotional purposes. Personally, I don’t want to be targeted by an AI-driven conversational agent that adjusts its promotional tactics based on emotions inferred from my blood pressure and respiration rate, both of which can now be tracked by consumer-level technologies.

3. The right to behavioral privacy

In both virtual and augmented worlds, tracking location, posture, gait and line-of-sight is necessary to simulate immersive experiences. While this is extensive information, it is only needed in real time; there is no reason to store it for extended periods. This matters because stored behavioral data can be used to build detailed profiles that document users’ daily actions at extreme granularity.
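The line between real-time use and long-term retention is easy to express in code. The sketch below is a hypothetical illustration, not any platform’s actual pipeline: the same telemetry loop drives rendering either way, and a single added line is all that separates ephemeral processing from a permanent behavioral log.

```python
# Hypothetical sketch: one telemetry loop, with and without retention.
from dataclasses import dataclass

@dataclass
class FrameTelemetry:
    timestamp: float    # seconds since session start
    head_pose: tuple    # (x, y, z, yaw, pitch, roll)
    gaze_target: str    # ID of the object the user is looking at

def render_frame(t: FrameTelemetry) -> None:
    # Real-time use: the sample is consumed to draw this frame, then dropped.
    print(f"rendering view for pose {t.head_pose}, gaze on {t.gaze_target}")

def run_session_ephemeral(stream) -> None:
    # Privacy-preserving design: each sample lives only as long as its frame.
    for t in stream:
        render_frame(t)  # nothing persists after this call returns

def run_session_logged(stream, log: list) -> None:
    # The same loop becomes a behavioral profile with one added line:
    # months of gaze and movement data, queryable long after the session.
    for t in stream:
        render_frame(t)
        log.append(t)  # <-- the retention a storage ban would prohibit

# Example: a few frames of synthetic telemetry.
frames = [
    FrameTelemetry(0.00, (0, 1.7, 0, 0, 0, 0), "storefront"),
    FrameTelemetry(0.01, (0, 1.7, 0, 5, 0, 0), "parked_car"),
    FrameTelemetry(0.02, (0, 1.7, 0, 9, -2, 0), "parked_car"),
]
run_session_ephemeral(frames)
```

Because the user’s experience is identical in both versions, retention is a pure policy choice, which is what makes a storage limit a tractable regulatory target.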

With machine learning, such stored data can be used to predict how individuals will act and react in a wide range of circumstances in their daily lives. And because platforms will have the ability to alter environments for persuasive purposes, paying sponsors could use predictive algorithms to preemptively manipulate user behaviors.

For these reasons, policymakers should consider banning the storage of immersive data over time, thereby preventing platforms from generating behavioral profiles. In addition, metaverse platforms should not be allowed to correlate emotional data with behavioral data, as that would allow them to deliver promotionally altered experiences that don’t just influence what users do in immersive worlds but skillfully manipulate how they feel while doing it.

Immersive rights are necessary and urgent

The metaverse is coming. While many of its impacts will be positive, policymakers must protect consumers against the dangers by guaranteeing basic immersive rights. At a minimum, everyone should be able to trust the authenticity of their experiences without worrying that third parties are promotionally altering their surroundings without their knowledge and consent. Absent such regulation, the metaverse may not be a safe or trusted place for anyone.

Whether you’re looking forward to the metaverse or not, it could be the most significant change in how society interacts with information since the invention of the internet. We cannot wait until the industry matures to put guardrails in place. Waiting too long could make it impossible to undo the problems, for they’ll be built into the core business practices of major platforms. 

For those interested in a safe metaverse, I point you towards an international community effort in December 2022 called Metaverse Safety Week. I sincerely hope this becomes an annual tradition and that people around the world focus on making our immersive future safe and magical. 

Louis Rosenberg, PhD, is an early pioneer in the fields of virtual and augmented reality. His work began over 30 years ago in labs at Stanford and NASA. In 1992 he developed the first immersive augmented reality system at the Air Force Research Laboratory. In 1993 he founded the early VR company Immersion Corporation (public on Nasdaq). In 2004 he founded the early AR company Outland Research. He earned his PhD from Stanford University, has been awarded over 300 patents for VR, AR and AI technologies, and was a professor at California State University.
