Meta (formerly Facebook) Chief Technology Officer (CTO) Andrew Bosworth has acknowledged that moderating how users speak and behave in the metaverse "at any meaningful scale is practically impossible," even though he wants the metaverse to have "almost Disney levels of safety," the Financial Times reported.
In an internal memo from March seen by the Financial Times, Bosworth warned that virtual reality can often be a "toxic environment," particularly for women and minorities, and that harassment in virtual reality is an "existential threat" to the company.
He reportedly advised in the memo that Facebook should apply its current community rules but with "a stronger bias towards enforcement along with some sort of spectrum of warning, successively longer suspensions, and ultimately expulsion from multi-user spaces".
“The theory here has to be that we can move the culture so that in the long term we aren’t actually having to take those enforcement actions too often,” he added.
Meanwhile, Meta is developing a virtual reality social game called Horizon Worlds, which is designed to constantly record what is happening in the metaverse. The recordings are stored on Meta's Oculus headset, which sends the data to Meta only if users choose to help moderate the platform.
While Bosworth pointed out ways in which the company can try to tackle the issue, experts told The Verge that monitoring billions of interactions in real time will require significant effort and may not even be possible.
Meta has also pledged $50 million for research into practical and ethical issues around its metaverse plans. Facebook’s AI has been criticised numerous times for failing to tackle hate speech on the platform. As FT notes, the company’s recent rebranding offers a potential fresh start, but as the memo notes, VR and virtual worlds will likely face an entirely new set of problems on top of existing issues.