Imagine discovering your face on adult websites, or attached to a scam, without your knowledge. In that scenario, your first instinct would probably be to call the police. But what if someone approached you directly and offered to pay for your likeness?
Scott, a little-known actor, recently discovered his image appearing all over TikTok: promoting insurance, endorsing apps, even speaking fluent Spanish, a language he doesn't know. The man on screen looked like him, but it wasn't him.
Obviously, these ads were not the product of Scott sleepwalking onto a film set. The story goes back a year.
A year earlier, TikTok had launched an advertising tool called Symphony Digital Avatars, which uses AI-generated digital humans to produce ads. To make things easy for advertisers, TikTok also supplies a library of pre-licensed stock avatars for clients to pick from.
Scott, unfortunately, was one of those options. According to The New York Times, TikTok acquired a one-year license to Scott's likeness for $750 plus a free trip. The specifics of the contract remain unclear, but at the time Scott likely saw it as a good deal: his image earning money on its own, a complimentary trip, and perhaps some exposure from TikTok's platform traffic. It looked like a win-win.
That impression did not last. Scott was dismayed to see his likeness fronting a string of bizarre advertisements. One might call it lucky that none of the clients were selling hemorrhoid cream or male enhancement pills, which would have been more embarrassing still.
And Scott's case is not even the most extreme. Two other actors who licensed their image rights to TikTok received only a one-time payment of $500 to $1,000 for a full year of use. That is low by the standards of the American acting industry, where a single commercial typically pays $300 to $1,000, and sometimes as much as $2,500. In RMB terms, Scott's $750 works out to an annual income of just over 5,000 yuan, vanishingly small next to the widely reported daily pay of 2.08 million RMB for some figures in China's entertainment industry.
Indeed, plenty of companies besides TikTok are legally in the business of "buying faces." Synthesia, a UK-based AI digital human company, builds its stock avatars on real people who have granted explicit authorization. Whether out of curiosity or for the money, such deals have led many people like Scott to willingly license their likenesses to AI companies, for generally modest sums ranging from $1,000 to £1,500.
Some might argue the pay is fair enough: you merely lend your image and never have to show up on set. But the issue goes beyond money, and many who have "sold their faces" come to regret it deeply.
The implications of these transactions reach well past the license itself. A respectable actor can find their digital double hawking unproven weight-loss remedies or dispensing dubious health advice, recast as a quack "barefoot doctor." In a more alarming case, a British model who signed a three-year contract worth $5,240 discovered her digital avatar praising a president who had seized power in a coup. That is the core of the concern: a digital avatar can be made to appear anywhere, say anything, and do anything, all without the original person's knowledge or consent. It is a genuinely frightening prospect.
One might ask AI companies to restrict and review the content they generate, but not every risk can be foreseen by the platform. Explicitly violent or extreme material is easy to flag; subtler problems, such as pseudoscientific health advice, divisive rhetoric dressed up as reasoned argument, or misleading product endorsements, are far harder to catch and verify. Ethical boundaries also vary from person to person: content that breaks no law and no platform rule can still be unacceptable to the individual. Imagine a vegan's digital avatar promoting KFC's fried chicken; the backlash would be considerable.
Moreover, AI companies answer to their paying clients, not to the people whose likenesses they use. To be clear, none of this denies the commercial value of AI digital humans; the point is that this growing value carries personal risks. One example offers a useful contrast: have you heard of the "Cyber Nymph Naine"?
Two years ago, a University of California student trained a LoRA model named Naine on more than three hundred close-up selfies, then open-sourced it on Civitai, free for anyone to use provided it was not for commercial gain. Predictably, many users generated explicit content featuring Naine. But in a sense she had anticipated exactly that: the adult content fell within risks she had knowingly accepted, leaving her a measure of informed control over her digital persona.
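To make concrete how little friction remains once such a model is public, here is a minimal sketch using Hugging Face's diffusers library. The repository name "someuser/naine-lora", the base model, and the trigger word "naine" are illustrative assumptions, not the actual published artifacts.

```python
# Minimal sketch (all identifiers hypothetical): anyone with an openly
# published LoRA can attach it to a base text-to-image model and
# generate the learned likeness at will. Requires `diffusers` and `torch`.
import torch
from diffusers import StableDiffusionPipeline

# Base model the LoRA would have been trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# "someuser/naine-lora" stands in for the openly shared weights;
# the real model lived on Civitai.
pipe.load_lora_weights("someuser/naine-lora")

# Any prompt containing the trigger word now renders the likeness,
# with no further involvement from the original person.
image = pipe("naine, portrait photo, studio lighting").images[0]
image.save("generated_likeness.png")
```

Nothing in those few lines enforces Civitai's non-commercial clause; once the weights are public, the license text is the only safeguard left.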
Scott and others in similar situations, however, had no such foresight. The development of AI digital humans should go forward, but only with risks and responsibilities clearly spelled out: not just content review and revenue sharing, but contracts that explicitly define scope of use, duration, and other key terms. A "take the money and run" arrangement, where one side profits while the other absorbs the unforeseen consequences, is unsustainable and ethically dubious. In the worst case, your image is used for training without your knowledge, and you don't even know whom to pursue for recourse.
