
Character.AI unveils AvatarFX, an AI video model to create lifelike chatbots

By Amanda Silberling

Character.AI, a leading platform for chatting and roleplaying with AI-generated characters, unveiled its forthcoming video generation model, AvatarFX, on Tuesday. Available in closed beta, the model animates the platform’s characters in a variety of styles and voices, from human-like characters to 2D animal cartoons.

AvatarFX distinguishes itself from competitors like OpenAI’s Sora because it isn’t solely a text-to-video generator. It can also generate videos from preexisting images, allowing users to animate photos of real people.


It’s immediately evident how this kind of tech could be leveraged for abuse — users could upload photos of celebrities or people they know in real life and create realistic-looking videos in which they do or say something incriminating. The technology to create convincing deepfakes already exists, but incorporating it into popular consumer products like Character.AI only exacerbates the potential for it to be used irresponsibly.

Character.AI told TechCrunch that it will apply watermarks to videos generated with AvatarFX to make it clearer that the footage isn’t real. The company added that its AI will block the generation of videos of minors, and that images of real people get filtered through the AI to change the subject into a less recognizable person. The AI is also trained to recognize images of high-profile celebrities and politicians to limit the potential for abuse.

Since AvatarFX is not widely available yet, there is no way to verify how well these safeguards work.

Character.AI is already facing issues with safety on its platform. Parents have filed lawsuits against the company, alleging that its chatbots encouraged their children to self-harm, to kill themselves, or to kill their parents.

In one case, a 14-year-old boy died by suicide after he reportedly developed an obsessive relationship with an AI bot on Character.AI based on a “Game of Thrones” character. Shortly before his death, he’d opened up to the AI about having thoughts of suicide, and the AI encouraged him to follow through on the act, according to court filings.

These are extreme examples, but they show how people can be emotionally manipulated by AI chatbots through text messages alone. With the addition of video, the relationships people form with these characters could feel even more realistic.

Character.AI has responded to the allegations against it by building parental controls and additional safeguards, but as with any app, controls are only effective when they’re actually used. Oftentimes, kids use tech in ways that their parents don’t know about.

Updated, 4/23/25, 9:45 AM ET with comment from Character.AI
