Imagine you’re catching up on messages from friends in Facebook’s Messenger app or on WhatsApp when you receive an unprompted message from an AI chatbot obsessed with movies.
“I hope you’re having a harmonious day!” it writes. “I wanted to check in and see if you’ve discovered any new favorite soundtracks or composers recently. Or would you like some recommendations for your next movie night? Let me know, and I’ll be happy to help!”
This is a real example of a proactive message that an AI persona called “The Maestro of Movie Magic” could send on Messenger, WhatsApp, or Instagram, according to guidelines from data-labeling company Alignerr seen by Business Insider.
The outlet learned through documents that Meta is working with Alignerr to train customizable chatbots to reach out to users unprompted and follow up on previous conversations. This means the bots, which users can create in Meta’s AI Studio, will also remember users.
Meta confirmed to TechCrunch that it is testing follow-up messages from its AIs.
The AI chatbots will only send a follow-up message within 14 days of a user initiating a conversation, and only if the user has sent at least five messages to the bot within that timeframe. Meta says the chatbots will not keep messaging if the user doesn’t respond to the first follow-up. Users can keep their bots private or share them through stories, direct links, and even display them on a Facebook or Instagram profile.
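Those stated conditions boil down to a simple eligibility check. Below is a minimal sketch in Python of how the rules described to TechCrunch (the 14-day window, the five-message minimum, and no repeat follow-up without a reply) could be expressed; the function and parameter names are hypothetical, and Meta has not described how its system is actually implemented.

```python
# Illustrative sketch only: names and structure are hypothetical, based on
# the publicly described rules, not on any known Meta implementation.
from datetime import datetime, timedelta

FOLLOW_UP_WINDOW = timedelta(days=14)
MIN_USER_MESSAGES = 5

def may_send_follow_up(conversation_started: datetime,
                       user_message_count: int,
                       follow_up_already_sent: bool,
                       user_replied_to_follow_up: bool,
                       now: datetime) -> bool:
    """Return True if a bot could send a follow-up under the stated rules."""
    # Only within 14 days of the user starting the conversation.
    if now - conversation_started > FOLLOW_UP_WINDOW:
        return False
    # Only if the user has sent at least five messages in that window.
    if user_message_count < MIN_USER_MESSAGES:
        return False
    # No further follow-ups if the first one went unanswered.
    if follow_up_already_sent and not user_replied_to_follow_up:
        return False
    return True
```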
“This allows you to continue exploring topics of interest and engage in more meaningful conversations with the AIs across our apps,” a Meta spokesperson said.
The technology is similar to what AI startups such as Character.AI and Replika offer. Both companies let their chatbots initiate conversations and ask questions so they can act as AI companions. Character.AI’s new chief executive, Karandeep Anand, joined the startup last month after serving as Meta’s vice president of business products.
But with engagement comes risk. Character.AI is currently fighting a lawsuit following allegations that one of the company’s bots played a role in the death of a 14-year-old boy.
When asked how Meta plans to handle safety to avoid situations like Character.AI’s, the company pointed TechCrunch to a series of disclaimers. One warns that an AI’s response “may be inaccurate or inappropriate and should not be used to make important decisions.” Another says the AIs are not licensed professionals or experts trained to help people.
“Conversations with custom AIs can’t replace professional advice,” one disclaimer reads. “You shouldn’t rely on AI conversations for medical, psychological, financial, legal, or any other kind of professional advice.”
TechCrunch also asked Meta whether it imposes any age limits on the chatbots. A brief internet search turns up no age restrictions imposed by the company on the use of Meta AI, although laws in Tennessee and Puerto Rico limit some forms of engagement by teenagers.
On the surface, the mission aligns with Mark Zuckerberg’s quest to fight the “loneliness epidemic.” But most of Meta’s business is built on advertising revenue, and the company has earned a reputation for using algorithms to keep people scrolling, commenting, and liking, which translates into more eyes on ads.
In court documents unsealed in April, Meta predicted that its generative AI products would bring in $2 billion to $3 billion in revenue. The company said its AI assistant could eventually display ads and offer a subscription option.
Meta declined to comment on TechCrunch’s questions about how it plans to monetize its AI chatbots, whether it plans to include ads or sponsored replies, and whether the company’s long-term strategy with AI companions involves integration with Horizon, its social virtual reality platform.
Do you have a sensitive tip or confidential documents? We’re covering the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.
