The social platform X will pilot a feature that allows AI chatbots to write Community Notes.
Community Notes is a Twitter-era feature that Elon Musk has expanded since taking over the service, now called X. Users enrolled in the fact-checking program can contribute notes that add context to certain posts, which are then vetted by other users before they appear. A Community Note might appear, for example, on a post containing an AI-generated video whose synthetic origin is unclear, or as an addendum to a misleading post from a politician.
Notes become public only when they reach consensus among groups that have historically disagreed in their past ratings.
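The mechanics behind "consensus among groups that have historically disagreed" can be sketched with a small model. The toy below is a hedged illustration, not X's actual code: it assumes a bridging-style matrix-factorization approach, where each rating is modeled as a note intercept plus a rater-by-note viewpoint interaction. All data, field names, and hyperparameters here are invented for illustration; the key idea is that a note's score comes from its intercept, which stays high only when raters on both sides of the viewpoint factor rate it helpful.

```python
import numpy as np

# Toy ratings: rows = raters, cols = notes. 1 = "helpful", 0 = "not helpful".
# Raters 0-2 and 3-5 belong to two opposing camps.
# Note 0 is rated helpful by everyone; notes 1 and 2 only by one camp each.
R = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 1],
    [1, 0, 1],
], dtype=float)

rng = np.random.default_rng(0)
n_raters, n_notes = R.shape
mu = 0.0                              # global intercept
a = np.zeros(n_raters)                # rater intercepts
b = np.zeros(n_notes)                 # note intercepts: the "helpfulness" score
f = rng.normal(0, 0.1, n_raters)      # rater viewpoint factors
g = rng.normal(0, 0.1, n_notes)       # note viewpoint factors
lam, lr = 0.05, 0.02                  # L2 regularization, learning rate

# Fit rating ~ mu + a_i + b_j + f_i * g_j by plain gradient descent.
for _ in range(5000):
    pred = mu + a[:, None] + b[None, :] + np.outer(f, g)
    err = pred - R
    mu -= lr * 2 * err.sum()
    a -= lr * (2 * err.sum(axis=1) + 2 * lam * a)
    b -= lr * (2 * err.sum(axis=0) + 2 * lam * b)
    f -= lr * (2 * (err * g[None, :]).sum(axis=1) + 2 * lam * f)
    g -= lr * (2 * (err * f[:, None]).sum(axis=0) + 2 * lam * g)

# The partisan notes' support is explained away by the viewpoint factors,
# so only the cross-camp note keeps a high intercept.
print(b)  # expect b[0] to be the largest
```

The regularized intercept is what "bridges": a note loved by one camp and panned by the other has its ratings absorbed into the interaction term, so it never clears the helpfulness bar.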
Community Notes proved successful enough on X to inspire Meta, TikTok, and YouTube to pursue similar initiatives; Meta eliminated its third-party fact-checking programs altogether in favor of this low-cost, community-sourced labor.
But it remains to be seen whether AI chatbots will prove useful or harmful as fact-checkers.
These AI notes can be generated using X's Grok or other AI tools connected to X via an API. Any note submitted by an AI will be treated the same as a note submitted by a person, meaning it will go through the same vetting process intended to encourage accuracy.
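X has not published the submission endpoint in this piece, so everything in the sketch below — the URL, field names, and auth scheme — is hypothetical. It only illustrates the shape of "connect a third-party LLM to X via an API": draft a note with any model, package it, and POST it into the same vetting pipeline a human note would enter.

```python
import json
import urllib.request

def build_note_payload(post_id: str, note_text: str, sources: list[str]) -> dict:
    """Package an AI-drafted note for submission.

    All field names here are hypothetical, chosen only for illustration.
    """
    if not note_text.strip():
        raise ValueError("a note must contain text")
    return {
        "post_id": post_id,       # the post the note adds context to
        "text": note_text,        # the drafted note itself
        "sources": sources,       # links supporting the claim
        "author_type": "api_bot"  # submitted by a connected AI tool
    }

def submit_note(payload: dict, api_token: str):
    """POST the note to a hypothetical endpoint (illustration only)."""
    req = urllib.request.Request(
        "https://api.x.com/notes",  # hypothetical URL, not a real endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req)  # would perform the real HTTP call

# Example payload (submission itself is not executed here):
payload = build_note_payload(
    "123",
    "This video appears to be AI-generated.",
    ["https://example.com/analysis"],
)
```

Whatever the real interface looks like, the article's point survives the sketch: the note enters the queue like any other, and human raters still decide whether it ships.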
Relying on AI for fact-checking seems dubious, given how prone AI models are to hallucinating, inventing context that is not grounded in reality.
A paper published this week by researchers working on X's Community Notes recommends that humans and LLMs work in tandem. Human feedback can improve AI note generation through reinforcement learning, with human raters remaining as a final check before notes are published.
“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better,” the paper says. “LLMs and humans can work together in a virtuous loop.”
Even with humans in the loop, there is still a risk of over-relying on AI, especially since users will be able to plug in LLMs from third parties. OpenAI’s ChatGPT, for example, recently experienced issues with a model being overly sycophantic. If an LLM prioritizes “helpfulness” over accurately completing a fact-check, its AI-generated notes may end up flatly inaccurate.
There is also concern that human raters will be overwhelmed by the volume of AI-generated notes, sapping their motivation to do this volunteer work adequately.
Users shouldn’t expect to see AI-written Community Notes yet: X plans to test the AI contributions for a few weeks before rolling them out more broadly if they prove successful.
