OpenAI CEO Sam Altman spoke about the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.
When asked by an attendee how ChatGPT can become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.
The ideal, he said, is a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.”
“This model can reason across your whole context and do it efficiently,” he said, covering “every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read.”
“Your company just does the same thing for all your company’s data,” he added.
Altman may have reason to believe this is ChatGPT’s natural future. In the same discussion, when asked about cool ways young people use ChatGPT, he said: “People in college use it as an operating system.” They upload files, connect data sources, and then run “complex prompts” against that data.
In addition, with ChatGPT’s memory options, which can use previous chats and memorized facts as context, he said one trend he has noticed is that young people “don’t really make life decisions without asking ChatGPT.”
“A gross oversimplification is: older people use ChatGPT as, like, a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”
It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that’s an exciting future to think about.
Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel required for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.
But the scary part? How much should we trust a Big Tech company to know everything about our lives? These are companies that don’t always behave in model ways.
Google, which began life under the motto “don’t be evil,” lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week was discussing a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising that the team had fixed the tweak that caused the problem.
Even the best, most reliable models still just make things up from time to time.
So having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of iffy behavior, that’s also a situation ripe for misuse.
