Artificial intelligence – or rather, the variety based on large language models that fascinates us at the moment – is already in the autumn of its hype cycle, but unlike crypto, it will not retreat to the dark, undignified corners of the internet once its "trending" status fades. Instead, it is settling into a place where its use is already commonplace, even for purposes for which it is frankly ill-suited. Doomerism would have you believe that AI will become so intelligent that it will enslave or extinguish humanity, but the reality is that it is far more threatening as a ubiquitous layer of errors and fabrications seeping into our collective groundwater of knowledge.
The doomerism vs. e/acc debate continues apace, with all the grounded, fact-based arguments on either side that you'd expect from Silicon Valley's famously down-to-earth elites. Key context for any of these influencers: they spend their entire careers predicting the extreme success or failure of whatever technology they're betting on or against, only for that technology to usually turn out neither perfectly nor catastrophically. It happens every time, but if you're looking for examples, self-driving is a convenient recent one, as are VR and the metaverse.
Utopian and dystopian conversations in technology always do what they're actually intended to do, which is distract from real conversations about the real, present-day impact of technology as it's actually being developed and used. AI has undoubtedly had a huge impact, particularly since the introduction of ChatGPT a little over a year ago, but that impact isn't about whether we've unwittingly sown the seeds of a virtual deity; it's how ChatGPT has proven so much more popular, viral, and sticky than its creators thought possible, even as its capabilities merely lived up to their relatively modest expectations.
The use of generative artificial intelligence, according to the latest studies, is quite widespread and increasing, especially among younger users. The top uses aren't novel or fun, per a Salesforce usage study from the past year; instead, it is largely used to automate tasks and communications. With a few rare exceptions, such as when it's used to prepare legal arguments, the consequences of a little AI hallucination in these communications and corporate drudgery are individually insignificant, but collectively they undoubtedly build up a digital layer of easily overlooked factual errors and minor inaccuracies.
This is not to say that humans are particularly good at disseminating information without factual errors. Quite the opposite, in fact, as we’ve seen through the rise of the disinformation economy on social media, particularly in the years leading up to and including the Trump presidency. Even if we set aside malicious agendas and intentional actions, error is simply an integral part of human belief and communication, and thus has always permeated common reservoirs of knowledge.
The difference is that LLM-based AI models do this at scale, continuously, and without self-awareness, and they do so with a sheen of authoritative confidence that users have been primed to trust by years of relatively stable, factual, and reliable Google search results (admittedly, "relatively" is doing a lot of work here). Early on, search results and aggregated online repositories of information were met with a healthy dose of critical skepticism, but years, even decades, of reasonably reliable information from Google search, Wikipedia, and the like have short-circuited our distrust of whatever comes back when we type a query into a text box on the web.
I suspect the effects of ChatGPT and its ilk producing massive amounts of dubiously accurate content for humble everyday communication will be subtle, but worth exploring and possibly mitigating. The first step would be to consider why people feel they can entrust so much to AI in its current state; as with any widespread automation of tasks, the research focus should probably be on the task, not the automation. In any case, the real, consequential changes brought about by AI are already here, and while they're nothing like Skynet, they deserve more study than possibilities rooted in techno-optimist dreams.