2024 has arrived, and with it a renewed interest in artificial intelligence, which looks likely to enjoy at least moderate hype throughout the year. It's cheered, of course, by billionaire technologists and buffoons in their cozy islands of influence, mostly in Silicon Valley, and scoffed at by doomsayers who stand to benefit from painting the still-fanciful artificial general intelligence (AGI) as humanity's boogeyman for the ages.
Both of these positions are exaggerated and unfounded. Speed without care only leads to compounding problems, which proponents often suggest are best solved by applying more speed, possibly in a different direction, to reach some idealized future state where the problems of the past are wiped out by the super-powerful Next Big Thing. Calls to abandon or roll back entire areas of innovation, meanwhile, ignore the complexities of a globalized world where cats generally can't be put back into bags, among many, many other problems with that kind of approach.
The long, exciting and tumultuous history of technology development, particularly in the era of the personal computer and the Internet, has shown us that in our fervor for something new, we often neglect to stop and ask, "but is this new thing something people actually want or need?" We never stopped to ask that question with things like Facebook, and they ended up becoming an integral part of the fabric of society: a highly manipulable but equally essential part of creating and sharing in community dialogue.
Here's the main lesson from the rise of social media to carry with us into the age of artificial intelligence: just because something is easier or more convenient doesn't make it preferable, or even desirable.
LLM-based so-called "AI" has already infiltrated our lives in ways that will likely prove impossible to reverse even if we wanted to, but that doesn't mean we should indulge in the escalation that some see as inevitable, where we relentlessly eliminate humans from the gigs that AI is already good at, or shows promise in, to pave the way for the supposedly inexorable "march of progress."
The oft-repeated counter to fears about increased automation, or the outsourcing of menial work to artificial intelligence agents, is that it will always leave more time for humans to focus on "quality" work. Eliminating the few hours a day spent filling in Excel spreadsheets will finally free the office manager to compose the great novel locked inside them, or allow the in-house graphic designer who retouches the photos to come up with a permanent cure for COVID.
In the end, automating menial work may look good on paper, and it may well serve the top executives and stakeholders of an organization through improved efficiency and reduced costs, but it does not serve the people who might actually enjoy doing this job, or who at least don't mind it as part of an overall mix that balances a working life between more taxing and rewarding creative/strategic exercises and low-intensity daily tasks. And the long-term consequence of having fewer people doing this kind of work is that you'll have fewer people overall who are able to meaningfully participate in the economy, which is ultimately bad even for those at the top of the pyramid who reap the immediate benefits of AI efficiency gains.
Utopian technocracy always fails to recognize that most of humanity (including the technocrats) is sometimes lazy, messy, disorganized, inefficient and error-prone, and mostly content to achieve comfort and avoid boredom or harm. That may not sound terribly ambitious to some, but I say it with celebratory fervor, since to me all of these human qualities are just as praiseworthy as the less attainable ones like drive, ambition, wealth and success.
I am not arguing for stopping, or even slowing down, the development of promising new technology, including LLM-based generative artificial intelligence. And to be clear, where the consequences are clearly beneficial, e.g. medical image diagnostic technology that far exceeds the accuracy of trained human reviewers, or self-driving car technology that can actually drastically reduce the frequency of traffic accidents and the loss of life, there is no compelling argument for moving away from the use of said technology.
But in almost all cases where the benefits are framed as efficiency gains for tasks that are far from life-or-death, I'd argue it's worth a long, hard look at whether we should even bother in the first place. Yes, human time is precious and reclaiming some of it is great, but assuming that's always a net positive ignores the complex nature of being human and how we measure and feel our worth. Handing saved time to someone who no longer feels like they're making a meaningful contribution to society is no blessing, no matter how eloquently you think you can argue that they should use that time to become a violin virtuoso or learn Japanese.