In Silicon Valley, some of the brightest minds believe a universal basic income (UBI) — guaranteed, unrestricted cash payments — will help people survive and thrive as advanced technologies eliminate careers as we know them, from white-collar and creative jobs (lawyers, journalists, artists, software engineers) to labor roles. The idea has gained enough traction that dozens of guaranteed income programs have been launched in U.S. cities since 2020.
Yet even Sam Altman, the CEO of OpenAI and one of the highest-profile proponents of UBI, doesn’t believe that it’s a complete solution. As he said during a sit-down earlier this year, “I think it is a little part of the solution. I think it’s great. I think as [advanced artificial intelligence] participates more and more in the economy, we should distribute wealth and resources much more than we have and that will be important over time. But I don’t think that’s going to solve the problem. I don’t think that’s going to give people meaning, I don’t think it means people are going to entirely stop trying to create and do new things and whatever else. So I would consider it an enabling technology, but not a plan for society.”
The question this raises is what a plan for society should look like, and computer scientist Jaron Lanier, a pioneer of virtual reality, writes in this week’s New Yorker that “data dignity” could be an even bigger part of the solution.
Here’s the basic premise: Right now, we mostly give away our data for free in exchange for free services. Lanier argues that in the age of AI, we need to stop doing this — that the powerful models currently working their way into society need instead to “be connected with the humans” who give them so much to ingest and learn from in the first place.
The idea is for people to “get paid for what they create, even when it is filtered and recombined” into something that’s unrecognizable.
The concept isn’t brand new; Lanier first introduced the notion of data dignity in a 2018 Harvard Business Review piece titled “A Blueprint for a Better Digital Society.”
As he wrote at the time with co-author and economist Glen Weyl, “[R]hetoric from the tech sector suggests a coming wave of underemployment due to artificial intelligence (AI) and automation.” But the predictions of UBI advocates “leave room for only two outcomes,” and they’re extreme, Lanier and Weyl observed. “Either there will be mass poverty despite technological advances, or much wealth will have to be taken under central, national control through a social wealth fund to provide citizens a universal basic income.”
The problem is that both “hyper-concentrate power and undermine or ignore the value of data creators,” they wrote.
Of course, assigning people the right amount of credit for their countless contributions to everything that exists online is not a minor challenge. Lanier acknowledges that even data-dignity researchers can’t agree on how to disentangle everything that AI models have absorbed or how detailed an accounting should be attempted. Still, Lanier thinks that it could be done — gradually.
Alas, even if there is the will, a more immediate challenge — lack of access — will be a lot to overcome. Though OpenAI disclosed some details of its training data in previous years, it has since stopped sharing that information entirely. When OpenAI President Greg Brockman described to TechCrunch last month the training data for OpenAI’s latest and most powerful large language model, GPT-4, he said it derived from a “variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but he declined to offer anything more specific.
Unsurprisingly, regulators are grappling with what to do. OpenAI — whose technology in particular is spreading like wildfire — is already in the crosshairs of a growing number of countries, including the Italian authority, which has blocked the use of its popular ChatGPT chatbot. French, German, Irish, and Canadian data regulators are also investigating how it collects and uses data.
But as Margaret Mitchell, an AI researcher who was formerly Google’s AI ethics co-lead, tells MIT Technology Review, it might be nearly impossible at this point for all these companies to identify individuals’ data and remove it from their models.
As the outlet explains, OpenAI would be better off today if it had built in data record-keeping from the start; instead, it’s standard in the AI industry to build data sets for AI models by scraping the web indiscriminately, then outsourcing some of the clean-up of that data.
If these players have only a limited understanding of what’s now in their models, that poses a daunting challenge to Lanier’s “data dignity” proposal. Whether it renders the proposal impossible, only time will tell.
Certainly, there is merit in determining some way to give people ownership over their work, even if that work is made outwardly “other” by the time a large language model has chewed through it.
It’s also highly likely that frustration over who owns what will grow as more of the world is reshaped by these new tools. Already, OpenAI and others are facing numerous and wide-ranging copyright infringement lawsuits over whether they have the right to scrape the entire internet to feed their algorithms.
Either way, it’s not just about giving credit where it’s due. Recognizing people’s contribution to AI systems may be necessary to preserve humans’ sanity over time, suggests Lanier in his New Yorker piece.
He believes that people need agency, and as he sees it, universal basic income “amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence.”
Meanwhile, ending the “black-box nature of our current AI models” would make an accounting of people’s contributions easier — making people more inclined to stay engaged and keep contributing.
It might all boil down to establishing a new creative class instead of a new dependent class, he writes. And which would you prefer to be a part of?
Source: TechCrunch