Is GPT-3 ethical or not?
The more we look around, the more we see the prevalence and growth of Artificial Intelligence in everyday life. Depending on where you look and how the technology is applied, this can be either a positive, life-enhancing development or a negative, anxiety-inducing phenomenon.
As it is, human beings are engaging with machines and "nonhuman" communication daily, sometimes without even knowing it. More than 67% of consumers worldwide used a chatbot for customer support in the past year, and it's predicted that by 2021, 85% of customer interactions will be handled without human agents. Bots infiltrate social media too: Facebook reported blocking more than three billion fake accounts over a six-month period.
Now imagine that you’re online searching for information and reading blog posts on a variety of issues. For long-form, perfectly crafted articles – ones in which the author even formulates opinions and advice – it’s logical to assume that the author of such content has a rational, creative-thinking human brain.
But what if the author or poster is, in fact, not human?
OpenAI recently released the API for its latest language model, GPT-3, in beta form. With this tool, some developers have begun to show that the platform can generate content from nothing more than commands written in plain English.
The technology is so precise and easy to use that anyone can input an English command like "create a webpage with a polka-dot theme" and GPT-3 will generate the HTML code for it. You can write two or three sentences of an article and GPT-3 will write the rest of it. Or you can hold a conversation in which each answer draws on the context of the previous questions and answers.
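To make the mechanics concrete, here is a minimal sketch of how a developer with beta access might have called the completion endpoint to continue an article. The endpoint URL, engine name, and parameters follow OpenAI's beta-era HTTP API but are assumptions here, not code from OpenAI or Citibeats:

```python
# Hypothetical sketch of a GPT-3 beta completion request; the URL,
# engine name ("davinci"), and parameter names are assumptions based
# on OpenAI's 2020 beta documentation.
import json
import urllib.request

API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_request(prompt: str, max_tokens: int = 200) -> dict:
    """Build the JSON payload for a text-completion request."""
    return {
        "prompt": prompt,          # e.g. the first sentences of an article
        "max_tokens": max_tokens,  # how much text GPT-3 may generate
        "temperature": 0.7,        # some randomness, so completions vary
    }

def complete(prompt: str, api_key: str) -> str:
    """Send the prompt and return GPT-3's continuation (network call)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

The point of the sketch is how little the developer supplies: a short prompt and a token budget, with no task-specific training at all.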
As stated by OpenAI, the purpose of developing this API was to "greatly lower the barrier to producing beneficial AI-powered products, resulting in tools and services that are hard to imagine today." Making AI more accessible and easier to use is a well-intentioned advancement on their part. And while the intended use of this technology is to benefit society, it remains in beta form precisely because its potential power and actual uses raise many questions.
At Citibeats, because we're dealing in the world of AI and groundbreaking technology, we're surrounded by a lot of facts, numbers and data. And while these are all essential components of our mission, we believe it's critical to harness data for development and inclusion, a cross-sectoral urban issue for the next decade and beyond.
We work daily to envision the future of cities, governance and empowered citizens in an ideal future, beyond the facts and figures.
What “utopia” does our collective imagination conjure?
We’re in alignment with some of the most progressive and, in our opinion, most brilliant thinkers and innovators of our time.
While the technology is extremely impressive, one can't help but wonder: is this an ethical use of AI?
As we see it, GPT-3's use can be highly problematic. The main reason is that the immediate interest appears to be the creation of spam of such high quality that people will not notice the difference. Aside from delivering the final knockout blow to half of the content industry, it opens up limitless possibilities for bots, which can tirelessly spam or troll anyone. Unlike a human troll on Facebook, who needs to sleep and take breaks, a trolling bot has no limits.
In the long run, the right to freedom of expression is at risk if human beings must compete with the artificial noise generated by bots capable of writing eight thousand messages per second.
For us, in order for AI to be ethical, it must be transparent and inclusive on every level. GPT-3 doesn't seem to possess these traits. On the flip side, will access to this technology democratize its application and allow more people to come up with solutions that could, in fact, benefit society? We are optimistic that this will be the case, but we are also realists.
The bottom line: this is a highly debatable issue, with many possible roads this API could take. As such, the beta phase will hopefully bring to light which way the scale will tip in its application, for good or for bad.