Artificial intelligence (AI) is the most transformative technology of our time. Companies use it to manage vast amounts of data, and individuals use it to simplify daily routines, but generative AI goes further still. Its ability to create brand-new content has raised questions about ethical applications and the need to regulate what AI can do. Balancing innovation with responsibility has become one of the leading issues for AI stakeholders, even prompting involvement by the Pope.
Key Ethical Issues in AI
If there were any doubt about the transformational power of AI, the record-breaking growth of the chatbot ChatGPT demonstrated the demand for generative AI beyond question. Within five days of its launch, ChatGPT had attracted one million users. Within two months, that number had grown to 100 million, setting a record for the fastest-growing consumer application to date.
That record has since been surpassed by Meta’s Threads, but it remains impressive. The growth of generative AI is not all good news, however. Most groundbreaking technological changes come at a cost, and AI is no different. Its rise has generated a growing list of questions about the ethics behind the content it produces.
Questions raised by both skeptics and users have revolved around bias and fairness, privacy, accountability, and transparency. When it comes to ethics and AI, the controversy begins long before users post results to their blogs and other outlets.
The training of AI chatbots such as ChatGPT has raised questions about copyright and intellectual property. Experts have argued that in their race to dominate the marketplace for AI applications, companies like OpenAI (the maker of ChatGPT), Google, and Meta all cut corners to stay ahead of the competition.
Balancing Innovation with Responsibility
The need to balance AI innovation with responsible business practices has become so urgent that the topic was a major consideration at the recent G7 summit of world leaders in Italy. In a session attended by the G7 leaders and opened to representatives from other countries as well as the head of the Catholic Church, attendees worked toward a “regulatory, ethical and cultural framework” for AI.
Data bias is one of the key issues surrounding AI. Applications are only as powerful as the information used to train them, which is why key players may have ignored copyright laws to improve their datasets. The dispute between authors and other creators on one side and leading tech players on the other has now led the U.S. Copyright Office to review how copyright law applies to generative AI.
Transparency is another concern: few users understand how AI algorithms select the information they present, and the tools rarely divulge their sources. Without this information, it becomes nearly impossible for users to identify false information, allowing misinformation and disinformation to spread.
Potential invasions of privacy have also raised questions, especially where the use of facial recognition technologies blurs the line between security and unwarranted surveillance. Then there are questions of accountability, for example when AI is used to make or supplement medical diagnoses.
Decisions made by autonomous cars are another area where AI-based technologies are raising questions. Whose fault is it when an autonomous car fails to stop at a pedestrian crossing?
Frameworks and Guidelines
As early as 2020, a few years before generative AI became available to the general public, the Harvard Business Review noted that AI was not only helping companies scale their businesses but also generating increased risks. At the time, AI ethics was just moving from an obscure topic discussed by academics to something the world’s biggest tech companies needed to care about.
Today, there is general agreement that AI needs to be regulated if the technology is to be harnessed for the good of humanity. However, no single framework has yet won agreement among key industry players in the U.S., governments, and other stakeholders.
In Europe, the European Parliament adopted its first AI Act earlier this year, with requirements coming into force gradually over the next 24 months. Provisions differ depending on the risk level assigned to an application, with categories including unacceptable and high risk. Transparency requirements for generative AI tools include disclosing when AI was used and designing models in a way that prevents the creation of illegal content.
While these provisions may sound abstract, they will apply to the day-to-day running of countless businesses. Already, companies of all sizes are turning to ChatGPT to generate content for their digital marketing channels, and tools like Stay Social use AI to streamline social media content generation, saving companies time and money. The content these tools generate will need to adhere to any forthcoming regulation, as the sketch below illustrates.
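For a concrete illustration, here is a minimal sketch of what building disclosure into such a content workflow could look like. It assumes the OpenAI Python SDK; the model name, prompt, and disclosure wording are hypothetical choices for illustration, not a prescribed compliance mechanism.

```python
# Minimal sketch: generate a social media post and attach an AI-use disclosure.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; model name and disclosure wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_labeled_post(topic: str) -> str:
    """Generate a short social post and append a transparency label."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You write concise social media posts."},
            {"role": "user", "content": f"Write a short post about: {topic}"},
        ],
    )
    post = response.choices[0].message.content.strip()
    # Disclosure line, in the spirit of the AI Act's transparency rules.
    return f"{post}\n\n[Generated with AI assistance]"


if __name__ == "__main__":
    print(generate_labeled_post("our summer product launch"))
```

Attaching the label in the application layer, rather than relying on individual marketers to remember it, keeps the disclosure consistent and auditable.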
The Role of Stakeholders
Developers, leading AI companies like OpenAI and Google, governments, and users all have a critical role to play in ensuring that AI is developed and used ethically. Beyond resolving copyright disputes over training data, developers need to ensure that AI-based applications cannot be used to create and distribute misinformation.
Governments will need to find ways to harness the economic power of AI without introducing bias or limiting access for certain groups in society. Users and consumers of AI-generated information should be able to see clearly how the information was sourced.
Future Directions
The prevalence of AI in our society will continue to grow as different stakeholders explore its potential. As governments and groups like the Partnership on AI work to create an equitable and empowering version of AI, users will need to hold their providers accountable.
Generative AI tools like ChatGPT are early applications of a technology that is likely to transform our lives even more than the advent of the internet did. Leveraging this power positively will be critical to the ethical development of AI.
Conclusion
Balancing the excitement of new AI developments against the ethical concerns surrounding their use has been a hotly discussed topic for the past few years. With regulatory frameworks emerging, it remains critical to leverage AI ethically for the benefit of all humanity.
