Generative AI: What’s all the noise?
Insights provided by Tintra
Recently, a media storm has erupted over concerns about the potentially harmful consequences of introducing generative AI to the public. This uproar has highlighted the need for awareness not only of the potential for innovation in new technologies, but also for critical insight into the multitude of risks associated with their introduction.
In recent years, the development of generative AI has begun to redefine the possibilities for artificial intelligence, not least in the field of human creativity itself. What was once confined to the research labs of academia and industry is now available to the public and proliferating globally at unprecedented speed. Capable of creating new data or content that has not been explicitly defined by a human expert, generative AI uses the patterns and relationships learned from its training data to generate new data that resembles the source material. Because it is not limited to a fixed set of predefined rules, generative AI can also produce output that is more diverse and creative than the original training data. The rise of generative chatbots such as ChatGPT is a prime example of this rapid transformation.
With over 100 million users in less than two months, ChatGPT has surpassed the growth rates of other popular platforms such as TikTok, which took nine months to reach the same milestone, and Facebook, which took a staggering four and a half years. This impressive growth highlights the immense potential of generative AI and its ability to gain widespread adoption rapidly. Although generative AI has the potential to revolutionise many fields, including art, music, and literature, its recent widespread use has begun to expose a number of significant issues that must be addressed to ensure ethical, responsible, and effective adoption.
Let’s explore:
Algorithmic Bias
One of the most pressing concerns is the introduction of harmful bias into an AI system. Algorithmic bias refers to the tendency of machine learning algorithms to produce results that are systematically prejudiced or discriminatory towards certain groups of people. For example, an algorithm used to screen job applicants may unfairly discriminate against people with certain demographic characteristics, such as race or gender, leading to unfair treatment and perpetuating existing social inequalities. From algorithm design to the composition of the dataset used for training, cultural biases can become systematically encoded, privileging certain groups over others and embedding prejudice at scale. For example, the Gender Shades project (Buolamwini & Gebru, 2018) showed that facial recognition systems performed systematically worse on darker skin tones, and worst of all on darker-skinned female faces. When certain groups are under-represented in the training data used for system development, the effects propagate to a wide range of downstream applications.
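Audits like Gender Shades surface this kind of bias by disaggregating a single headline accuracy figure into per-group accuracy. A minimal sketch of that idea, with purely illustrative function names and toy data (not drawn from any real system):

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute accuracy separately for each demographic group,
    so performance gaps hidden by an overall average become visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy audit: the same model scores 75% on group A but only 50% on group B.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(preds, labels, groups))  # {'A': 0.75, 'B': 0.5}
```

Reporting the gap between the best- and worst-served groups, rather than a single aggregate score, is what exposes the disparity.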
Quality and Diversity of Data
As the dominance of data-driven systems has increased, it is important to remember that data is not neutral. Rather, a dataset's composition, how it is collected and selected, and whose voices it includes all affect both system performance and how equity is modelled in these systems. These models underpin the many technologies that permeate our everyday lives and can not only reproduce but also amplify bias and discrimination. Because large datasets are often necessary to train models, datasets are frequently sourced to prioritise sample size. Greater emphasis is needed on the quality of training data and the values that inform data collection and curation strategies. Similar issues have been observed in healthcare, where many health datasets do not adequately represent different demographic groups.
Publicly available healthcare datasets frequently feature incomplete demographic reporting and are disproportionately collected from a small number of high-income countries. Among skin cancer datasets, for instance, only 2% reported clinically relevant demographic information such as ethnicity and skin tone. A starting point for addressing this problem is to specifically curate diversity into the training data, sourcing data from a variety of perspectives and viewpoints to help mitigate the risk of generating output that is overly influenced by a particular bias.
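One simple, if blunt, curation technique is to rebalance the dataset so that no single demographic group dominates training. A hedged sketch under toy assumptions (field names and data are hypothetical; in practice, targeted collection of under-represented groups is preferable to discarding data):

```python
import random
from collections import defaultdict

def balance_by_group(records, group_key, seed=0):
    """Downsample every group to the size of the smallest group,
    so each group contributes equally to training. Illustrative only."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[group_key]].append(rec)
    smallest = min(len(bucket) for bucket in buckets.values())
    balanced = []
    for bucket in buckets.values():
        balanced.extend(rng.sample(bucket, smallest))
    return balanced

# Hypothetical skin-image dataset: 8 light-skin examples, 2 dark-skin.
data = (
    [{"skin_tone": "light", "img": i} for i in range(8)]
    + [{"skin_tone": "dark", "img": i} for i in range(2)]
)
balanced = balance_by_group(data, "skin_tone")
print(len(balanced))  # 4 records: two from each group
```

Downsampling trades away data volume for balance; the broader point in the text stands, that sample size alone is not a proxy for dataset quality.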
Plagiarism and Identity Fraud
Another major concern with generative AI is its potential use for plagiarism and, more worryingly, identity fraud. The use of generative AI in this manner has become commonly known as a deep fake. Deep fakes can be used to manipulate video footage to create a false identity that is indistinguishable from the real person. This could lead to serious consequences in a variety of contexts, such as political propaganda, online harassment, or even financial fraud. In some cases, deep fakes could be used to fabricate evidence in legal proceedings or to harm someone's reputation by falsely attributing statements or actions to them. Several methods have begun to be adopted for combating identity fraud carried out through deep fakes. One is investment in advanced deep fake detection tools, which can analyse ID documents and videos submitted by customers and determine whether they are authentic. However, relying on deep fake detection alone is problematic in that it creates an arms race between the fraudulent actors developing deep fake technologies and the institutions trying to combat them.
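Because detection alone invites an arms race, institutions typically layer a detector's score with independent checks such as liveness tests and document verification. A toy decision rule illustrating that layered approach (all names, thresholds, and check types here are hypothetical, not a description of any real vendor's pipeline):

```python
def review_submission(deepfake_score, liveness_passed, doc_checks_passed,
                      threshold=0.5):
    """Layered verification sketch: a high deepfake score rejects outright,
    but a low score is never trusted on its own; independent liveness and
    document checks must also pass before acceptance."""
    if deepfake_score >= threshold:
        return "reject"
    if not (liveness_passed and doc_checks_passed):
        return "manual_review"
    return "accept"

print(review_submission(0.8, True, True))   # reject
print(review_submission(0.2, False, True))  # manual_review
print(review_submission(0.1, True, True))   # accept
```

The design point is that no single signal, least of all the detector that fraudsters are actively optimising against, acts as the sole gatekeeper.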
Conclusion
As generative AI continues to evolve, it is crucial that we address potential concerns to ensure its ethical, responsible, and effective use. This new technology offers immense potential to many industries, but it is only through careful ethical and cultural consideration that we can fully realise its benefits and avoid unintended harm.
To ensure that generative AI is developed and used equitably for all, ethical frameworks must be followed throughout the entire research and development process, from designing the technology with the involvement of relevant stakeholders, to selecting, labelling, and structuring datasets. In addition, constant monitoring is necessary to identify and address biases both in the design process and in the outputs of the system. By adopting these measures, we can help ensure that the promises of generative AI are realised in a way that advances human well-being and social equity, rather than risking further harm to them.