The role of humans in AI ethics

Before considering how the responses of AI models are constructed, it is important to remember that LLMs lack the internal thought or reasoning that shapes human responses. They generate text simply based on patterns in language, learned by examining vast amounts of text during training. However, which responses a model actually gives to a user is largely determined by the human feedback it received during the fine-tuning phase, and that feedback is a product of the values and aims of the AI model developer. OpenAI, the developer of ChatGPT, stated that it had three guiding principles for the fine-tuning of its models: responses should be helpful, truthful, and harmless (OpenAI, 2022).

However, journalists have found that other developers, such as Microsoft, have prioritized values like persuasiveness over truthfulness (New York Times, 2023).

Therefore, it is important to remember that AI-generated content is rooted in human choices, values, and flaws. By understanding how responses are generated, we hope you can begin to see that AI models are not all-knowing machines that can solve any problem given to them, but tools that can and do fail from time to time.


‘A helpful, truthful, and harmless generative AI’, Microsoft Designer.