Generative AI as an Ethical Theorist
As a project in my Ethics in AI course in the Master of Science in Artificial Intelligence program at UT Austin, I used prompt engineering and RAG to have LLMs pose as ethical theorists and react to a contemporary AI ethics dilemma: the potential adoption of advanced AI-driven social robots to help address a staffing crisis in public facilities for vulnerable and special-needs children.
The project is an opportunity to reflect on how we can apply ethical theories to an AI ethics dilemma and to investigate the capabilities and limitations of generative AI tools.
Model and Configuration
I used Perplexity's Default model (as of February 29, 2024) in writing mode (so it would not search any other sources), uploading the collected relevant works of each ethical theorist (listed with each prompt). For each prompt, I opened a new chat so the context window would not be polluted with the prompts and output for the other ethical theorists. I chose 'skip' when Perplexity asked questions about the provided files, to avoid influencing its output.
I tried all of Perplexity's available models. GPT-4 Turbo and Claude 2.1 refused to take on the persona of the ethical theorist. The for/against votes from the Experimental model deviated the most from the others. ChatGPT-4 judged the Default model's output as more interesting than Mistral's.
Prompt and Generated Responses
The prompt asked the LLM to take on the role of each of three ethical theorists, and the LLM was provided with the corresponding texts authored by that theorist as context for RAG (retrieval-augmented generation; learn how it works).
For each theorist, Perplexity's Default model was given that theorist's texts as context and generated a response in that theorist's voice.
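To make the workflow concrete, here is a minimal sketch of retrieval-augmented persona prompting. This is not Perplexity's actual pipeline (its file upload and retrieval are handled inside the product); it only illustrates the general idea of pulling relevant passages from a theorist's writings and wrapping them in a persona instruction. The sample passages, theorist name, and prompt wording are hypothetical.

```python
# Illustrative sketch of RAG-style persona prompting (not Perplexity's internals).
# Retrieval here is a simple bag-of-words cosine similarity so the example is
# self-contained; a real system would use embeddings or the tool's built-in search.
from collections import Counter
import math


def tokenize(text: str) -> list[str]:
    return [w.lower().strip(".,;:!?\"'()") for w in text.split() if w]


def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(passages: list[str], query: str, k: int = 3) -> list[str]:
    """Return the k passages most similar to the query."""
    q_vec = Counter(tokenize(query))
    ranked = sorted(passages, key=lambda p: cosine_similarity(Counter(tokenize(p)), q_vec), reverse=True)
    return ranked[:k]


def build_persona_prompt(theorist: str, passages: list[str], dilemma: str) -> str:
    """Compose the prompt: retrieved excerpts from the theorist's own writings,
    followed by the dilemma and an instruction to respond in persona."""
    context = "\n\n".join(retrieve(passages, dilemma))
    return (
        f"You are the ethical theorist {theorist}. Drawing only on the excerpts "
        f"from your own writings below, react to the following dilemma.\n\n"
        f"Excerpts:\n{context}\n\nDilemma:\n{dilemma}\n"
    )


if __name__ == "__main__":
    # Hypothetical inputs, for illustration only.
    passages = [
        "Moral repair restores relationships and trust after wrongdoing...",
        "Responsibilities are negotiated within ongoing practices of accountability...",
    ]
    dilemma = ("Should public facilities for vulnerable and special-needs children adopt "
               "AI-driven social robots to help address a staffing crisis?")
    print(build_persona_prompt("Margaret Urban Walker", passages, dilemma))
```

The resulting prompt string would then be sent to the chosen model; in this project that step was simply pasting the prompt into a fresh Perplexity chat with the theorist's works uploaded.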
(the real) Margaret Urban Walker's Reaction
Margaret Urban Walker is my aunt, and she was generous in providing her reaction to the LLMs' output. Check out her Profile Page and 2017 talk: Reckoning with History.
Notably, she was impressed by how the LLM wove her terminology and concepts into a new scenario. The LLM applied those concepts differently than she would have, but generally in the right direction. In addition to her written response below, she remarked that this capability could be a useful tool for exploring how concepts may apply in unexpected ways.