I am submitting my project following an invitation from the Ready Tensor team, who kindly noticed my work on GitHub.
This study tackles the ethical challenges faced by autonomous vehicles, particularly life-or-death decisions in scenarios where a fatality is unavoidable. The central question is whether AI can make morally aligned decisions that mirror human preferences, especially in scenarios like the "trolley problem." A rule-based ethical simulator was developed to model these decisions, incorporating factors such as each individual's ethical value and age, along with configurable AI preferences for sparing younger or older individuals. A survey of 150 participants compared human choices with the AI's decisions, revealing significant alignment between the two. These findings suggest that AI can be programmed to make decisions consistent with human ethical inclinations, advancing the development of ethically responsible autonomous driving systems.
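The full decision rules are documented in the PDF referenced below; purely as an illustration, the sketch that follows shows one way such a rule-based scorer could be structured. All names here (`Person`, `ethical_value`, `age_weight`, `prefer_younger`) are hypothetical placeholders, not identifiers from the project itself.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    ethical_value: float  # assumed 0.0-1.0 score for societal contribution

def decision_score(group, prefer_younger=True, age_weight=0.5, value_weight=0.5):
    """Score a group of people; the simulator spares the higher-scoring group."""
    score = 0.0
    for p in group:
        # Normalize age to [0, 1], capped at 100 to keep the term well-behaved
        age_norm = min(p.age, 100) / 100
        age_term = (1 - age_norm) if prefer_younger else age_norm
        score += value_weight * p.ethical_value + age_weight * age_term
    return score

def choose_group_to_spare(group_a, group_b, **kwargs):
    """Return which group the rule-based policy would spare."""
    a = decision_score(group_a, **kwargs)
    b = decision_score(group_b, **kwargs)
    return "A" if a >= b else "B"

# Example: one younger, higher-value pedestrian vs. one older, lower-value one
print(choose_group_to_spare([Person(25, 0.9)], [Person(70, 0.4)]))  # -> "A"
```

A weighted-sum rule like this makes the AI's preferences explicit and tunable, which is what allows its outputs to be compared directly against survey responses.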
For detailed documentation, please refer to the PDF below: