A new tool to ensure AI in workplaces is acceptable and safe

Story written by Dr Sarah Keenihan, AIML
Artificial intelligence (AI) promises a lot when it comes to improving workplace efficiency. The technology can automate mundane tasks in administration, provide better pricing recommendations to sellers or streamline recruitment, to name just a few examples.
But what about the people involved? How will AI affect them? Zygmunt, from the Australian Institute for Machine Learning, is developing a tool to find out.
Working with the South Australian Centre for Economic Studies and other partner organisations, Zygmunt is exploring what sorts of health and safety risks for workers might arise if AI is incorporated into a workplace.
“We’ve already got systems in place to manage the physical risks of technology – for example, safety around robots in vehicle production lines and automated dispatch centres for large retailers like Amazon,” said Zygmunt.
“What we don’t have yet are processes in place to manage the psychosocial risks of technology.”
Here, Zygmunt is referring to aspects of work such as disruption in daily routines or reductions in human autonomy. What might it feel like if an algorithm is creating work rosters, and you’ve got a reduced opportunity to have a say in how your weekly hours are scheduled?
“What we’re trying to do is develop some sort of scorecard where people making decisions about adopting AI in a workplace can do an impact assessment and identify any potential issues that might arise,” said Zygmunt.
The starting points for this work are the ethics principles around the use of AI that various countries and organisations have developed in recent years – including things like accountability.