If you are using AI in your city, and you probably should be, are you sure it doesn’t have a bias you don’t know about? It may be time to call the Algorithm Police.
The New York City Council has introduced a bill to create a task force to address instances where algorithms – or “automated decision systems used by agencies” – may have harmed people unfairly. From TechCrunch, here are some of the questions the task force would seek to answer:
“How can people know whether or not they or their circumstances are being assessed algorithmically, and how should they be informed as to that process?

Does a given system disproportionately impact certain groups, such as the elderly, immigrants, the disabled, minorities, etc.? If so, what should be done on behalf of an affected group?

How does a given system function, both in terms of its technical details and in how the city applies it?

How should these systems and their training data be documented and archived?”