Public conversation about AI and society focuses on ethics: bias in sentencing and hiring, responsibility when self-driving cars injure someone, worker displacement and income disparity, and so on. While these are important conversations, we believe they are not the most fruitful starting place for the development of sustainable AI. Framing the conversation in terms of ethics suggests that there are set principles that can be enforced; most developers, when they think of ethics, turn almost inevitably to an ethics board that reactively addresses issues arising at the deployment stage of new AI products and tools. Existing practices developed around ethics and AI too easily get reduced to issues of compliance and litigation. In fact, the majority of large tech companies locate their ethics team in the legal department or delegate ethics decisions to an ethics board.
A trust approach to AI sets as its goal transforming the culture of an organization, from data scientists to marketing to accounting. Trust takes a whole-lifecycle approach, developing dispositions and practices that cultivate trust at every stage of the AI development process, from problem definition to deployment and maintenance. Rather than delegating responsible design to an ethics board, a trust approach to AI involves the whole community in the task of developing an ecosystem of trust. This ecosystem will build trust with all stakeholders: employees, customers, investors, and the wider community that may be affected by the AI. A trust approach is thus good for society and for the bottom line.
For these reasons, we propose the ai.iliff TRUST framework for AI development. The ai.iliff TRUST framework begins with four major assumptions:
Transparency is core to developing AI systems that will cultivate trust. TRUST strives to clearly articulate the Who, How, and Why at every stage of project development.
Developing AI has major impacts on society, both direct and indirect. TRUST considers both the intended and unintended consequences of AI for specific communities and for society at large in order to build a future based on our values, not simply on past data.
Taking a strong lead from the UX design movement, TRUST looks for ways to involve users in the iterative design and deployment process. AI systems are fundamentally built on user data, so TRUST demands that we treat users as major stakeholders.
With the rapid increase in the development and implementation of AI across all domains, it is imperative to develop practices that ensure a sustainable future for all parties in the AI ecosystem. TRUST requires that we build enough agility into our processes and products to respond to changing technological and social landscapes, and that we consider the environmental impact of AI systems.