Trust & AI

Public conversation about AI and society focuses on ethics: bias in sentencing and hiring, responsibility when self-driving cars injure someone, worker displacement and income disparity, and so on. While these are important conversations, we believe they are not the most fruitful starting place for the development of sustainable AI. Framing the conversation in terms of ethics suggests that there are set principles that can be enforced; most developers, when they think of ethics, turn almost inevitably to an ethics board that reactively addresses issues arising at the deployment stage of new AI products and tools. Existing practices developed around ethics and AI too easily get reduced to issues of compliance and litigation. In fact, the majority of large tech companies locate their ethics team in the legal department or even proxy ethics decisions to an ethics board.

A trust approach to AI sets as its goal transforming the culture of an organization, from data scientists to marketing to accounting. Trust takes a whole-lifecycle approach, developing dispositions and practices that cultivate trust at every stage of the AI development process, from problem definition to deployment and maintenance. Rather than proxying responsible design to an ethics board, a trust approach to AI involves the whole community in the task of developing an ecosystem of trust. This ecosystem builds trust with all stakeholders: employees, customers, investors, and the wider community that may be affected by the AI. A trust approach is thus good for society and for the bottom line.

For these reasons, we propose the ai.iliff TRUST framework for AI development. The ai.iliff TRUST framework begins with three major assumptions:

  1. TRUST is iterative (a continuously negotiated network of relationships among all systems involved),
  2. TRUST is holistic (resists reduction to compliance and instead embraces an ecosystem approach), and
  3. TRUST requires a diverse, cross-functional team embedded in the whole development lifecycle.

TRUST nodes

Transparent

Transparency is core to developing AI systems that will cultivate trust. TRUST strives to clearly articulate the Who, How, and Why at every stage of project development.

  • Do all parties know how and when their data is being used?
  • Is the presence/activity of an AI system indicated to the user or hidden?
  • Datasheets: are your data collection and preparation practices for each dataset made available? (See the datasheet sketch after this list.)
  • Is your model interrogatable? Can you translate the model’s decision-making into human terms, moving from the explainability/interpretability debate to transparency through translation? (See the second sketch below.)
  • Are you clear about your bias management procedures?
  • Are your assumptions articulated at each stage of the process (e.g., irrelevant features in the dataset, proxies used)?
  • Are the intended purposes (and known limitations) of the model stated clearly to all parties?
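
As a concrete starting point for the datasheet question above, here is a minimal sketch of a machine-readable datasheet in Python, loosely modeled on the "Datasheets for Datasets" proposal; the fields and the loan-application example values are illustrative assumptions, not a prescribed schema.

    # A minimal machine-readable datasheet (illustrative fields only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Datasheet:
        name: str
        collection_method: str        # how the data was gathered
        time_period: str              # when the data was collected
        preprocessing: List[str]      # cleaning/preparation steps applied
        known_limitations: List[str]  # gaps, skews, proxies
        intended_uses: List[str]      # what the dataset is meant for
        prohibited_uses: List[str] = field(default_factory=list)

    # Hypothetical example values for illustration.
    sheet = Datasheet(
        name="loan_applications_2015_2020",
        collection_method="export from internal loan-origination system",
        time_period="2015-2020",
        preprocessing=["dropped rows with missing income", "bucketed zip codes"],
        known_limitations=["under-represents applicants under 25"],
        intended_uses=["credit-risk model training"],
        prohibited_uses=["marketing segmentation"],
    )
    print(sheet)

Publishing an artifact like this alongside each dataset gives every party a shared, inspectable answer to the Who, How, and Why of the data.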

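One way to make a model interrogatable in the sense above is to translate its behavior into feature-level terms. The sketch below uses scikit-learn's permutation importance on a toy classifier; the dataset and model are stand-ins for whatever system you are auditing.

    # Translate a model's decision-making into human terms: how much does
    # performance drop when each feature is shuffled?
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # A large score drop when a feature is shuffled means the model
    # leans heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {imp:.3f}")
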
Responsible

Developing AI has major impacts on society, both direct and indirect. TRUST considers both the intended and unintended consequences of AI for specific communities and for society at large in order to build a future based on our values, not simply on past data.

  • In what ways might the application contribute to or detract from the social good?
  • Is this AI system building a future based on our current values rather than simply on our past data (guarding against historical bias)? (See the sketch after this list.)
  • Do you have a process for anticipating unintended consequences?
  • How do you respond when your model/system does harm?
  • Is any governance needed for your system?
  • Is privacy protected at all stages of the lifecycle?
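
To make the historical-bias question actionable, one simple diagnostic is to compare outcome rates across groups in the training data before any model is fit. A minimal sketch with pandas; the group and approved columns are hypothetical.

    # Surface historical bias before training: compare base rates across groups.
    import pandas as pd

    # Hypothetical training data; in practice, load your own dataset.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   0,   0,   1,   0],
    })

    # Positive-outcome rate per group; a large gap suggests the past data
    # encodes decisions we may not want a model to reproduce.
    rates = df.groupby("group")["approved"].mean()
    print(rates)
    print(f"max gap between groups: {rates.max() - rates.min():.2f}")

A gap like this is not proof of unfairness on its own, but it is exactly the kind of signal a process for anticipating unintended consequences should flag for review.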

User-driven

Taking a strong lead from the UX design movement, TRUST looks for ways to involve users in the iterative design and deployment process. AI systems are fundamentally built on user data, so TRUST demands that we treat users as major stakeholders.

  • Are you moving beyond imagined “persona” work toward actual stakeholder input? 
  • Are users involved in design in impactful ways?
  • What are the user feedback loops?
  • Can users decide how and when their data is used?
  • Can users request that their data be extracted from the model? (See the sketch after this list.)
  • How does your application support data citizenship?
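
One way to operationalize the consent and extraction questions above is to filter training data down to users with active consent and to record deletion requests so the next retraining run honors them. The store and field names below are illustrative assumptions; note that fully removing a user's influence from an already-trained model (machine unlearning) requires more than this filtering step.

    # Honor user consent and deletion requests at the data-preparation stage.
    import pandas as pd

    # Hypothetical user records with per-user consent flags.
    users = pd.DataFrame({
        "user_id": [1, 2, 3, 4],
        "consented_to_training": [True, True, False, True],
    })

    deletion_requests = {4}  # users who asked for their data to be removed

    def training_rows(users: pd.DataFrame) -> pd.DataFrame:
        """Keep only users who consented and have not requested deletion."""
        keep = users["consented_to_training"] & ~users["user_id"].isin(deletion_requests)
        return users[keep]

    print(training_rows(users))  # this subset feeds the next retraining run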

Sustainable

With the rapid increase in the development and implementation of AI across all domains, it is imperative to develop practices that ensure a sustainable future for all parties in the AI ecosystem. TRUST requires that we build enough agility into our processes and products to respond to changing technological and social landscapes, and that we consider the environmental impact of AI systems.

  • Is the system agile enough to respond to new legislation or social demands (e.g., GDPR)?
  • Do you have a maintenance plan to address model drift? (See the sketch after this list.)
  • Is the system agile enough to adapt to new technologies (e.g., TPUs)?
  • Do you have the workforce needed to support development? If not, how are you tooling up?
  • What is the environmental impact (energy use, etc.)?
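
To turn the drift question into a concrete maintenance routine, one lightweight check is the population stability index (PSI) between a feature's distribution at training time and its distribution in production. A minimal sketch in NumPy; the 0.2 alert threshold is a widely used rule of thumb, not a fixed standard.

    # Lightweight drift check: population stability index (PSI) between the
    # training-time and production distributions of a single feature.
    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """PSI between two samples of one feature; higher means more drift."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        # Clip live values into the training range so every value lands in a bin.
        actual = np.clip(actual, edges[0], edges[-1])
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in empty bins
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    rng = np.random.default_rng(0)
    train_ages = rng.normal(40, 10, 5000)  # distribution seen at training time
    live_ages = rng.normal(45, 12, 5000)   # distribution seen in production

    score = psi(train_ages, live_ages)
    print(f"PSI = {score:.3f}")
    if score > 0.2:  # common rule-of-thumb alert threshold
        print("Significant drift: trigger the maintenance/retraining review.")

Scheduling a check like this against production data is one concrete form the maintenance plan can take.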