Iliff Artificial Intelligence Institute

Building a sustainable future for humans and machines

TRUST Framework

A trust approach to AI aims to transform the culture of an organization, from data scientists to marketing to accounting. It takes a whole-lifecycle view, developing dispositions and practices that cultivate trust at every stage of AI development, from problem definition to deployment and maintenance. Rather than delegating responsible design to an ethics board, a trust approach involves the whole community in building an ecosystem of trust.

Learning Partner

The ai.iliff learning partner aims to make humanities education more widely available, more efficient, more diverse, and more individualized than it has been to date. The learning partner dynamically engages students, continually evaluating which concepts have been learned and nudging students toward concepts and perspectives that have not yet been raised in the discussion. It will eventually be made available in learning management systems such as Canvas.
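
A minimal sketch of the concept-coverage idea described above, in Python. The concept list, keyword matching, and function names are illustrative assumptions for the sake of the example, not the learning partner's actual implementation.

```python
# Hypothetical sketch: track which target concepts have appeared in a
# discussion and pick an unraised one to nudge the student toward.
# All concept names and keywords below are illustrative assumptions.

TARGET_CONCEPTS = {
    "hermeneutics": ["interpretation", "hermeneutic", "reading"],
    "embodiment": ["body", "embodied", "embodiment"],
    "otherness": ["other", "alterity", "difference"],
}

def concepts_raised(discussion_text: str) -> set[str]:
    """Return the target concepts whose keywords appear in the discussion."""
    text = discussion_text.lower()
    return {
        concept
        for concept, keywords in TARGET_CONCEPTS.items()
        if any(keyword in text for keyword in keywords)
    }

def next_nudge(discussion_text: str) -> str | None:
    """Suggest a concept that has not yet been raised, if any remain."""
    remaining = set(TARGET_CONCEPTS) - concepts_raised(discussion_text)
    return sorted(remaining)[0] if remaining else None

if __name__ == "__main__":
    posts = "We discussed how interpretation shapes the reading of a text."
    print(concepts_raised(posts))   # {'hermeneutics'}
    print(next_nudge(posts))        # 'embodiment'
```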

Bias Management

Bias in artificial intelligence is connected to bias in humans. How can our partnership with machines make us more aware of the biases operative in society and help us select biases that reflect our desired future rather than replicate our past behavior? We are developing a set of practices that make bias more visible at every stage of the AI development lifecycle so that we can make values-based decisions about adjusting and managing that bias.
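
As one hedged illustration of what making bias visible can look like in practice (not ai.iliff's actual tooling), the sketch below compares positive-outcome rates across groups in a dataset; the column names, data, and disparity measure are all hypothetical.

```python
# Hypothetical illustration of "making bias visible": compare positive-outcome
# rates across groups and report a simple disparity ratio for review.

import pandas as pd

def positive_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes for each group in the data."""
    return df.groupby(group_col)[outcome_col].mean()

def disparity_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate (1.0 means parity)."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    data = pd.DataFrame({
        "group":   ["A", "A", "A", "B", "B", "B"],
        "outcome": [1,   1,   0,   1,   0,   0],
    })
    rates = positive_rates_by_group(data, "group", "outcome")
    print(rates)                   # A: ~0.67, B: ~0.33
    print(disparity_ratio(rates))  # 0.5, flagging a disparity worth reviewing
```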

Educational Opportunities

Much of the hype and fear around the emergence of AI across so many areas of society can be addressed through education. We design and deliver custom learning opportunities in areas related to AI and Society, Data Citizenship, and the Human-AI Interface, offered online, in hybrid formats, or on site.

Core Team

Alires J. Almon, M.A.

Partner Director

Alires has a deep passion for the opportunities created by advanced science and technology. She believes it is important that the social and cultural impacts of technology receive the same attention as the technical advancements themselves. Along with her work at ai.iliff, Alires is the Director of Innovation for the Mental Health Center of Denver and founder of Deep Space Predictive Research Group, LLC.

Michael P. Hemenway, Ph.D.

Partner Director

For two decades, Michael has been helping organizations build more sustainable partnerships with technologies. Whether in systems integration, database administration, solutions architecture, or instructional design, Michael enjoys the complex sociotechnical dynamics at play as we learn to work with people and machines in more collaborative ways. Michael’s primary research interests include transparent computing, the human-computer interface, and what humans can learn from how machines think.

Justin O. Barber, Ph.D.

AI Engineer

Justin began researching artificial intelligence and machine learning while he was completing a doctorate in the humanities. His areas of research include autonomous learning systems, language generation, natural language processing, and deep learning. He is particularly interested in how technology continues to transform the way human beings live, learn, and understand the world.

Theodore Vial, Ph.D.

Senior Researcher

Ted began using machine learning in his research a couple of years ago. He fell in love with the questions AI raises about what it means to be human, and he is eager to bring the resources of philosophy and theology to the development of technology in ways that improve the technology and build a future in which humans and machines flourish together.

Get in touch