Our tech series events bring together AI industry experts, practitioners, and academics to discuss and develop trust-based practices that will shape our emerging AI ecosystem. Whether you work in education, health care, technology, insurance, or space exploration, these gatherings can speak to your AI strategies and workflows.
Leveraging years of research on machine learning, the ai.iliff team is developing tools and practices to bring trust-based AI to the education space. We are building an AI tutor that will increase the efficiency and efficacy of study in the humanities by tailoring the educational experience to each student's cultural, social, and economic background as well as their strengths and weaknesses in a subject. The AI tutor will also diversify the perspectives offered in online courses by drawing from a wider array of voices than generally appear within the boundaries of a single course.
Our team offers workshops and curricula in bias management for machine learning workflows. Building on the agile ethics framework developed by integrate.ai and exploring tools such as LIME and IBM's AI Fairness 360 Toolkit, we provide trust-based practices for each phase of the machine learning life cycle, from problem definition through deployment and maintenance. For a hands-on, case-study-based approach to developing capacities in bias management for machine learning, contact us for more information.
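Bias-management workflows like these often start with simple group-fairness metrics. As a minimal sketch, using a hypothetical set of binary model decisions, the following computes the disparate impact ratio, one of the metrics that toolkits such as AI Fairness 360 report (via its `BinaryLabelDatasetMetric` class); the data, function names, and group labels here are illustrative, not part of any toolkit's API:

```python
# Sketch of a group-fairness check of the kind AI Fairness 360 automates.
# All data below is hypothetical; a real audit would use model outputs.
from typing import List, Tuple

def selection_rate(outcomes: List[Tuple[str, int]], group: str) -> float:
    """Fraction of favorable outcomes (1) received by members of `group`."""
    labels = [y for g, y in outcomes if g == group]
    return sum(labels) / len(labels)

def disparate_impact(outcomes: List[Tuple[str, int]],
                     unprivileged: str, privileged: str) -> float:
    """Ratio of the unprivileged group's selection rate to the privileged
    group's; values below ~0.8 (the "four-fifths rule") often flag bias."""
    return (selection_rate(outcomes, unprivileged)
            / selection_rate(outcomes, privileged))

# Hypothetical model decisions: (group label, favorable outcome?)
decisions = [("a", 1), ("a", 0), ("a", 0), ("a", 1),
             ("b", 1), ("b", 1), ("b", 0), ("b", 1)]

print(disparate_impact(decisions, unprivileged="a", privileged="b"))
# → 0.666..., below the 0.8 threshold, so this model would merit review
```

Checks like this belong early in the life cycle (during data exploration and model evaluation), long before deployment, which is why our workshops walk through every phase rather than treating fairness as a final audit.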