Explainable AI
Explanations go a long way toward building trust in AI systems. We're creating tools to help debug AI — systems that can explain what they're doing. This includes training highly optimized, directly interpretable models, as well as producing explanations of black-box models and visualizations of information flows in neural networks.
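One common family of black-box explanation methods is post-hoc feature attribution. Below is a minimal, self-contained sketch of permutation importance — not any specific IBM tool, just an illustration of the idea: query the model as an opaque function, shuffle one input feature at a time, and measure how much the prediction error grows. The `black_box` function and the dataset here are hypothetical stand-ins.

```python
import random

# Hypothetical "black-box" model: we can only query predictions, not inspect it.
# Feature 0 matters most, feature 1 a little, feature 2 not at all.
def black_box(x):
    return 3.0 * x[0] - 0.5 * x[1] + 0.0 * x[2]

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean increase in squared error when each feature column is shuffled."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    base = mse([model(x) for x in X])  # error with intact inputs
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the link between feature j and the target
            X_perm = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
            drops.append(mse([model(x) for x in X_perm]) - base)
        importances.append(sum(drops) / n_repeats)
    return importances

# Small synthetic dataset queried through the black box.
rng = random.Random(1)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]
imp = permutation_importance(black_box, X, y)
```

Shuffling the irrelevant third feature leaves the error unchanged, so its importance is zero, while the heavily weighted first feature scores highest — the ranking explains the model's behavior without ever opening it up.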
Our work
Teaching AI models to improve themselves
IBM and RPI researchers demystify in-context learning in large language models
- AI
- AI Transparency
- Explainable AI
- Trustworthy AI
The latest AI safety method is a throwback to our maritime past
- AI
- AI Transparency
- Explainable AI
- Fairness, Accountability, Transparency
- Generative AI
Find and fix IT glitches before they crash the system
- AI for Code
- AI for IT
- Explainable AI
- Foundation Models
- Generative AI
What is retrieval-augmented generation?
- AI
- Explainable AI
- Generative AI
- Natural Language Processing
- Trustworthy Generation
Did an AI write that? If so, which one? Introducing the new field of AI forensics
- See more of our work on Explainable AI
Publications
- Daiki Kimura, Naomi Simumba, et al. (2024). AGU 2024.
- Radu Marinescu, Junkyu Lee, et al. (2024). NeurIPS 2024.
- Kumudu Geethan Karunaratne, Michael Hersche, et al. (2024). NeurIPS 2024.
- Lucas Monteiro Paes, Dennis Wei, et al. (2024). NeurIPS 2024.
- Somin Wadhwa, Oktie Hassanzadeh, et al. (2024). ISWC 2024.
- Lior Limonad, Fabiana Fournier, et al. (2024). ECAI 2024.
Related topics
- Causality
- Fairness, Accountability, Transparency
- Healthcare and Life Sciences
- AI for Business Automation