Responsible AI: why is that term no longer prevalent?
Back in the late 2010s, we data scientists in the enterprise were focused on responsible AI for traditional ML. Neural nets were the bane of our existence. So why are we now all-in on deep learning and LLMs?
I remember this period as an enterprise data scientist: we focused on responsible AI, and the traceability of an AI/ML model took precedence over the raw performance of black-box models such as neural nets and deep learning models. We favoured decision trees over deep learning models precisely because we could trace how a model arrived at its outputs.
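To make that traceability point concrete, here is a minimal sketch (assuming scikit-learn and a toy dataset; the feature names are illustrative) of how a decision tree's reasoning can be printed as explicit threshold rules, and how the exact nodes a single prediction passed through can be recovered:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small, fully inspectable tree on a toy dataset.
X, y = load_iris(return_X_y=True)
feature_names = ["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every prediction is a readable chain of threshold checks:
# this is the traceability that black-box models lack.
print(export_text(tree, feature_names=feature_names))

# Trace exactly which nodes one sample visited on its way to a prediction.
sample = X[:1]
node_path = tree.decision_path(sample)
print("nodes visited:", node_path.indices.tolist())
```

No comparable printout exists for a deep network, where a prediction is the product of millions of opaque weights rather than a handful of auditable splits.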
Fast forward to 2025, and we are in a mess: deep learning models and LLMs are now deployed at many enterprises, and Responsible AI is scarcely talked about. We would rather distract ourselves by labelling misaligned LLM outputs as 'hallucinations' than focus on the critical work of debugging and controlling these LLMs and multi-agent AI systems. But how do you begin to control and oversee these systems when the underlying deep learning models are black boxes to begin with?
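One modest place to start, sketched below under the assumption that your LLM is reachable through some prompt-in, text-out function (the call_model argument here is a hypothetical stand-in for whatever client you actually use), is to restore a basic audit trail: log every prompt and response so that a misaligned output can at least be traced after the fact, even if the model itself cannot be opened up:

```python
import json
import time
from typing import Callable

def audited(call_model: Callable[[str], str],
            log_path: str = "llm_audit.jsonl") -> Callable[[str], str]:
    """Wrap any prompt->response function with an append-only audit log."""
    def wrapper(prompt: str) -> str:
        response = call_model(prompt)
        record = {
            "timestamp": time.time(),
            "prompt": prompt,
            "response": response,
        }
        # Append-only JSONL: every call leaves a traceable record.
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return response
    return wrapper

# Usage with a stand-in model; swap in your real client call.
echo_model = audited(lambda p: f"(model reply to: {p})")
print(echo_model("Summarise our Q3 risk report."))
```

This does not make the model interpretable, but it is the kind of unglamorous traceability work that Responsible AI used to demand before anything went to production.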
My view is that we need to return our focus to Responsible AI; doing so would stop us jumping in too fast with deep learning models we don't yet know how to govern. Let's make Responsible AI, and with it AI traceability and AI safety, our priority again.