Critical Path Considerations for Deploying AI/ML Solutions in Production
Sharing and publishing the outputs of AI/ML models at the Enterprise requires an Enterprise AI/ML platform. Otherwise, your ML models sit in your local environment, with no value to the Enterprise.
At many Enterprise organizations with low AI maturity, there is an over-reliance on AI researchers, and local AI prototypes are mistaken for complete ML systems. The fancy dashboards, visualizations, predictions, and forecasts impress the executives and payors, and soon everyone wants interactive dashboards for data-driven decision making.
But as soon as they attempt to scale and repeat it for other Lines of Business in the Enterprise, the manual, siloed processes that worked for their local AI application don't work so well in other areas. Because there is no way to easily and seamlessly deploy their local AI solution to the Enterprise, local AI prototyping teams stay in their silos, and the full impact of their solutions is never realized at the Enterprise.
And at these AI-immature Enterprises, the majority of data scientists' ML models are never deployed outside their local environments, and many data scientists leave when the impact of their hard work is never realized at the Enterprise.
This is where a Product Team can address the scaling of local AI applications at the Enterprise, and the crown jewel of such a Product Team would be building an Enterprise AI/ML platform.
Sure, you need a use case that has business impact and executive approval and backing. But how do you actually go about building a complete ML system on which this local AI solution will be deployed in production?
The long route is to wait for the delivery of all of the Enterprise artifacts needed to deploy AI solutions in production. Imagine all of the backlog items needed to shift from the exploration phase to the development and production phases. Waiting for all of these Enterprise artifacts to be delivered before your first ML model is operationalized can take years, and by then, the original business problem may no longer be valid.
So how do you deploy in production in a timely manner, while also helping to build the infrastructure needed for these Enterprise AI solutions? The answer is to form a Product Team at the executive level and take a top-down approach: start with an AI vision and strategy, and set a timeline for delivery. We shift from memos to demos, all with executive support and an impactful business use case.
The Product Team needs to be small to stay agile, but you need the right expertise to make it work for the Enterprise, being careful not to create another siloed AI team in your organization. While it's obvious you need your best data scientist on the team, that person is there only for the project. You also need an Enterprise data scientist who knows all of the Enterprise artifacts needed to make an AI project production-grade, though their toolbox of delivered Enterprise artifacts may be incomplete, since this may be the same person tasked with leading the delivery of those artifacts. After you pair up the project and Enterprise data scientists, you need a data analyst/domain SME, a data engineer, an ML engineer, a product manager, and a project manager to get this Product Team moving.
After you assemble the Product Team, it’s time to look at the critical path considerations that are needed to get from our current state (local AI solution) to our target state (Enterprise AI solution).
The 3 Critical Path items are:
1. Data location: where do we land the data?
2. Data management: do we need a data mesh (decentralized) or a lakehouse (centralized) architecture for our data management strategy?
3. Operational AI/ML capabilities: what is the operational model post product delivery?
As an example, for Critical Path item #1, let's say your local AI solution is stuck on-prem, and you manually load CSV files into your local environment, with no central database to land your data. For the target state, you'd like to move to a cloud environment and possibly utilize a lakehouse or data mesh.
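The shift above is from ad hoc manual loads to a repeatable "landing" step that writes every extract into one central store. Here is a minimal sketch of that idea: an in-memory SQLite database stands in for the central cloud store (a lakehouse table or a mesh data product), and the CSV contents, table name, and schema are all hypothetical.

```python
# Sketch: replacing manual CSV loads with a repeatable landing step.
# sqlite3 is a stand-in for the central cloud store; the claims
# schema and sample data are invented for illustration.
import csv
import io
import sqlite3

SAMPLE_CSV = """claim_id,amount,status
1001,250.00,approved
1002,75.50,denied
"""

def land_csv(conn: sqlite3.Connection, csv_text: str, table: str) -> int:
    """Land one CSV extract into a central table; return rows loaded."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {table} "
        "(claim_id TEXT, amount REAL, status TEXT)"
    )
    conn.executemany(
        f"INSERT INTO {table} VALUES (:claim_id, :amount, :status)", rows
    )
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")  # stand-in for the shared cloud database
loaded = land_csv(conn, SAMPLE_CSV, "claims")
print(loaded)  # prints 2
```

The point is not the tooling: once landing is a function rather than a manual step, the same routine can be scheduled, audited, and pointed at cloud storage instead of a laptop.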
For Critical Path item #2, once you decide on your data architecture, instead of managing your own local solutions, you build a dedicated DataOps team for this cloud solution. Your Product Team can be the start of such a DataOps team for the Enterprise.
For Critical Path item #3, you currently have no operational model, as no ML model is deployed in production at your Enterprise. So your Product Team can focus on building well-defined ML model pipelines, so you can deliver end-to-end AI solutions that are repeatable and scalable at the Enterprise. This is where you start your Enterprise on its MLOps journey, to be implemented in phases as AI matures. Once operationalized, your AI solution will incorporate real-time monitoring and the ability to retrain in production when accuracy degrades over time.
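The "monitor accuracy, retrain on degradation" loop can be sketched in a few lines. This is an assumption-laden illustration, not any specific MLOps product's API: the rolling-window size, the accuracy threshold, and the idea of returning a retrain flag are all design choices made up for this example.

```python
# Sketch of the monitor-then-retrain loop from the MLOps discussion.
# Window size and threshold are illustrative assumptions, not a
# specific platform's defaults.
from collections import deque

class AccuracyMonitor:
    """Track rolling accuracy of labeled outcomes in production."""

    def __init__(self, window: int = 100, threshold: float = 0.80):
        self.outcomes = deque(maxlen=window)  # True/False per prediction
        self.threshold = threshold

    def record(self, prediction, actual) -> bool:
        """Record one labeled outcome; return True if retraining is needed."""
        self.outcomes.append(prediction == actual)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough labeled outcomes yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold

# Demo: 2 correct and 3 wrong predictions in a 5-outcome window.
monitor = AccuracyMonitor(window=5, threshold=0.8)
needs_retrain = False
for pred, actual in [(1, 1), (1, 1), (0, 1), (0, 1), (0, 1)]:
    needs_retrain = monitor.record(pred, actual) or needs_retrain
print(needs_retrain)  # prints True (accuracy 0.4 < 0.8)
```

In a real pipeline, a True result would trigger the retraining stage of the ML pipeline (and an alert) rather than a print; the monitor itself stays this simple.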
Just remember to keep your Product Team small, and don't increase the scope beyond these 3 Critical Path items. In a future article, I will detail what success looks like, share what could fail, and, more importantly, what you can learn from those failures. That's why you keep this Product Team small and agile: ready to fail, learn, and pivot.
If you're looking for support from me, here are a few options:
Enterprise Data Science Consultancy: With my consulting team, comprising a Senior Data Scientist, Senior ML Engineer, Senior Data Engineer, and Senior Cloud Engineer, we will help you architect and build your Enterprise Data Science platform, and transfer knowledge to your IT team to maintain and optimize it. We will also overlay an MLOps framework to manage the AI solutions you build on this platform, and if you don't have an MLOps team, we will help you build one. Please get in touch about this consultancy here
Coaching and Mentorship: I offer coaching and mentorship; book a coaching session here