A Practical Approach to Applying AI in the Enterprise: A Holistic View

Thar Htet
Sep 3


Preface: Why This Article?

The increasing use of machine learning in business motivated me to expand my knowledge and skills so that I can stay relevant and employable. After spending many months learning about how to apply AI in business practically, I want to share some key insights I’ve gained so far from books and articles.

I’ve been reading up on stuff like MLOps, knowledge graphs, and connecting data to applications using machine learning models. The rise of things like ChatGPT has really expanded my thinking on how AI can be used day-to-day. Going through this learning journey over the past 4 months, I’ve connected some dots and put together a framework for how companies can become “AI-driven.”

I organized this article around the key ingredients organizations need to successfully adopt AI, based on the concepts I’ve studied. While I have some experience building apps, my hands-on skills with data and ML are still a work in progress. But I aimed to put together a useful big picture view from my learnings so far. This article shares my perspective on what is needed, though I know I need to keep learning.

From a top management view, I think people, processes and technology, especially people and culture, must be considered together when bringing in enterprise AI. My goal is to give business leaders a starting blueprint for their AI journey, even with my limited data/ML experience. With the right roadmap, companies can start transforming using the power of data and AI.

So, what did I learn from all this?

🤔 💭 💡 I realized successful adoption relies on an interconnected cycle between applications, data infrastructure, and ML models: applications generate data that can train ML models, which in turn power smarter applications.

Holistic View | App-Data-ML Cycle

Here’s a high-level diagram of what I’ve put together so far, leaving out the technical nitty-gritty inside each part:

Application-Data-ML cycle
A holistic view of Application-Data-ML cycle for AI Implementation for Enterprises. — Diagram by Author

While the overall cycle seems straightforward, companies struggle to create real business value from it. Without careful strategy and planning, efforts risk becoming disconnected from core goals.

I looked into key things to consider for aligning this application-data-ML approach to drive real impact. Let me explain the key elements I have noted and thought through:

Understanding the Platforms

The application platform refers to bringing together application software development, deployment, and operations. It lets software engineers build and run applications, and businesses operate production apps for various areas. These include common domains like productivity, accounting, etc., along with specialized domains based on industry, like e-commerce, logistics, healthcare, and more. Application platform agility and innovation depend on DevOps practices like continuous integration/deployment and automated testing/delivery through pre-defined pipelines, all tied to business aims.

The data platform provides data storage, processing, and analytics capabilities for data engineers to manage data pipelines for machine learning. DataOps processes enable reliable data lifecycle management through pipelines for analytics and ML, with proper security and governance. For example, data must be carefully prepared to train models that address business needs.

The ML platform provides tools to build, train and deploy models. This lets data scientists analyze data and explore machine learning approaches. ML engineers create MLOps workflows to automate the end-to-end ML process — from new models to monitoring models in production. For instance, automatically retraining and redeploying when new data comes in.
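To make the "automatically retraining when new data comes in" idea concrete, here is a minimal sketch of the kind of trigger logic an MLOps pipeline might evaluate. The thresholds and function names (`should_retrain`, `RETRAIN_MIN_NEW_RECORDS`) are hypothetical, not any particular MLOps product's API:

```python
# Illustrative MLOps retraining trigger: kick off a retraining job when
# enough new data has arrived, or when production accuracy degrades.
# All names and thresholds here are hypothetical.

RETRAIN_MIN_NEW_RECORDS = 1000   # retrain once this much new data exists
RETRAIN_ACCURACY_FLOOR = 0.85    # or when live accuracy drops below this

def should_retrain(new_record_count: int, production_accuracy: float) -> bool:
    """Decide whether the pipeline should schedule a retraining run."""
    if new_record_count >= RETRAIN_MIN_NEW_RECORDS:
        return True
    if production_accuracy < RETRAIN_ACCURACY_FLOOR:
        return True
    return False
```

In a real pipeline this check would run on a schedule, with the record count and accuracy pulled from the monitoring system rather than passed in by hand.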

These platforms and practices connect applications, data, and models into a pipeline. But it’s easier said than done, and many companies have trouble making this work smoothly. For example, while Netflix gathers viewer data from apps to train personalized recommendation models, many companies face challenges doing something similar.

The Role of Stewards 👩🏼‍✈️

In addition to technology-focused roles like engineers, there are critical people needed to govern the application-data-ML cycle and get business value from it:

These “stewards” are subject matter experts who work together across areas. They can clarify whether a business problem is worth solving with ML:

  • Application stewards oversee systems in their business area. They clarify whether a problem is ML-worthy and validate that the applied ML intelligence solves it and delivers business value.
  • Data stewards govern data practices, validate data quality, and advise on whether the sources, processing, and value of the collected data serve the business.
  • ML stewards drive model accountability and validate that a developed model is free of specific biases so it can create real business value.

For instance, the HR systems steward collaborates with the payroll data steward and HR analytics model steward to improve workforce planning. Picking the right stewards, training them, and forming a governance team is critical.

Alignment to Business Goals Comes First 🥅 🎯

Considering the platforms and people involved, these investments are costly. Without linking efforts to primary business goals, they will have short lifespans.

A clear business vision and well-defined strategic objectives are crucial to guide investments, govern data practices, and evaluate progress in the application-data-ML cycle.

Folks in application, data, and ML domains need to collaborate closely to ensure their efforts build towards overarching business aims. For example, an e-commerce company may have a strategic goal to improve conversion rate — the systems, data, and models should ultimately serve this objective.

Without alignment to strategy, teams risk working in silos and wasting resources on disjointed efforts that fail to create value. Governance and collaboration processes should be implemented to continuously verify alignment.

Getting the Data Right ✔🔍

Another important area is that ML models feed on data. But “garbage in, garbage out” applies to machine learning too. Thoughtful data collection, cleaning, labeling, and organization is required to train accurate and unbiased models. This represents a significant investment that must be strategic and purposeful.

Data collection should align with business goals first. Some companies gather piles of data without careful planning, hoping it may be useful someday. Others don’t know what data to collect at all. Beyond gathering data, it must be monitored for issues like bias, accuracy, and concept drift over time. Meaningless, messy, or biased data wastes resources and leads to misleading models.
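Monitoring for drift can start very simply. The sketch below compares the mean of a numeric feature in newly collected data against its training-time baseline; real pipelines would use proper statistical tests (e.g. a Kolmogorov–Smirnov test), so treat this purely as an illustration of the idea, with a made-up 20% threshold:

```python
# Toy data-drift check: flag a feature whose mean has shifted too far
# from the training baseline. The 0.2 threshold is an assumption.
from statistics import mean

def mean_shift_ratio(baseline: list, current: list) -> float:
    """Relative shift of the current mean versus the training baseline."""
    base = mean(baseline)
    return abs(mean(current) - base) / abs(base)

def has_drifted(baseline: list, current: list, threshold: float = 0.2) -> bool:
    """True when the feature's mean has moved beyond the tolerated shift."""
    return mean_shift_ratio(baseline, current) > threshold
```

Running a check like this per feature on each new data batch gives an early warning before a stale model quietly degrades.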

For example, an HR system training a model on historically biased hiring data perpetuates unfair and problematic practices. The data does not reflect true requirements and may disadvantage certain groups. Or an auto insurer collecting irrelevant data like music preferences rather than useful driver behavior data for risk models.

Data strategy should stem from business needs to avoid amassing meaningless data that serves no purpose.

Training and Deploying Models ⚙️🤖

This is the final step of the application-data-ML cycle. In my opinion, the right ML approach depends a lot on the specific use case and quality of data. Teams should carefully research, validate, and compare approaches, rather than defaulting to trendy options like large language models.

Some common use cases like HR, accounting, productivity, etc. may leverage well-known models. But companies can gain competitive advantage from customized models trained on their own enterprise data for sector-specific needs.

Responsible deployment considers ethics, interpretability, communication to stakeholders, and performance on key metrics. For example, a finance company must ensure rigorous testing and explainability for an ML model making loan decisions, to avoid bias or unethical outcomes.

🤔 💭 With this cycle framework in mind, I considered some practical example application use cases. Applications can take two approaches: quick wins and long-term plays.

Application Approaches

🍌 Quick Win Example: Chatbot for IT Support

Lots of companies use chatbots to handle basic IT support questions. The ticketing app generates data on common issues like password resets or hardware requests. This data trains a natural language model to understand and resolve frequent requests. The chatbot integrates into the ticketing app to resolve straightforward cases, freeing up human agents for complex issues.
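The routing logic of such a chatbot can be sketched in a few lines. A production system would use a natural language model trained on real ticket data; the keyword matching below (and the intent names in it) are invented just to show the flow of resolving simple cases and escalating the rest:

```python
# Toy intent router for an IT-support chatbot. Keyword matching stands in
# for a trained NLP model; intents and keywords are hypothetical examples.
INTENT_KEYWORDS = {
    "password_reset": ["password", "reset", "locked out"],
    "hardware_request": ["laptop", "monitor", "keyboard", "hardware"],
}

def route_ticket(message: str) -> str:
    """Return an intent for simple cases, or escalate to a human agent."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "escalate_to_human"
```

The key design point survives the simplification: the bot only handles requests it recognizes with confidence, and everything else goes to a human.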

🏗️🗼Long-Term Example: Predictive Maintenance for Industrial Equipment

A manufacturer wants to predict equipment failures before they happen to minimize downtime. They use sensors on machines to collect real-time performance data. Historical maintenance data is also gathered. Data engineers clean and process the data to prepare it for ML. Data scientists research techniques to build predictive models. The models deploy into the monitoring app to alert on potential failures so preventive action can be taken.
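The alerting side of such a system might look like the sketch below. A trained model would output a failure probability from many sensor features; here a moving average over a single hypothetical vibration sensor stands in for that score, with a made-up alert threshold:

```python
# Sketch of a predictive-maintenance alert. A real system would score a
# trained model over many sensors; this moving average is illustrative.
from collections import deque

class VibrationMonitor:
    def __init__(self, window: int = 5, threshold: float = 0.8):
        self.readings = deque(maxlen=window)  # keep only the latest window
        self.threshold = threshold            # hypothetical alert level

    def add_reading(self, value: float) -> bool:
        """Record a sensor reading; return True when an alert should fire."""
        self.readings.append(value)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.threshold
```

Smoothing over a window avoids alerting on a single noisy spike, which is exactly the kind of practical detail the monitoring app layer has to handle.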

🌟 The quick win reuses existing data for a common scenario. The long-term example involves more custom data and models for a complex business problem. Both demonstrate the core application-data-ML steps, but long-term applications give the business a sustainable competitive advantage.

Common Applications Across Industries

I also thought of common use cases that could apply across various industries:

Ecommerce Website: Personalized Recommendations

An online retailer wants to boost sales through tailored product recommendations. Their site gathers data on customer browsing and purchase history. This data is processed to understand individual interests. A recommendation model is trained to suggest relevant products to each customer. It integrates into the site to display personalized, real-time recommendations.
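A minimal version of such a recommender can be built from purchase co-occurrence counts: "customers who bought X also bought Y." Real systems use collaborative filtering or learned embeddings; this sketch, with invented product names in the usage, only illustrates the data-to-model-to-app flow:

```python
# Toy "also bought" recommender from purchase co-occurrence counts.
# Real recommenders use collaborative filtering or embeddings.
from collections import Counter
from itertools import permutations

def build_cooccurrence(orders: list) -> dict:
    """Count how often each pair of products appears in the same order."""
    co = {}
    for order in orders:
        for a, b in permutations(set(order), 2):
            co.setdefault(a, Counter())[b] += 1
    return co

def recommend(co: dict, product: str, k: int = 3) -> list:
    """Top-k products most often bought alongside the given product."""
    return [p for p, _ in co.get(product, Counter()).most_common(k)]
```

The "model" here is just a lookup table rebuilt from order data on a schedule, which is often a perfectly good first iteration before investing in anything fancier.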

Mobile App for Distribution Agents: Dynamic Route Optimization

A distribution firm aims to improve delivery efficiency and increase sales. Their mobile app tracks driver locations and transit times. Historical delivery data is analyzed to build an ML model that dynamically optimizes routes based on traffic, weather, and other factors. The route optimization model integrates into the driver app to provide turn-by-turn guidance for the most efficient route daily. Agents are recommended additional sales items based on historical records and existing trends upon arriving at distribution points.
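At its core, route optimization is an ordering problem over delivery stops. The classic nearest-neighbour heuristic below shows the core idea on 2D coordinates; production systems would layer in live traffic, weather, and time windows on real road networks:

```python
# Nearest-neighbour heuristic for ordering delivery stops: always drive
# to the closest unvisited stop. A deliberately simple routing sketch.
import math

def nearest_neighbor_route(depot: tuple, stops: list) -> list:
    """Greedy tour over (x, y) coordinates starting from the depot."""
    route, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route
```

Nearest-neighbour is not optimal, but it is a common baseline that more sophisticated solvers (and learned models) are benchmarked against.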

Logistics Tracking: Estimated Time of Arrival Prediction

A logistics company wants to provide accurate delivery ETAs. Their system collects data on shipment locations and conditions from IoT sensors. Historical data trains a model to predict ETAs based on origin, destination, weather, traffic and other variables. The ETA model then integrates into the tracking system to proactively notify customers of potential delays.
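A sensible baseline for ETA prediction is simply the historical average transit time per (origin, destination) lane, with a fallback for unseen lanes. The lane codes and fallback value below are made up for illustration; a production model would add weather, traffic, and shipment features on top of a baseline like this:

```python
# Baseline ETA model: average historical transit hours per lane.
# Lane codes and the 72-hour fallback are hypothetical examples.
from collections import defaultdict
from statistics import mean

def fit_lane_averages(history: list) -> dict:
    """history: iterable of (origin, destination, transit_hours) records."""
    lanes = defaultdict(list)
    for origin, dest, hours in history:
        lanes[(origin, dest)].append(hours)
    return {lane: mean(times) for lane, times in lanes.items()}

def predict_eta(model: dict, origin: str, dest: str, fallback: float = 72.0) -> float:
    """Predicted transit hours, falling back to a default for unseen lanes."""
    return model.get((origin, dest), fallback)
```

Beating this per-lane average is also a useful acceptance test for any fancier ML model the team later trains.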

A Visionary Example: The AI Executive Assistant

I imagined a radical, sci-fi level use case 😅— an AI assistant providing executives with data-driven decisions conversationally:

Jill is CEO of a Solar Appliances company. She relies on her AI assistant “Ava” for insights through natural conversation.

Jill asks: “How are we performing across regions?”

Ava taps into the company’s apps and data from:
— The ecommerce website data shows sales by region and customer segments.
— The distribution app provides logistics KPIs by geography.
— The support ticketing app gives customer sentiment insights.
Ava analyzes this data using its advanced LLM capabilities and summarizes:

“The West leads this quarter with 5% sales growth from solar customers. The Northeast is declining due to supply chain appliance issues. I recommend shifting marketing budget to capitalize on solar momentum out West and mitigate appliance declines in the Northeast. The South is steady. Logistics costs per package are improving everywhere as our route optimization AI scales. Support tickets are down 19% since expanding chatbot coverage, so customers are happier.”

Jill asks: “Can you show me sales data charts by product and region?”

Ava responds: “Certainly,” then accesses the ecommerce data API to pull the latest sales data segmented by region and product. It generates interactive charts illustrating the trends and shares them with Jill.

Jill reviews the charts and requests: “This is helpful. Can you get me the raw Northeast sales order data from last quarter?”

Ava replies: “No problem, retrieving the data now.”
Ava uses its sales database integration to query and extract the specified raw sales records directly. It compiles the data into a spreadsheet for Jill’s manual review.

This provides Jill a comprehensive overview for decision making. Ava delivered this insight by linking data across applications, analyzing it with AI/ML, and summarizing it conversationally — demonstrating the potential of various common and specialized ML models and Natural Language processing for data-driven executives.

🤔 💭 I have explored the important components of the full application-data-ML cycle, including key considerations for each part and example use cases ranging from quick wins to visionary concepts.

Next, I looked into how to actually begin implementing it to establish a successful AI-driven enterprise. I have broken the implementation down into simple phases that executives and leadership should find logical and relevant.

Phased Approach for Implementation

Implementing an enterprise AI strategy requires careful planning and execution across multiple stages. Based on my learnings, here are the key phases I would do if I were to work on such a project.

1. Define the Vision and Goals

Conduct collaborative workshops with executives and stakeholders to align on business objectives and desired outcomes. This provides a guiding north star for decisions and measuring success. Define tangible goals that connect to business KPIs and value drivers, such as “reduce customer churn by 10%” or “decrease operational costs by optimizing logistics.” The goals should be specific, measurable, achievable, relevant and time-bound.

2. Establish Governance

Put in place formal data policies, designate data stewards, and define processes for metadata management, data quality, model risk reviews, and ethical AI practices. Strong governance ensures accountability and builds trust in AI initiatives. For example, it may require reviewing models for bias before deployment and auditing data lineage. Document policies and procedures clearly.

3. Build the Foundations

Implement the core platforms, infrastructure, and workflows needed to operationalize the AI cycle. This includes standardizing application platforms with DevOps practices, setting up scalable data pipelines and storage, configuring ML development environments, automating with MLOps, and other enabling components. The foundations should support agile experimentation as models need to adapt to changing real-world data.

4. Pursue Quick Win Use Cases

Conduct working sessions to identify high-impact pilot opportunities that can demonstrate early value and credibility for AI initiatives. Quick wins showcase the art of the possible and help gain buy-in for further investment. These might be an AI virtual assistant or predictive maintenance model. Start simple before pursuing complex use cases.

5. Operationalize and Scale

Take models successfully piloted into scalable production while expanding data sources and availability. This enables democratization through self-service access to data and models. Operationalization requires monitoring model performance, explaining model behaviors, and integrating models into business processes.

6. Enable Continuous Improvement

Leverage monitoring and feedback loops to continuously refine technical solutions, processes, and practices. This sustains outcomes over time through ongoing enhancement informed by usage data and metrics. Improvement might involve retraining models on new data or evolving data collection strategies based on model needs. Continually optimize.

Additional Considerable Factors

In addition to the core implementation phases, I also explored some supplementary areas that require consideration based on my learnings. As I read about enterprise AI strategies, I found several other key factors that can make or break the success of an AI transformation. I noted this list of extra factors:

Enable Iteration and Feedback Loops

The application-data-ML cycle should not be viewed as one-directional. Build in continuous feedback loops where learnings from models inform improvements to data collection and applications. Changes to applications and data also shape the next iterations of models. Continual learning between components is critical.

Implement Robust Infrastructure and Monitoring

The cycle relies on scalable cloud infrastructure, containers, orchestration, and end-to-end monitoring across all platforms. This enables automation and observability of the entire pipeline. Monitor and measure to uncover optimization opportunities.

Use Hybrid Approaches

Most organizations will leverage a combination of pre-built and custom AI models based on their specific use case needs and data. Blend reusable public models with proprietary models tailored to the situation.

Foster a Supportive Culture and Manage Change

Cross-functional collaboration, agile culture, and change management are key for organizations adopting this cycle. This impacts processes and people across teams. Invest in training and align incentives to drive adoption.

Maintain Business Alignment

Continuously evaluate impact on business KPIs, not just model accuracy metrics. Models need to demonstrably improve desired business outcomes over time. Tie models directly to business value.

Prioritize Cybersecurity

With increased dependence on data and models, address cybersecurity threats like data poisoning, model theft, and adversarial attacks. Audit and secure each component, from data to models.

🤔 💭 After looking into the logical implementation phases, approaches, and other important factors, I turned to the people side of the cycle. Let’s look at the people who participate in and drive this application-data-ML cycle, and at their skills, careers, and growth opportunities.

Critical Roles, Required Skills, and Opportunities

These are the most essential technical roles and their functions to enable the application-data-ML cycle in my view:

Application Engineers collaborate with data scientists to integrate models into apps via APIs and microservices. They build apps that collect quality data for models and adopt MLOps for continuous deployment. Application engineers architect scalable infrastructure to power AI apps. Their skills in intelligent applications present career growth opportunities as AI adoption increases.

Data Engineers design data pipelines, warehouses, lakes and governance. They implement data ops for reliable management and transform raw data into formats usable for model training. Data engineers provide accessible APIs and self-service data access. Their data skills are becoming more crucial and valued as organizations amass more data.

Data Scientists analyze data, research modeling approaches, and train AI/ML models. They work with business leaders and application owners to identify opportunities to apply data science. Data scientists optimize models for accuracy and interpretability. Their specialized modeling knowledge presents many career advancement possibilities as AI adoption accelerates.

ML Engineers develop custom models where pre-built models fall short. They build MLOps pipelines for continuous model retraining, testing and deployment. ML engineers monitor models in production, enabling rapid iteration. Their expertise in operationalizing models is in high demand as more businesses seek to deploy AI.

Managers oversee technology teams, fostering collaboration to align efforts. They guide governance and change management. Managers evaluate progress on business metrics and continuously improve. Their leadership enables the application-data-ML cycle. Managers develop broad experience across AI disciplines, preparing them for executive roles.

Stewards are also important, as I mentioned above.

🤔 💭 After seeing the big picture, I thought about evaluating myself for learning, up-skilling, and staying relevant, given the current pace of advancement.

Considering How I Fit In

Starting off with some self-reflection, I have to admit I’m not an expert in data engineering or machine learning. But given my background in building smart apps and leading projects, I’m optimistic about diving into newer models like GPT, PaLM, LLaMA, and Claude.

I know I’ve got a lot to learn, especially with the technical details of data pipelines and model development. But my quick learning ability and the core skills I’ve honed can undoubtedly make that curve less steep.

Lastly, I take pride in being able to simplify complex ideas into actionable plans. As I continue to grow, my technical and strategic blend will guide me not only to understand but to apply what’s both technically possible and commercially sensible effectively.

And with a tool like ChatGPT, I might even have time to polish my joke delivery. Because, you know, generating code is good, but generating a smile? Priceless! Haha! 😆 I hope you can use this framework and similar line of thought for yourself too. I hope it is helpful.



Thar Htet

A Software Engineer turned Entrepreneur, running a Software Company in Myanmar serving Web, Mobile and Cloud solutions to Consumer, Businesses & Public.