What is MLOps?

The term “MLOps” has become increasingly popular with the rise of artificial intelligence (AI), but many business leaders across industries have yet to learn what it is. Born at the intersection of DevOps, data engineering, and machine learning (ML), MLOps is a set of practices for deploying and maintaining machine learning models in production reliably and efficiently.

MLOps increases quality, simplifies the management process, and automates the deployment of machine learning and deep learning models in large-scale production environments. It applies across the entire machine learning lifecycle, from data gathering and model creation to deployment and governance.

MLOps has been proven to deliver several benefits, which is why many organizations are now adopting it.

MLOps is key to the “modern AI stack,” which we will explore further in follow-up articles.


MLOps vs DevOps

If you are unfamiliar with MLOps, you might be familiar with DevOps, which is a bit more common in today’s technical vocabulary.

To distinguish these two processes from one another, let’s first define each:

  • DevOps is “the combination of cultural philosophies, practices, and tools that increase an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.” [1]

  • MLOps is “a set of practices for collaboration and communication between data scientists and operations professionals. Applying these practices increases the quality, simplifies the management process, and automates the deployment of Machine Learning and Deep Learning models in large-scale production environments. It’s easier to align models with business needs, as well as regulatory requirements.” [2]

Both of these processes aim to take a piece of software and place it in a repeatable and fault-tolerant workflow. The major difference is that in MLOps, the software also has a component of machine learning. In other words, DevOps aims to shorten a system’s development lifecycle and provide high software quality, and MLOps aims to automate and improve machine learning applications and workflows.

Benefits of MLOps

It is incredibly challenging to manage models in production. For an organization to get the most value from machine learning, its models must keep improving business applications while running in production. Through various MLOps practices, businesses can efficiently deploy, manage, monitor, and govern machine learning.

Organizations can leverage MLOps in many different ways:

  • Trust: MLOps helps build trust in managing machine learning by creating a repeatable process through automation, testing, and validation. It enhances the reliability and productivity of machine learning development.

  • Scaling: MLOps is key to scaling an organization’s machine learning applications. Netflix, for example, developed its own end-to-end workflow management tool, Metaflow, in-house to scale machine learning across the company.

  • Data Usage: AI-driven businesses are built upon big data, and MLOps can change how that data is managed. The process improves products with each iteration, shortening production lifecycles and yielding deeper insights.

  • Integration: MLOps practices aim to integrate and enhance the development cycle and the operations process by combining the expertise of both data science and operations teams.

  • Reducing Risk and Bias: Unreliable and inaccurate models can cause a loss of consumer trust. Models often make poor predictions because of the gap between their training data and complex real-world data, turning them into liabilities. MLOps helps reduce this risk and prevent biases from entering development.

  • Testing Phase Automation: MLOps can automate testing phases in the machine learning lifecycle, such as prediction validation, data quality monitoring, and integration testing. An example is Nike, which automated its A/B testing and serving pipeline to manage integrated model executions.


Best Practices for MLOps

If your organization is considering MLOps, several best practices are worth keeping in mind. They can be broken down into four main aspects: collaboration, continuity, reproducibility, and testing and monitoring.

One of the critical success factors for an MLOps system is a collaborative hybrid team that possesses a wide range of skills. These teams often include an MLOps engineer, a data scientist or ML engineer, a data engineer, and a DevOps engineer. Without a hybrid team, it becomes tough to accomplish all of the necessary MLOps goals; they cannot be completed by a single data scientist or data engineer alone.

The second aspect of a successful MLOps system is continuous machine learning pipelines. Often referred to as MLOps pipelines, data pipelines are sequences of actions that a system applies to data between a source and a destination. These pipelines are usually represented as graphs, and there are various specialized tools for creating, managing, and running them.

ML models can be notoriously difficult to run and manage reliably, largely because of the constant demand for data transformation. Well-established data pipelines bring better machine learning operations management, runtime visibility, scalability, and more. And since ML is itself a form of data transformation, data pipelines become ML pipelines by including ML-specific steps.
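
To make this concrete, here is a minimal sketch of such a pipeline in Python with scikit-learn (the synthetic data, preprocessing steps, and model choice are illustrative assumptions, not a prescription). Chaining the steps into one object means the same transformations run identically in development and in production:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; a real pipeline would read from the source system.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
y_train = rng.integers(0, 2, size=100)

# Each step is a node in the pipeline graph: data flows from the raw
# features (source) through transformations to a fitted model (destination).
pipeline = Pipeline(steps=[
    ("impute", SimpleImputer(strategy="median")),  # fill in missing values
    ("scale", StandardScaler()),                   # normalize feature ranges
    ("model", LogisticRegression()),               # the ML-specific step
])

# Fitting runs every transformation in order; reusing the same object at
# serving time keeps training and production transformations identical.
pipeline.fit(X_train, y_train)
print(pipeline.predict(X_train[:5]))
```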

MLOps also adopts one of the core concepts of DevOps: Continuous Integration (CI) and Continuous Delivery (CD). As in DevOps, CI/CD enables changes to ship more frequently by automating the development stages, although those stages differ in ML from traditional software development.

  • Continuous Integration (CI): In machine learning, CI means that every time code or data is updated, the ML pipeline reruns. Each rerun can involve training, testing, or generating new reports, making it easier to compare against other versions in production. Everything is versioned and reproducible, so the codebase can be shared across projects and teams.

  • Continuous Delivery (CD): In machine learning, CD is the practice of deploying every build to a production-like environment and performing automated integration and testing of the application before it is deployed.

Efficient CI/CD pipelines in MLOps enable developers to implement code changes rapidly and to automatically build, test, and deploy new software iterations to production. This iterative approach runs throughout the entire lifecycle of ML and AI projects. CI/CD pipelines also automate the software delivery process, which helps eliminate the human errors that arise from repetitive manual testing and deployment.
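
As an illustrative sketch of the CI side, the snippet below (in Python, with hypothetical report paths, metric name, and threshold) shows the kind of quality gate a pipeline rerun might end with: the candidate model’s metric is compared against the production baseline, and the build fails if quality regresses.

```python
import json
import sys

# Hypothetical artifacts produced earlier in the CI run; the paths,
# metric name, and tolerance are illustrative, not a standard convention.
BASELINE_PATH = "reports/production_metrics.json"
CANDIDATE_PATH = "reports/candidate_metrics.json"
MAX_REGRESSION = 0.01  # tolerate at most a one-point drop in accuracy

def load_accuracy(path: str) -> float:
    with open(path) as f:
        return json.load(f)["accuracy"]

baseline = load_accuracy(BASELINE_PATH)
candidate = load_accuracy(CANDIDATE_PATH)

# A non-zero exit code fails the CI job, so a regressed model
# never reaches the delivery (CD) stage.
if candidate < baseline - MAX_REGRESSION:
    print(f"FAIL: candidate accuracy {candidate:.3f} vs baseline {baseline:.3f}")
    sys.exit(1)
print(f"PASS: candidate accuracy {candidate:.3f} vs baseline {baseline:.3f}")
```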

The third critical best practice for MLOps is reproducibility, which can be achieved through consistent version tracking. While traditional software can define all of its behavior by versioning code, ML also requires tracking model versions, training data, and meta-information such as training hyperparameters.

ML models are far from a “one-size-fits-all” system for businesses, which is why they require audit trails covering each previous model’s dataset, code version, framework, packages, parameters, and libraries. These attributes help ensure reproducibility and support the concept of “data as code”: MLOps treats data and models as versioned, reproducible, and portable artifacts.
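
One lightweight way to keep such an audit trail, sketched below in Python (the file paths and manifest fields are hypothetical), is to write a small manifest for every training run that records the code version, a fingerprint of the training data, and the hyperparameters used.

```python
import hashlib
import json
import subprocess
import sys
from datetime import datetime, timezone

def data_fingerprint(path: str) -> str:
    """Hash the training data file so any change to it is detectable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Illustrative values; in a real system these come from the run itself.
manifest = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "code_version": subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True).strip(),
    "data_sha256": data_fingerprint("data/train.csv"),  # hypothetical path
    "python_version": sys.version,
    "hyperparameters": {"learning_rate": 0.01, "epochs": 20},
}

# Stored next to the model artifact, the manifest lets anyone reproduce
# the run exactly: same code, same data, same parameters.
with open("model_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```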

The last critical practice of MLOps is testing and monitoring, which usually involves integration and unit testing. New versions of a model must pass these tests before being deployed, and because the tests are both automated and comprehensive, they accelerate production deployments.

ML models are challenging to test because no model produces results that are 100% accurate 100% of the time. Teams must therefore rely on statistical model validation, selecting acceptable thresholds and the right metrics to track.

A robust data pipeline requires validation of its input data; common checks cover column types, file formats, invalid values, and empty values. It is also not enough to track a single metric across the whole validation set. Instead, model validation should be carried out separately for each relevant data segment, or the model can fall victim to fairness and bias issues.
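
The sketch below illustrates both ideas in Python with pandas (the column names, value ranges, and segments are hypothetical): input data is validated before entering the pipeline, and accuracy is then computed per segment rather than only in aggregate.

```python
import pandas as pd

# Stand-in for a real input batch arriving at the pipeline.
df = pd.DataFrame({
    "age": [34, 51, 29, 44],
    "country": ["US", "DE", "US", "DE"],
    "label": [1, 0, 1, 1],
    "prediction": [1, 0, 0, 1],
})

# Input validation: expected columns present, no empty values, valid ranges.
expected = {"age", "country", "label", "prediction"}
assert expected.issubset(df.columns), "missing columns"
assert not df[list(expected)].isnull().any().any(), "empty values found"
assert df["age"].between(0, 120).all(), "invalid age values"

# Per-segment validation: accuracy is computed for each country separately,
# so a model that fails on one segment cannot hide behind a good aggregate.
for country, group in df.groupby("country"):
    accuracy = (group["label"] == group["prediction"]).mean()
    print(f"{country}: accuracy {accuracy:.2f}")
```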

The performance of ML systems depends on relatively controllable factors, like software and infrastructure, as well as less controllable ones, such as data. Because of this, both model prediction performance and standard metrics should be monitored. The efficient performance of ML systems also relies heavily on monitoring the production systems themselves.

Monitoring such systems becomes more complicated as new data arrives, which is why statistical comparisons are often used for assessment. It is also crucial to monitor individual metrics, not just the system as a whole. For example, the percentage of positive classifications over a set period of time can be tracked, with the system alerting on any significant deviation.
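
A minimal Python sketch of that example (the baseline rate and tolerance are illustrative assumptions) might look like this: track the share of positive classifications in each time window and raise an alert when it drifts too far from the historical baseline.

```python
import numpy as np

BASELINE_POSITIVE_RATE = 0.12  # historical share of positive predictions
TOLERANCE = 0.04               # allowed absolute deviation before alerting

def check_positive_rate(predictions: np.ndarray) -> None:
    """Alert if this window's positive-classification rate deviates
    significantly from the historical baseline."""
    rate = predictions.mean()
    if abs(rate - BASELINE_POSITIVE_RATE) > TOLERANCE:
        print(f"ALERT: positive rate {rate:.3f} deviates from "
              f"baseline {BASELINE_POSITIVE_RATE:.3f}")
    else:
        print(f"OK: positive rate {rate:.3f}")

# One monitoring window of binary predictions (simulated here).
window = np.random.default_rng(1).binomial(1, 0.25, size=1000)
check_positive_rate(window)
```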

The Case for MLOps

MLOps is crucial for any company looking to transform into an AI-driven organization, especially for complex enterprise systems. It enables companies to quickly deploy, monitor, and update models in production, ensuring AI generates value and doesn’t become a useless and costly experiment.

Today’s business needs require scalable, reliable, and efficient software. And the same holds for machine learning models that drive business decisions and generate value. MLOps improves the business model in many ways, such as accelerating time-to-value, optimizing team productivity through integrated workflows and role specialization, improving infrastructure management, and protecting business assets and continuity.

According to the Deloitte report entitled “MLOps: Industrialized AI,” MLOps represents the transition from the “era of artisanal AI” to the “application of engineering disciplines to automate ML model development, maintenance and delivery,” which is exactly what’s required to flourish in this highly competitive AI-driven environment.

Machine learning models help organizations discover new patterns, reveal anomalies, make accurate predictions and decisions, and generate deep insights. But despite the growing adoption of machine learning, many organizations are bogged down by clunky development and deployment processes that do little to foster collaboration between product teams, data scientists, and operational staff, which results in a significant portion of AI projects failing.

To raise their chances of success, businesses must integrate AI and machine learning into every process and system, and for that to happen, they must be able to deploy them consistently and at scale. Of course, an increase in scale means an increase in complexity, in both depth and breadth (scaling is usually non-linear, so the combinatorial effects of scale can easily get out of hand if not properly managed). This broad transformation is exactly what MLOps can bring about. MLOps optimizes development, deployment, and management. It drives business value by expediting the experimentation process and development pipeline, improving model production quality, and making it easier to maintain production models and manage regulatory requirements.

When a business possesses effective MLOps tools and practices, AI teams can better address challenges, and data scientists can focus on experimenting and innovating new technologies that expand on core techniques, enabling organizations to scale ML and become more operationally resilient in a time of great technological change.

I hope you enjoyed this piece on MLOps. Make sure to look out for the next installment of this series where I’ll demystify the “modern AI stack.”

Sources

[1] “What is DevOps?”

[2] “MLOps: What It Is, Why It Matters, and How to Implement It”


Giancarlo Mori