
JFrog MLOps Solution: Overcoming Common Challenges in ML Model Management

Kavita Viswanath, GM & VP, APAC, JFrog. Bengaluru. August 2023. Photograph by Nishant Ratnakar

CXOToday has engaged in an exclusive interview with Kavita Viswanath, GM & VP, APAC, JFrog.

 

1. The press release mentions that JFrog MLOps aims to streamline the end-to-end ML lifecycle. Can you elaborate on how this solution achieves this goal and its advantages to organizations looking to enhance their machine learning workflows?

More organizations are building and leveraging ML/AI models for use in software applications; however, ML model development typically happens outside an organization’s existing software supply chain. Further, most of these organizations leverage existing models because they often lack the resources to build high-quality, sophisticated models (such as Large Language Models) from scratch, so they turn to model hubs to find the right open source model for their needs.

The use of open source models poses the same challenges as OSS packages: security, availability, versioning, and so on. The threat landscape for ML models is new and evolving, posing unique challenges for organizations.

Organizations already rely on JFrog to store, manage, and secure the proprietary and third-party artifacts used in software development. By extending native support to ML models, we empower organizations to create a single system of record for ML models, bringing ML/AI development in line with their existing secure software supply chain using a tool they already have in their tech stack.

 

2. Could you provide an in-depth overview of the JFrog MLOps solution and its specific strategies for addressing the common challenges that organizations encounter in managing and optimizing machine learning workflows?

Some of the most common challenges that DevOps teams face when moving ML models to production include cost, a lack of automation, a lack of proper expertise, and an inability to scale. Further, government regulations already require vendors to list exactly what lives inside a piece of software’s code, and it’s only a matter of time before those guidelines extend to AI and ML components. ML Model Management gives teams an easy way to store and manage models, together with all their associated files, alongside their other software components, so they can easily overcome these challenges and meet management, security, and provenance requirements. The new ML Model Management capabilities provide visibility, governance, and security to ML engineers and data scientists.

Now, organizations can:

  • Proxy the popular public ML repository Hugging Face to cache the open-source AI models companies rely on, protecting them from deletion or modification.
  • Scan ML model licenses to ensure compliance with company policies.
  • Detect and block the use of malicious ML models.
  • Store home-grown or internally augmented ML models with robust access controls and versioning history for greater transparency.
  • Bundle and distribute ML models as part of any immutable software release.

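As a rough sketch of what the Hugging Face proxying above looks like from the client side, the standard `huggingface_hub` library honors the `HF_ENDPOINT` environment variable, so pointing it at an Artifactory remote repository reroutes downloads through the caching proxy. The hostname and repository key below are hypothetical placeholders, not part of the announcement:

```python
import os

# Hypothetical Artifactory host and Hugging Face remote repository key;
# substitute your own organization's values.
os.environ["HF_ENDPOINT"] = (
    "https://mycompany.jfrog.io/artifactory/api/huggingfaceml/hf-remote"
)

# With HF_ENDPOINT set, unchanged client calls such as
#   from huggingface_hub import hf_hub_download
#   hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
# resolve through the caching proxy instead of huggingface.co, so the
# organization's curation and security policies apply to every download.
```

Because only an environment variable changes, existing data science workflows keep working as-is.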
 

3. Given the critical importance of security and compliance in MLOps, can you shed light on how JFrog MLOps strengthens security measures and ensures compliance throughout the machine learning lifecycle?

JFrog is well known for helping teams store, manage, and secure all of their software artifacts and binaries in one place, creating a single source of truth, and we now are extending that to ML models – another type of binary. We’ve leveraged our binary expertise and applied it to ML model management. Just as JFrog provides a high level of security, compliance, and integrity when it comes to software artifacts, that’s now also the case with ML models — controls such as RBAC, versioning, license, and security scanning enable teams to work with ML models confidently because they know they’re both secure and compliant.

 

4. The JFrog Platform Updates introduce new security, DevOps, and MLOps features. Can you delve into the details of these enhancements and clarify how they contribute to the overall improvement of the software supply chain?

JFrog announced a number of capabilities which all contribute to improving the integrity of the software supply chain.

JFrog Curation shifts security “left of left” by blocking malicious and vulnerable packages from ever entering the organization, based on automated policies. It makes the process of bringing new OSS packages and libraries into an organization much more efficient and secure. Also announced as part of JFrog Curation is JFrog Catalog, which allows developers to explore OSS packages and discover their versions, vulnerabilities, license data, operational risk, and dependencies, all from within the JFrog Platform using the web UI or a powerful GraphQL API.

The other big security announcement, beyond JFrog Curation and the scanning of AI/ML models, was the addition of Static Application Security Testing (SAST). SAST is now included as part of JFrog’s Advanced Security offering. JFrog’s SAST is accurate, fast (2,000 lines of code scanned per second), and runs locally, eliminating the need for code to leave the developer environment.

JFrog’s Release Lifecycle Management (RLM) also debuted at swampUP. RLM enhances JFrog’s role as the single source of truth for managing binary lifecycles by standardizing and centralizing the release process and giving organizations full control of it. Now organizations can create an immutable representation of a release early in the SDLC and manage it as a single entity as it matures towards release, tracking and capturing evidence of everything that happens along the way. This level of control and governance is particularly important for regulated industries.

Taken together, all of these new capabilities help organizations continuously ensure that the components used in their releases are secure, of high quality, and unchanged as they move towards release.

 

5. As technology evolves rapidly, how does JFrog ensure that its solutions remain current and aligned with the latest industry trends and best practices?

A core element of the JFrog solution is its universality. The platform’s extensible, modular foundation provides the agility to meet the needs of today and tomorrow. This is evidenced by the many firsts we’ve achieved – the first Docker Registry, the first to offer multi-cloud and hybrid support, the first Swift registry, and now the first universal binary repository to support ML models. We continually evaluate the marketplace to identify and understand the trends impacting software development in order to provide the right solutions to our customers.

Further, we are investing heavily in research and in developing comprehensive, DevOps-centric security solutions to address present and future threats. We automate DevSecOps processes uniquely at the binary level; affirmed by our customers, this is the most effective approach to safeguarding their software supply chain. JFrog provides security at the binary level at every stage of the software development lifecycle to ensure applications are traceable, reliable, compliant, and secure. We continue to expand the breadth and depth of the software management, governance, and security we provide to companies, automatically and seamlessly stopping unwanted or insecure libraries from ever entering the organization.

The industry is in a constant race against attackers, and JFrog consistently releases new capabilities that detect and address the threats posed by modern attack methods. By providing a comprehensive platform that is developer-friendly and enterprise-ready – with security and governance baked in at every phase – we can better arm companies to innovate faster, with the peace of mind of knowing their software is safe for use both today and tomorrow.

 

6. For organizations interested in adopting JFrog MLOps and leveraging the new capabilities in the JFrog Platform Updates, what is the recommended approach to getting started, and what can they expect during the implementation process?

Existing SaaS customers can start using the ML Model Management capabilities right away. Setting up ML model repositories and turning on security policies is very easy and follows the same repository creation experience they’re already used to. And configuring your MLOps workflows to work with JFrog takes just a couple of lines of code, provided by JFrog via the “Set Me Up” feature.

JFrog customers self-hosting their solution will see these capabilities become available shortly. And non-JFrog customers can see the new ML Model Management functionality in action via our trial experience, or by booking some time with our Solution Engineers.

 

7. Could you provide insights into JFrog’s plans and developments in MLOps and software supply chain security?

Expanding our security offering has obviously been a large focus for us recently. Now that we’ve comprehensively covered the “shift left” motion we’ll begin to look for additional “shift right” opportunities as well, giving our customers a complete end-to-end application security solution.

We’ll also continue to evaluate other areas in the MLOps workflow that make sense for JFrog to support for our customers.

And, like we’ve always done, we’ll continue to expand the number of technologies we support from a management and security perspective.
