**The ClearML Serving Engine**
When using the KFServing engine instead of the Triton bare-metal engine, another layer is needed, but this is still built into the ClearML Serving offering. Great, so we've talked a little about the data science persona, and we've transitioned into DevOps.
**The Collaboration Model: A Central Hub for Data Science**
So, let's dive into the collaboration model. The idea is that ClearML is the hub of everything; you can think of it as what software version control is for CI/CD in traditional software development. It becomes the hub of everything in data science because, as opposed to traditional software development, where the version control actually stores everything we need, in data science we need to connect not only the code but also the data and the parameters of each model training, inference, and so on. That is exactly where this kind of experiment management solution, acting as a hub, comes into play.
**Everything is Visible to Everyone**
Everything is visible to everyone. The developers have an overview of what is being served and used in production. The ML engineers can constantly monitor what is going on in current development and feed that into the next generation or next versions of the pipelines. And the DevOps team has full visibility into the resources being used, both in development, which nowadays is quite resource-heavy (training models, testing them, and so on), and in production, in terms of the resources and utilization consumed by model inference and usage itself.
**Visibility and Control**
With all the different metrics, use cases, and usage data available to everyone, it is possible to create even better visibility for your own organization. This is achieved through a full REST API and a Pythonic API, so you can build your own applications on top of the system, especially when it comes to creating your own dashboards and integrating with it for automation.
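As an illustration of the Pythonic route, a minimal dashboard-feeding sketch might look like the following; it assumes the `clearml` package is installed and configured against your server, and the project name and filter values are placeholders:

```python
from clearml import Task

# Pull recent experiments from a project to feed a custom dashboard.
# "examples" and the status filter are placeholder values.
tasks = Task.get_tasks(
    project_name="examples",
    task_filter={"status": ["in_progress", "completed"]},
)

for t in tasks:
    # Last reported scalar metrics per title/series (e.g. loss, accuracy).
    print(t.name, t.get_status(), t.get_last_scalar_metrics())
```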
**REST API: A Key Integration Point**
The REST API makes this very easy: you take a specific job, clone it, send it for execution, and the entire pipeline is triggered. You can always monitor it as well. But if we think about more of a development integration scenario, then currently it is mostly Python.
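For instance, a clone-and-enqueue call over the REST API might look roughly like the sketch below. The endpoint names (`tasks.clone`, `tasks.enqueue`) come from the ClearML API reference, but the server URL, token handling, task ID, and queue ID are placeholders, so treat the exact request fields as assumptions to verify against your server's API version:

```python
import requests

API = "https://api.clear.ml"                   # placeholder: your ClearML API server
HEADERS = {"Authorization": "Bearer <token>"}  # assumes a token obtained from your credentials

# Clone an existing "template" task -- the task ID is a placeholder.
clone = requests.post(
    f"{API}/tasks.clone",
    json={"task": "<template_task_id>", "new_task_name": "nightly run"},
    headers=HEADERS,
)
clone.raise_for_status()
new_task_id = clone.json()["data"]["id"]       # new task ID returned by the server

# Enqueue the clone for execution; an agent listening on that queue picks it up.
requests.post(
    f"{API}/tasks.enqueue",
    json={"task": new_task_id, "queue": "<queue_id>"},  # placeholder queue ID
    headers=HEADERS,
).raise_for_status()
```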
**Python as a Base-Level Requirement**
Yes, I guess Python kind of rules this entire field. Still, there is a full REST API you can connect to, so at the application level you can definitely build your own applications on top of it, especially for creating your own dashboards and integrating with the system for automation.
**Integrations: Beyond Python**
For example, for things like triggering a pipeline from your own web application, the REST API makes it very easy. But if we think about more of a development integration scenario, then currently it is mostly Python.
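On the Python side, the same clone-and-run flow is a couple of SDK calls. A minimal sketch, assuming the `clearml` package is installed and configured, with placeholder task and queue names:

```python
from clearml import Task

# Grab the "template" experiment to reuse -- the ID is a placeholder.
template = Task.get_task(task_id="<template_task_id>")

# Clone it (code, parameters, and environment are copied, not results).
cloned = Task.clone(source_task=template, name="triggered from my app")

# Push the clone onto an execution queue; a clearml-agent will run it.
Task.enqueue(cloned, queue_name="default")
```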
**Customer Logos and Use Cases**
We've got some customer logos here. Are there any particular use cases or verticals where you are either seeing particular traction or a particularly strong fit? I think the first adoption came from customers with needs in deep learning, mostly deep learning, because of the complexity of spinning up GPU instances, managing them, and introducing orchestration, scheduling, and priority on top of them.
**Broadening into Traditional Machine Learning**
But these days we see this broadening into more traditional machine learning scenarios. Even in classic machine learning you want the flexibility of running some workloads on GPU and some on CPU. Things are getting more complicated; they overflow the scope of traditional Jupyter notebooks, and you have to have more automation and pipelines. We see more of those users coming to our platform looking for solutions that can really encapsulate the end-to-end workflow, starting from storing the data itself.
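As a rough illustration of that CPU/GPU flexibility, a pipeline can route each step to a different execution queue. The sketch below uses ClearML's `PipelineController`; the project, task, and queue names are placeholders:

```python
from clearml import PipelineController

# A two-step pipeline: data preparation on a CPU queue, training on a GPU queue.
pipe = PipelineController(name="etl-and-train", project="examples", version="1.0.0")

pipe.add_step(
    name="prepare_data",
    base_task_project="examples",        # placeholder project
    base_task_name="data preparation",   # placeholder existing task to clone
    execution_queue="cpu",               # agents on CPU machines listen here
)
pipe.add_step(
    name="train_model",
    parents=["prepare_data"],
    base_task_project="examples",
    base_task_name="model training",
    execution_queue="gpu",               # agents on GPU machines listen here
)

# The controller itself runs on the default services queue.
pipe.start()
```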
**Automation and Pipeline Management**
That's where automation and pipeline management, recently introduced into the system, come in. It's usually Jupyter notebooks and pipelines built on top of them, and these are transparent in the system. Even with Jupyter notebooks, you don't actually have to store the notebook itself: by introducing two lines of code, the notebook is converted into code.
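Those two lines are simply the standard ClearML task initialization at the top of the notebook; once they run, the notebook is captured as a task (converted into a script) along with its environment. The project and task names below are placeholders:

```python
# The two lines added at the top of a Jupyter notebook:
from clearml import Task

task = Task.init(project_name="examples", task_name="notebook experiment")
```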
**Time-Saving for Traditional Machine Learning Companies**
So this really saves you the time of managing Jupyter notebooks inside your Git repository, which is a huge value for traditional machine learning companies.