What's New in Databricks
lara_rachidi
Databricks Employee

In a nutshell — it’s all about Compound AI Systems

Watch this video for a deep dive into all the GenAI and ML announcements, and read the newsletter below for more details!

Here are the main takeaways in the Machine Learning and GenAI space:

  • Mosaic AI: Build and deploy production-quality Compound AI Systems: The evolution from monolithic AI models to compound AI systems is an active area of both academic and industry research.
  • Watch the Data+AI Summit keynote recording for an overview of how to build production-quality AI systems.
  • Watch this video on how to optimize LLM pipelines with DSPy and learn more about compound systems.

Mosaic AI Model Training

  • Mosaic AI Model Training (formerly known as “Fine-tuning” and “Foundation Model Training”) is in public preview: it allows you to fine-tune open source foundation models with your private data, giving them new knowledge that is specific to a particular domain or task. Once the model is trained, you own the weights and the data, and we make it easy to serve the model through Provisioned Throughput by automatically registering it to your Unity Catalog. With this release, we have expanded availability to most US regions on AWS and Azure. Both supervised fine-tuning and continued pretraining are supported on a set of models.
  • Watch the demo video and step-by-step tutorial on how to use Mosaic AI Model Training, and this video on the benefits of fine-tuning an LLM.
  • Download the demo notebooks to get started with fine-tuning your LLM on Databricks.
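As a rough sketch of the fine-tuning workflow described above, the snippet below assembles the run parameters and shows how a run could be submitted with the `databricks.model_training` SDK. The catalog, schema, and table names are placeholders, and the exact SDK surface may vary by version.

```python
# Sketch: launching a supervised fine-tuning run with Mosaic AI Model Training.
# Assumes the `databricks.model_training` SDK and a Databricks workspace; all
# catalog/schema/table names below are placeholders.

def build_training_config(base_model: str, train_table: str, register_to: str,
                          duration: str = "1ep") -> dict:
    """Collect the run parameters in one place so they are easy to review."""
    return {
        "model": base_model,             # open source base model to fine-tune
        "train_data_path": train_table,  # UC table or file path with training data
        "register_to": register_to,      # UC location for the fine-tuned weights
        "training_duration": duration,   # e.g. "1ep" = one epoch
    }

config = build_training_config(
    base_model="meta-llama/Meta-Llama-3-8B-Instruct",
    train_table="main.finetuning.chat_examples",  # placeholder UC table
    register_to="main.finetuning",                # placeholder UC schema
)

def launch(config: dict):
    """Submit the run (requires workspace credentials; not executed here)."""
    from databricks.model_training import foundation_model as fm
    return fm.create(**config)
```

Because the fine-tuned model is registered to Unity Catalog automatically, serving it afterwards is a matter of pointing a Provisioned Throughput endpoint at the registered model.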

Mosaic AI Agent Framework

  • Mosaic AI Agent Framework is in public preview (see the documentation): it’s a set of tools on Databricks designed to help developers build, deploy, and evaluate production-quality agents. This framework allows you to build an AI system that is safely governed and managed in Unity Catalog. Here is how you can build an agent:
  • Create and log agents using any library and MLflow. Parameterize your agents to experiment and iterate on agent development quickly. You can set up configuration files that let you change code parameters in a traceable way without having to modify the actual code.
  • Deploy agents to production with native support for token streaming and request/response logging, plus a built-in review app for collecting user feedback on your agent. You can deploy agents either through Model Serving or with the deploy() API from databricks.agents.
  • Agent tracing lets you log, analyze, and compare traces across your agent code to debug and understand how your agent responds to requests. You can add traces to your agents using the Fluent and MLflowClient APIs made available with MLflow Tracing.
  • Download the demo notebooks to start building a RAG app with Mosaic AI Agent Framework and Agent Evaluation, Model Serving, and Vector Search
  • Watch this end-to-end demo video on how to log, deploy, and debug agents.
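The steps above can be sketched in code. This is a minimal, hedged outline of parameterizing an agent with a config, logging it with MLflow, and deploying it with `databricks.agents.deploy()`; the endpoint name, UC model name, and config keys are illustrative placeholders, not a fixed schema.

```python
# Sketch: parameterizing, logging, and deploying an agent with Mosaic AI
# Agent Framework. The `databricks.agents` and `mlflow` packages are the ones
# named in the docs; all endpoint and model names below are placeholders.

def build_agent_config(llm_endpoint: str, temperature: float, max_tokens: int) -> dict:
    """Agent parameters kept in a config so you can iterate without code changes."""
    return {
        "llm_endpoint": llm_endpoint,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

agent_config = build_agent_config("databricks-meta-llama-3-70b-instruct", 0.1, 512)

def log_and_deploy(agent_config: dict):
    """Log the agent to MLflow, register it to UC, and deploy it.
    Requires a Databricks workspace; not executed here."""
    import mlflow
    from databricks import agents

    with mlflow.start_run():
        logged = mlflow.pyfunc.log_model(
            artifact_path="agent",
            python_model="agent.py",       # file that defines the agent code
            model_config=agent_config,     # traceable parameter changes, no code edits
            registered_model_name="main.agents.my_agent",  # placeholder UC name
        )
    # deploy() sets up Model Serving, token streaming, logging, and the review app
    return agents.deploy("main.agents.my_agent", logged.registered_model_version)
```

Keeping the parameters in a config object is what makes the "experiment and iterate quickly" workflow traceable: each logged model version records the config it ran with.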

Foundation Model API

  • The Foundation Model API is generally available: foundation models are accessible on a pay-per-token basis as well as with provisioned throughput for production workloads.

Mosaic AI Vector Search

  • Mosaic AI Vector Search now supports Customer Managed Keys and Hybrid Search (GA): Databricks Vector Search is now generally available (see the blog post and documentation). New capabilities have been added: PrivateLink and IP access lists are now supported. Customer Managed Keys (CMK) are also supported on endpoints created on or after May 8, 2024; Vector Search support for CMK is in Public Preview. You can now save generated embeddings as a Delta table (see Create a vector search index). Additionally, Vector Search now supports the GTE-large embedding model, which offers strong retrieval performance and an 8K context window. It also includes improved audit logs and cost attribution tracking.
  • Watch this video (including demo) for a deep dive into Vector search
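To make the index workflow concrete, here is a hedged sketch of describing a Delta Sync index (with Databricks computing the GTE-large embeddings) and querying it with the `databricks-vectorsearch` client. Endpoint, table, and index names are placeholders.

```python
# Sketch: creating a Delta Sync index and running a similarity query with
# Mosaic AI Vector Search. The client calls follow the documented
# `databricks-vectorsearch` API; all names below are placeholders.

def build_index_spec(endpoint: str, source_table: str, index_name: str,
                     embedding_model: str = "databricks-gte-large-en") -> dict:
    """Describe a Delta Sync index whose embeddings Databricks computes for you."""
    return {
        "endpoint_name": endpoint,
        "source_table_name": source_table,  # Delta table to sync from
        "index_name": index_name,           # UC name for the index
        "pipeline_type": "TRIGGERED",
        "primary_key": "id",
        "embedding_source_column": "text",
        "embedding_model_endpoint_name": embedding_model,  # GTE-large, 8K context
    }

spec = build_index_spec("vs_endpoint", "main.docs.chunks", "main.docs.chunks_index")

def create_and_query(spec: dict, query_text: str):
    """Create the index and query it (requires a workspace; not executed here)."""
    from databricks.vector_search.client import VectorSearchClient
    client = VectorSearchClient()
    index = client.create_delta_sync_index(**spec)
    return index.similarity_search(query_text=query_text,
                                   columns=["id", "text"], num_results=5)
```

Since the source is a Delta table, the generated embeddings can also be persisted as a Delta table, as noted above.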

Mosaic AI Tool Catalog and Function-Calling

  • Mosaic AI Tool Catalog and Function-Calling is in public preview: Mosaic AI Tool Catalog allows you to create an enterprise registry of common functions, internal or external, and share these tools across your organization for use in AI applications. Tools can be SQL functions, Python functions, model endpoints, remote functions, or retrievers. These functions can define tasks or tools within compound AI systems. We’ve also enhanced Model Serving to natively support function-calling, so that you can use popular open source models like Llama 3–70B as your agent’s reasoning engine.
  • Check the documentation here and here to get started using function calling.
  • Watch the demo from the Data+AI summit showcasing this capability
  • Download this demo notebook
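A function-calling round trip has two halves: a tool schema you send to the model, and a dispatcher that executes the tool call the model returns. The sketch below uses the OpenAI-style tool format that function-calling endpoints accept; the order-status function itself is a made-up example standing in for a real tool (for instance a UC SQL function).

```python
# Sketch: defining a tool for function-calling on Model Serving and dispatching
# the model's tool call locally. The tool schema follows the OpenAI-style
# format; `get_order_status` is a hypothetical example tool.
import json

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of a customer order by id.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

def get_order_status(order_id: str) -> str:
    """Stand-in for a real lookup (e.g. a SQL function or REST call)."""
    return f"order {order_id}: shipped"

REGISTRY = {"get_order_status": get_order_status}

def dispatch(tool_call: dict) -> str:
    """Execute the function the model asked for, with the arguments it supplied."""
    fn = REGISTRY[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)

# A function-calling endpoint returns a tool call shaped roughly like this:
example_call = {"function": {"name": "get_order_status",
                             "arguments": json.dumps({"order_id": "A-17"})}}
result = dispatch(example_call)  # "order A-17: shipped"
```

With tools registered in the Mosaic AI Tool Catalog, the registry half of this pattern is shared and governed across the organization rather than hand-maintained per app.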

Mosaic AI Agent Evaluation

  • Mosaic AI Agent Evaluation for Automated and Human Assessments is in public preview: It is an AI-assisted evaluation tool that automatically determines whether outputs are high-quality and provides an intuitive UI for gathering feedback from human stakeholders. Agent Evaluation lets you define what high-quality answers look like for your AI system by providing “golden” examples of successful interactions. You can explore permutations of the system, tuning models, changing retrieval, or adding tools, and understand how system changes alter quality. Agent Evaluation also lets you invite subject matter experts across your organization, even those without Databricks accounts, to review and label your AI system’s output, so you can perform production-quality assessments and build up an extended evaluation dataset. Finally, system-provided LLM judges can further scale the collection of evaluation data by grading responses on common criteria such as accuracy or helpfulness. Detailed production traces can help diagnose low-quality responses.
  • Watch the demo from the Data+AI summit showcasing this capability.
  • This feature is also explained in the end-to-end demo video mentioned above on how to log, deploy, and debug agents.
  • Documentation available here.
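The "golden examples" workflow above can be sketched as follows: build a small labeled evaluation set, then hand it to `mlflow.evaluate` with the `databricks-agent` model type so the built-in LLM judges grade the agent. The questions and the UC model name are placeholders.

```python
# Sketch: assembling a "golden" evaluation set and running Agent Evaluation
# through mlflow.evaluate. The request/expected_response fields follow the
# Agent Evaluation docs; the example questions are placeholders.

def make_golden_example(question: str, expected: str) -> dict:
    """One labeled example of a successful interaction."""
    return {"request": question, "expected_response": expected}

eval_set = [
    make_golden_example("What is Unity Catalog?",
                        "Unity Catalog is Databricks' unified governance layer."),
    make_golden_example("What does Vector Search index?",
                        "It indexes embeddings synced from a Delta table."),
]

def run_evaluation(eval_set: list):
    """Grade the agent with built-in LLM judges (requires a workspace; not run here)."""
    import mlflow
    import pandas as pd
    return mlflow.evaluate(
        model="models:/main.agents.my_agent/1",  # placeholder UC model URI
        data=pd.DataFrame(eval_set),
        model_type="databricks-agent",           # enables Agent Evaluation judges
    )
```

Labels collected from subject matter experts in the review app can be appended to the same evaluation set, so the golden dataset grows as the system is reviewed.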

MLflow 2.14

  • MLflow 2.14 is GA: MLflow is a model-agnostic framework for evaluating LLMs and AI systems, allowing you to measure and track parameters at each step. With MLflow 2.14, we released MLflow Tracing. This new feature allows developers to record each step of model and agent inference in order to debug performance issues and build evaluation datasets to test future improvements. Tracing is tightly integrated with Databricks MLflow Experiments, Databricks Notebooks, and Databricks Inference Tables, providing performance insights from development through production.
  • Watch the demo from the Data+AI summit showcasing this capability.
  • Documentation available here.
  • Want to know more about Deep Learning with MLflow? Watch this video.
  • This feature is also explained in the end-to-end demo video mentioned above on how to log, deploy, and debug agents.
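To show what instrumenting an agent with MLflow Tracing looks like, here is a toy two-step pipeline decorated with `@mlflow.trace`, which records each function call as a span. The try/except fallback exists only so the sketch stays runnable where MLflow (or a 2.14+ version) is not installed; it is not part of the MLflow API.

```python
# Sketch: recording each step of a toy retrieval-then-generation pipeline with
# MLflow Tracing. @mlflow.trace is the MLflow 2.14 API; the fallback below is
# only a no-op stand-in for environments without MLflow 2.14+.
try:
    import mlflow
    trace = mlflow.trace
except (ImportError, AttributeError):
    def trace(fn):
        return fn

@trace
def retrieve(question: str) -> list:
    """Pretend retrieval step; each call becomes a span in the trace."""
    return [f"doc about {question}"]

@trace
def answer(question: str) -> str:
    """Pretend generation step that consumes the retrieved context."""
    context = retrieve(question)
    return f"Based on {len(context)} document(s): {question} explained."

print(answer("vector search"))
```

In a Databricks notebook the resulting trace tree appears inline and in the MLflow Experiment, and in production the same traces flow into Inference Tables for debugging and evaluation-set building.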

Mosaic AI Gateway

  • Mosaic AI Gateway (formerly known as “External Models”) provides a unified interface to query, manage, and deploy any open source or proprietary model, enabling customers to easily switch the large language models (LLMs) that power their applications without making complicated changes to the application code. It sits on top of Model Serving to enable rate limiting, permissions, and credential management for model APIs (external or internal). It also provides a single interface for querying foundation model APIs, so you can easily swap out models in your systems and experiment rapidly to find the best model for a use case. Gateway Usage Tracking records who calls each model API, and Inference Tables capture what data was sent in and out. This allows platform teams to understand how to change rate limits, implement chargebacks, and audit for data leakage.
  • Documentation on how to get started here.
  • More features on the roadmap… Stay tuned!
  • Mosaic AI Guardrails is in private preview: It allows you to add endpoint-level or request-level safety filtering to prevent unsafe responses, or even add PII detection filters to prevent sensitive data leakage. AI Guardrails is expected to be available in public preview in the coming months. In the meantime, it’s possible to enable safety filters in the Playground settings.
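As a rough illustration of routing an external model through the gateway with a rate limit, the snippet below builds an endpoint configuration in the style of the external-models serving docs. The provider, model name, secret scope, and limit values are all placeholders, and the exact config schema should be checked against the documentation.

```python
# Sketch: an endpoint config that routes to an external provider through
# Mosaic AI Gateway, with a per-user rate limit. The shape follows the
# external-models serving docs; all names and limits are placeholders.

def build_gateway_endpoint_config(name: str, calls_per_minute: int) -> dict:
    return {
        "name": name,
        "config": {
            "served_entities": [{
                "external_model": {
                    "name": "gpt-4o",       # provider-side model name (placeholder)
                    "provider": "openai",
                    "task": "llm/v1/chat",
                    "openai_config": {
                        # credential pulled from a Databricks secret, not hard-coded
                        "openai_api_key": "{{secrets/my_scope/openai_key}}",
                    },
                },
            }],
        },
        "rate_limits": [
            {"calls": calls_per_minute, "key": "user", "renewal_period": "minute"},
        ],
    }

config = build_gateway_endpoint_config("chat-gateway", 60)

def create(config: dict):
    """Create the endpoint (requires a workspace; not executed here)."""
    from mlflow.deployments import get_deploy_client
    client = get_deploy_client("databricks")
    return client.create_endpoint(name=config["name"], config=config["config"])
```

Because the application only ever talks to the gateway endpoint, swapping `external_model` for a different provider, or for an internal served model, requires no application-code changes.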

system.ai Catalog

  • The system.ai catalog is a curated collection of state-of-the-art open source models managed by Databricks in the system.ai schema of Unity Catalog. You can easily deploy these models using Model Serving Foundation Model APIs or fine-tune them with Mosaic AI Model Training. You can also find all supported models on the Mosaic AI homepage by going to Settings > Developer > Personalized Homepage.

Follow us on LinkedIn: Quentin & Youssef & Lara & Maria & Beatrice