Hi,
Yes, this is absolutely acceptable. Here are some details you may want to consider. I'd also check GPU availability in your cloud and region, i.e. whether there is actually capacity for you to deploy these models. The easiest way to test this is to try spinning up a provisioned throughput model serving endpoint.
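As a rough sketch of that test, creating a provisioned throughput endpoint goes through the serving-endpoints REST API. The helper below just builds the request body; the endpoint name, Unity Catalog model path, and throughput numbers are placeholders you'd replace with your own:

```python
import json

def build_pt_endpoint_config(endpoint_name: str, model_name: str,
                             model_version: str, max_throughput: int) -> dict:
    """Build a request body for POST /api/2.0/serving-endpoints.

    Field names follow the Databricks serving-endpoints API; the model
    path, version, and throughput values passed in are placeholders.
    """
    return {
        "name": endpoint_name,
        "config": {
            "served_entities": [
                {
                    "entity_name": model_name,        # Unity Catalog model path
                    "entity_version": model_version,
                    "min_provisioned_throughput": 0,  # allow scale-to-zero when idle
                    "max_provisioned_throughput": max_throughput,
                }
            ]
        },
    }

# Placeholder values for illustration only.
config = build_pt_endpoint_config(
    "llama3-test",
    "system.ai.meta_llama_v3_3_70b_instruct",  # hypothetical UC path, verify in your workspace
    "1",
    9500,
)
print(json.dumps(config, indent=2))
```

If endpoint creation fails with a capacity error, that is a quick signal that the GPU SKU backing the model is not available in your region.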
Viability of Deploying Meta LLaMA 3 Locally on Databricks
Meta LLaMA 3 is supported for deployment within Databricks via Mosaic AI Model Serving (Foundation Model APIs). You can run LLaMA 3 models entirely within the Databricks infrastructure, keeping all email and PII data inside your company's secure environment; inference requires no data transfer outside Databricks or to external providers.
Security controls in Databricks Model Serving include:
- AES-256 encryption at rest and TLS 1.2+ encryption in transit
- Logical isolation and governance via Unity Catalog
- No customer data used for model training or improvement of Databricks services
- Data residency controls: workspaces in the UK (and EU) process data within those boundaries
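To illustrate the boundary point above: inference calls go to your own workspace's serving endpoint over TLS, so requests never leave Databricks. A minimal sketch using only the Python standard library (workspace URL, endpoint name, and token are placeholders):

```python
import json
import urllib.request

def build_invocation_request(workspace_url: str, endpoint_name: str,
                             token: str, prompt: str) -> urllib.request.Request:
    """Build an HTTPS request against a workspace-local serving endpoint.

    /serving-endpoints/{name}/invocations is the standard Databricks
    scoring path; all arguments here are placeholders.
    """
    url = f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations"
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # workspace PAT or service principal token
            "Content-Type": "application/json",
        },
    )

# Placeholder workspace URL and token for illustration only.
req = build_invocation_request(
    "https://my-workspace.cloud.databricks.com",
    "llama3-test",
    "dapi-REDACTED",
    "Summarize this email thread.",
)
print(req.full_url)
```

Because the endpoint lives inside the workspace, the same Unity Catalog permissions and audit logging apply to these calls as to the rest of your data.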
Meta LLaMA 3.3 70B Instruct and other LLaMA versions are explicitly listed as deployable options; you must comply with the Meta LLaMA Community License and Acceptable Use Policy when using these models.
Alternative Local-Deployable LLMs for Databricks
If you choose not to use LLaMA 3, other open-weight models suitable for private deployments within a Databricks workspace include:
- LLaMA 4 Maverick – Also available for deployment, subject to Meta’s licensing.
- Qwen3-Next-80B-A3B-Instruct (Apache 2.0 License) – Known for efficiency in instruction-following and enterprise contexts.
- OpenAI GPT OSS 20B and 120B – Both licensed under Apache 2.0, optimized for batch inference and can be fully governed locally.
- Google Gemma 3 12B – Licensed for commercial use under Google’s terms, multilingual, designed for text/image tasks.
- Mistral-7B – Common open source model suitable for many enterprise tasks.
These foundation models are all available for serving within your workspace via Mosaic AI Model Serving and do not require external API calls. Licensing compliance is your responsibility: verify the restrictions in each model's license before production use.
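One lightweight way to keep that compliance check from being forgotten is to encode the license per model in your deployment code and flag anything non-permissive for review. The mapping below just restates the licenses listed above (keys are illustrative, and licenses should always be re-verified against each model's current terms):

```python
# License names as listed above; re-verify against each model's current terms.
MODEL_LICENSES = {
    "meta-llama-4-maverick": "Meta Llama Community License",
    "qwen3-next-80b-a3b-instruct": "Apache-2.0",
    "gpt-oss-20b": "Apache-2.0",
    "gpt-oss-120b": "Apache-2.0",
    "gemma-3-12b": "Gemma Terms of Use",
    "mistral-7b": "Apache-2.0",
}

PERMISSIVE = {"Apache-2.0", "MIT"}

def needs_legal_review(model_key: str) -> bool:
    """Flag models whose license is custom rather than a standard permissive one."""
    return MODEL_LICENSES.get(model_key) not in PERMISSIVE
```

For example, `needs_legal_review("meta-llama-4-maverick")` returns `True`, which would gate deployment behind a review of Meta's Community License and Acceptable Use Policy.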