Databricks Clean Rooms: Now Generally Available on AWS and Azure

We’re thrilled to announce the General Availability (GA) of Databricks Clean Rooms on AWS and Azure, a significant step forward in enabling secure, privacy-centric data collaboration. Following the Public Preview launch in August, we've engaged close...

  • 110 Views
  • 0 replies
  • 1 kudos
Wednesday
Welcoming BladeBridge to Databricks: Accelerating Data Warehouse Migrations to Lakehouse

Databricks welcomes BladeBridge, a proven provider of AI-powered migration solutions for enterprise data warehouses. Together, Databricks and BladeBridge will help enterprises accelerate the work required to migrate legacy data warehouses like Oracle...

  • 124 Views
  • 0 replies
  • 0 kudos
Wednesday
Serverless Compute for Notebooks, Workflows and Pipelines is now Generally Available on Google Cloud

In the rapidly evolving landscape of data engineering and analytics, speed, scalability, and simplicity are invaluable. Serverless compute addresses these needs by eliminating the complexity of managing infrastructure, allowing you to focus on buildi...

  • 138 Views
  • 0 replies
  • 0 kudos
Wednesday
Securely share data, analytics and AI

Reduce the cost of sharing across platforms. Gartner predicts that CDOs who have successfully executed data sharing initiatives are 1.7 times more effective in showing business value and ROI from their data analytics strategy. Databricks provides an ...

  • 521 Views
  • 0 replies
  • 3 kudos
2 weeks ago
Check Out the Latest Videos on DatabricksTV

We are thrilled to introduce DatabricksTV – your go-to hub for all things Databricks! DatabricksTV is packed with insightful videos from a diverse range of creators, offering you the latest tips, tutorials, and deep dives into the Databricks Platform...

  • 1822 Views
  • 0 replies
  • 3 kudos
07-30-2024
Data Intelligence for Data Engineers

Join us to find out how a platform built on lakehouse architecture and enhanced with built-in data intelligence automates many of the tasks that bog down engineers. You’ll discover how the Databricks Data Intelligence Platform helps you build secure...

  • 1030 Views
  • 0 replies
  • 3 kudos
2 weeks ago

Community Activity

sachin_kanchan
by Visitor
  • 56 Views
  • 4 replies
  • 0 kudos

Unable to log in to Community Edition

So I just registered for the Databricks Community Edition and received an email for verification. When I click the link, I'm redirected to this website (image attached) where I am asked to input my email. And when I do that, it sends me a verification c...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Let's open a case with databricks-community@databricks.com

3 More Replies
SteveC527
by New Contributor
  • 447 Views
  • 3 replies
  • 0 kudos

Medallion Architecture and Databricks Assistant

I am in the process of rebuilding the data lake at my current company with Databricks, and I'm struggling to find comprehensive best practices for naming conventions and structuring medallion architecture to work optimally with the Databricks assistan...

Latest Reply
dataBuilder
  • 0 kudos

Hello! I am in a similar position, and the medallion architecture makes a lot of sense to me (indeed, I believe we've been following a version of that ourselves for a long time). It seems to me having separate catalogs for each layer (bronze/silver/gold...

2 More Replies
jeremy98
by Contributor
  • 2 Views
  • 0 replies
  • 0 kudos

How to read a particular data type from Postgres into Databricks through JDBC

Hi Community, I need to load data from PostgreSQL into Databricks through JDBC without changing the data type of a VARCHAR[] column in PostgreSQL, which should remain as an array of strings in Databricks. Previously, I used psycopg2, and it worked, but ...
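One common workaround (a minimal sketch; the table and column names my_table / tags are hypothetical) is to cast the array to a delimited string in the pushdown query and split it back into an array<string> on the Databricks side:

```python
# Sketch: read a PostgreSQL VARCHAR[] column over JDBC by serializing it
# in the pushdown query, then restoring array<string> in Spark.
from pyspark.sql.functions import col, split

query = """
    SELECT id,
           array_to_string(tags, '|') AS tags_str  -- tags is VARCHAR[]
    FROM my_table
"""

df = (
    spark.read.format("jdbc")  # `spark` is the notebook's SparkSession
    .option("url", "jdbc:postgresql://<host>:5432/<database>")
    .option("query", query)
    .option("user", "<user>")
    .option("password", "<password>")
    .option("driver", "org.postgresql.Driver")
    .load()
    .withColumn("tags", split(col("tags_str"), r"\|"))  # split() takes a regex
    .drop("tags_str")
)
```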

MohanaBasak
by Databricks Employee
  • 374 Views
  • 3 replies
  • 4 kudos

Unlocking Cost and Performance Insights in Databricks with Custom Dashboards

As organizations continue to scale their data infrastructure, efficient resource utilization, cost control, and operational transparency are paramount for success. With the growing adoption of Databricks, monitoring and optimizing compute usage and d...

Latest Reply
Mantsama4
Contributor
  • 4 kudos

Thank you, Mohana, for sharing the details; really appreciate it.

2 More Replies
busuu
by New Contributor
  • 121 Views
  • 3 replies
  • 1 kudos

Failed to checkout Git repository: RESOURCE_DOES_NOT_EXIST: Attempted to move non-existing node

I'm having issues with checking out a Git repo in Workflows. Databricks can access files from commit `a` but fails to check out the branch when attempting to access commit `b`. The error occurs specifically when trying to check out commit `b`, and Databr...

Latest Reply
Augustus
New Contributor II
  • 1 kudos

I didn't do anything to fix it. Databricks support did something to my workspace to fix the issue. 

2 More Replies
MarkV
by New Contributor III
  • 475 Views
  • 5 replies
  • 0 kudos

DLT, Automatic Schema Evolution and Type Widening

I'm attempting to run a DLT pipeline that uses automatic schema evolution against tables that have type widening enabled. I have code in this notebook that is a list of tables to create/update along with the schema for those tables. This list and spar...

Latest Reply
Sidhant07
Databricks Employee
  • 0 kudos

Alternatively, you can try using the INSERT INTO statement directly:

```python
def load_snapshot_tables(source_system_name, source_schema_name, table_name,
                         spark_schema, select_expression):
    snapshot_load_df = (
        spark.readStream
        .format("clou...
```
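The snippet above is cut off; here is a minimal sketch of the pattern it appears to describe, assuming Auto Loader (the truncated format is presumably "cloudFiles") and hypothetical source paths, file format, and checkpoint locations:

```python
def load_snapshot_tables(source_system_name, source_schema_name, table_name,
                         spark_schema, select_expression):
    target = f"{source_system_name}.{source_schema_name}.{table_name}"

    def insert_batch(batch_df, batch_id):
        # INSERT INTO appends rows without rewriting the table definition,
        # which can avoid some schema-evolution restrictions in the writer.
        batch_df.createOrReplaceTempView("snapshot_batch")
        batch_df.sparkSession.sql(f"INSERT INTO {target} SELECT * FROM snapshot_batch")

    snapshot_load_df = (
        spark.readStream
        .format("cloudFiles")                     # Auto Loader
        .option("cloudFiles.format", "parquet")   # assumption: Parquet landing files
        .schema(spark_schema)
        .load(f"/Volumes/landing/{source_schema_name}/{table_name}")  # hypothetical path
        .selectExpr(*select_expression)           # assumption: a list of SQL expressions
    )

    (
        snapshot_load_df.writeStream
        .foreachBatch(insert_batch)
        .option("checkpointLocation", f"/Volumes/checkpoints/{table_name}")  # hypothetical
        .trigger(availableNow=True)
        .start()
    )
```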

4 More Replies
ohnomydata
by Visitor
  • 18 Views
  • 1 reply
  • 0 kudos

Accidentally deleted files via API

Hello, I'm hoping you might be able to help me. I have accidentally deleted some Workspace files via API (an Azure DevOps code deployment pipeline). I can't see the files in my Trash folder – are they gone forever, or is it possible to recover them on ...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hello @ohnomydata. Unfortunately, files deleted via APIs or the Databricks CLI are permanently deleted and do not move to the Trash folder. The Trash folder is a UI-only feature, and items deleted through the UI can be recovered from the Trash within ...

Somia
by New Contributor
  • 87 Views
  • 6 replies
  • 2 kudos

Resolved! SQL query is not returning _sqldf.

Notebooks in my workspace are not returning _sqldf when a SQL query is run. If I run this code, it gives an error in the second cell that _sqldf is not defined.

First cell:
%sql
select * from some_table limit 10

Second cell:
%sql
select * from _sqldf

Howev...

Latest Reply
Somia
New Contributor
  • 2 kudos

Changing the notebook's default language to Python and using all-purpose compute fixed the issue. I am able to access _sqldf in subsequent SQL or Python cells.
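For reference, a minimal sketch of the working setup (a Python-default notebook on all-purpose compute; the table is Databricks sample data):

```python
# Cell 1 — a SQL cell; the notebook binds its result to `_sqldf`:
# %sql
# SELECT * FROM samples.nyctaxi.trips LIMIT 10

# Cell 2 — a Python cell; the DataFrame from the last SQL cell is
# available implicitly as `_sqldf`:
display(_sqldf.select("fare_amount", "trip_distance").limit(5))
```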

5 More Replies
pradeepvatsvk
by New Contributor II
  • 160 Views
  • 2 replies
  • 0 kudos

Polars to natively read and write through ADLS

Hi everyone, is there a way Polars can directly read files from ADLS through the abfss protocol?

Latest Reply
jennifer986bloc
New Contributor II
  • 0 kudos

@pradeepvatsvk wrote: "Hi everyone, is there a way Polars can directly read files from ADLS through the abfss protocol?"

Hello @pradeepvatsvk, yes, Polars can directly read files from Azure Data Lake Storage (ADLS) using the ABFS (Azure Blob Filesystem) prot...
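The reply is truncated; a minimal sketch of the approach it describes, assuming a recent Polars version and hypothetical account, container, and path names (credential keys follow the object_store Azure options):

```python
import polars as pl

# Polars reads abfss:// paths through its built-in cloud (object_store)
# support; credentials go in storage_options.
storage_options = {
    "account_name": "myaccount",    # hypothetical
    "account_key": "<access-key>",  # alternatively a SAS token or service principal
}

df = pl.read_parquet(
    "abfss://mycontainer@myaccount.dfs.core.windows.net/path/to/data.parquet",
    storage_options=storage_options,
)
```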

1 More Replies
hiryucodes
by New Contributor
  • 86 Views
  • 1 reply
  • 0 kudos

ModuleNotFound when running DLT pipeline

My new DLT pipeline gives me a ModuleNotFound error when I try to request data from an API. For some more context, I develop in my local IDE and then deploy to Databricks using asset bundles. The pipeline runs fine if I try to write a static datafram...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @hiryucodes, Ensure that the directory structure of your project is correctly set up. The module 'src' should be in a directory that is part of the Python path. For example, if your module is in a directory named 'src', the directory structure sho...
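The reply is truncated; a minimal sketch of one common fix, making the project root importable before the import runs (the layout, module, and function names are hypothetical):

```python
import os
import sys

# DLT runs the notebook from its own directory, so append the project
# root (the folder that contains `src/`) to the Python path first.
sys.path.append(os.path.abspath(os.path.join(os.getcwd(), "..")))

from src.api_client import fetch_api_data  # hypothetical module in src/
```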

Rafael-Sousa
by Contributor II
  • 51 Views
  • 3 replies
  • 0 kudos

Managed Delta Table corrupted

Hey guys, recently we added some properties to our Delta table, and after that the table shows an error and we cannot do anything. The error is: (java.util.NoSuchElementException) key not found: spark.sql.statistics.totalSize. I think maybe this i...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @Rafael-Sousa, could you please raise a support case at help@databricks.com so this can be investigated further?

2 More Replies
samtech
by New Contributor
  • 31 Views
  • 1 reply
  • 1 kudos

DAB multiple workspaces

Hi, we have 3 regional workspaces. Assume that we keep separate folders for notebooks, say amer/xx, apac/xx, emea/xx, and separate job/pipeline configurations for each region in Git. How do we make sure that during deploy the appropriate jobs/pipelines are deployed in r...

Latest Reply
Alberto_Umana
Databricks Employee
  • 1 kudos

Hi @samtech, Define separate bundle configuration files for each region. These configuration files will specify the resources (notebooks, jobs, pipelines) and their respective paths. For example, you can have amer_bundle.yml, apac_bundle.yml, and eme...
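The reply is truncated; one idiomatic variant of the same idea is a single databricks.yml with one deployment target per regional workspace (the hosts and paths below are hypothetical), deployed with `databricks bundle deploy -t amer` and so on:

```yaml
# databricks.yml (sketch): shared resource definitions, one target per region.
bundle:
  name: regional-jobs

include:
  - resources/*.yml

targets:
  amer:
    workspace:
      host: https://adb-amer.azuredatabricks.net    # hypothetical
      root_path: /Workspace/deployments/amer
  apac:
    workspace:
      host: https://adb-apac.azuredatabricks.net    # hypothetical
      root_path: /Workspace/deployments/apac
  emea:
    workspace:
      host: https://adb-emea.azuredatabricks.net    # hypothetical
      root_path: /Workspace/deployments/emea
```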

Phani1
by Valued Contributor II
  • 13 Views
  • 1 reply
  • 0 kudos

Databricks On-Premises or in Private Cloud

Hi All, is it possible to store/process the data on-premises or in a private cloud with Databricks? Will this choice affect costs and performance? Please advise, as the customer wants the data stored on-premises or in a private cloud for security reas...

Latest Reply
TakuyaOmi
Valued Contributor II
  • 0 kudos

@Phani1 Databricks does not provide a product that can be directly installed and self-managed in on-premises or private cloud environments. Instead, Databricks primarily operates as a managed service on public cloud platforms such as AWS, Azure, and ...

cpayne_vax
by New Contributor III
  • 16681 Views
  • 12 replies
  • 9 kudos

Resolved! Delta Live Tables: dynamic schema

Does anyone know if there's a way to specify an alternate Unity Catalog schema in a DLT workflow using the @dlt.table syntax? In my case, I'm looping through folders in Azure Data Lake Storage to ingest data. I'd like those folders to get created in different...

Latest Reply
kuldeep-in
Databricks Employee
  • 9 kudos

@user1234567899 Make sure to enable DPM (direct publishing mode) from the Previews page. Once enabled, you should be able to use a schema name in DLT.
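A minimal sketch of the looping pattern this unlocks (folder, schema, and storage names are hypothetical; the schema-qualified table name assumes direct publishing mode is enabled):

```python
import dlt

folders = ["sales", "inventory", "customers"]  # hypothetical ADLS folders

def make_bronze_table(folder: str, schema: str):
    # Factory function so each loop iteration captures its own `folder`.
    @dlt.table(name=f"{schema}.{folder}_bronze")  # schema-qualified under DPM
    def bronze():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")  # assumption: JSON landing files
            .load(f"abfss://landing@myaccount.dfs.core.windows.net/{folder}")
        )

for f in folders:
    make_bronze_table(f, schema=f)  # e.g., one target schema per folder
```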

11 More Replies
Phani1
by Valued Contributor II
  • 9 Views
  • 1 reply
  • 0 kudos

Multiple metastores in Unity Catalog

Hi All, can we have more than one metastore in a region in Unity Catalog? Having a single metastore per region helps keep metadata management organized, but customers are asking for multiple metastores for different needs. Is it possible to have se...

Latest Reply
TakuyaOmi
Valued Contributor II
  • 0 kudos

@Phani1 When attempting to create multiple metastores in a specific region, you may encounter the following error: "This region already contains a metastore. Only a single metastore per region is allowed." Databricks recommends having only one metastore...


Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group