Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

hukel
by Contributor
  • 3993 Views
  • 6 replies
  • 0 kudos

Unsupported datatype 'TimestampNTZType' with liquid clustering

I'm experimenting with liquid clustering and have some questions about compatible types (somewhat similar to "Liquid clustering with boolean columns"). The table is created as: CREATE TABLE IF NOT EXISTS <TABLE> ( _time DOUBLE , timestamp TIMESTAMP_NT...

Latest Reply
Wojciech_BUK
Valued Contributor III
  • 0 kudos

Hi, just an educated guess: there is a limitation in the liquid clustering docs: "You can only specify columns with statistics collected for clustering keys." Perhaps it is related to the data types for which you can collect statistics? But I could not find related docs...

  • 0 kudos
5 More Replies
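Building on that reply's educated guess, a minimal sketch of the kind of workaround it implies, with hypothetical table and column names: use a plain TIMESTAMP (a statistics-eligible type) rather than TIMESTAMP_NTZ for the clustering key.

```python
# Sketch only (hypothetical names): liquid clustering on a plain TIMESTAMP
# column, since TIMESTAMP_NTZ is rejected as a clustering key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS events (
        _time DOUBLE,
        ts TIMESTAMP  -- TIMESTAMP instead of TIMESTAMP_NTZ
    )
    CLUSTER BY (ts)
""")
```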
AndyKeel
by New Contributor II
  • 1268 Views
  • 1 reply
  • 0 kudos

Creating an ADLS storage credential for an AWS Workspace

I'd like to create a storage credential for an Azure Storage Account in an AWS workspace. I then plan to use this storage credential to create an external volume. Is this possible, and if so, what are the steps? Thanks for any help!

Latest Reply
AndyKeel
New Contributor II
  • 0 kudos

Thanks for your help. I'm struggling to create the Storage Credential. I have created a managed identity via an Azure Databricks Access Connector and am making an API call based on what I'm reading in the API docs: Create a storage credential | Storag...

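For reference, a hedged sketch of the shape of that API call; the workspace host, token, and access connector ID below are placeholders, and whether an Azure credential is accepted in an AWS workspace is exactly the open question in this thread.

```python
# Sketch of POST /api/2.1/unity-catalog/storage-credentials with an Azure
# managed identity; all identifiers below are placeholders.
import requests

resp = requests.post(
    "https://<workspace-host>/api/2.1/unity-catalog/storage-credentials",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json={
        "name": "adls_credential",
        "azure_managed_identity": {
            "access_connector_id": "/subscriptions/<sub-id>/resourceGroups/<rg>"
                                   "/providers/Microsoft.Databricks/accessConnectors/<connector>"
        },
    },
)
resp.raise_for_status()
print(resp.json())
```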
DH_Fable
by New Contributor II
  • 1255 Views
  • 0 replies
  • 0 kudos

Downloading multiple Excel files at once from a repo

I have a notebook that produces lots of Excel files, which I want to download to my local machine. Currently I can only download them one by one, which takes a long time when there are a lot of them. Is there a way, without using the Azure CLI, to download all of t...

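One common workaround, sketched with hypothetical DBFS paths: zip all the files into a single archive so only one download is needed.

```python
# Sketch only (hypothetical paths): bundle all Excel outputs into one zip
# under /FileStore, which is downloadable from the browser.
import shutil

shutil.make_archive(
    "/dbfs/FileStore/excel_exports",  # produces excel_exports.zip
    "zip",
    "/dbfs/tmp/excel_outputs",        # folder containing the .xlsx files
)
# Then download from https://<workspace-host>/files/excel_exports.zip
```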
Pratibha
by New Contributor II
  • 2100 Views
  • 0 replies
  • 0 kudos

How max_retry_interval_millis works with retry_on_timeout in Databricks

In my project, I want a job that runs too long to be terminated and then retried, even when the failure is a timeout error. In Databricks, the launched status should show "retried by scheduler", and it should wait min_retry_interval_millis before starting the retry...

Labels: Data Engineering, min_retry_interval_millis
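For context, a hedged sketch of the task-level fields involved; the field names follow the Jobs API, but the values are illustrative only.

```python
# Sketch: retry-related task settings in a Jobs API payload.
task_settings = {
    "timeout_seconds": 3600,             # terminate the attempt after 1 hour
    "max_retries": 3,                    # retry the task up to 3 times
    "min_retry_interval_millis": 60000,  # wait >= 1 minute between attempts
    "retry_on_timeout": True,            # retry even when the failure is a timeout
}
```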
Pratibha
by New Contributor II
  • 3945 Views
  • 2 replies
  • 1 kudos

Want to set execution termination time/timeout limit for job in job config

Hi, I want to set an execution termination time/timeout limit for a job in the job config file. Please help me understand how I can do this by passing a parameter in the job config file.

Latest Reply
RKNutalapati
Valued Contributor
  • 1 kudos

Hi @Pratibha, you can configure optional duration thresholds for a job, including an expected completion time and a maximum completion time. To configure duration thresholds, click Set duration thresholds. If you are creating j...

1 More Replies
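In JSON terms, a minimal hedged sketch of a hard timeout in a job definition; the job and notebook names are placeholders.

```python
# Sketch: timeout_seconds in a job's JSON definition terminates any run that
# exceeds the limit; names and values are illustrative.
job_config = {
    "name": "my_job",
    "timeout_seconds": 7200,  # kill the run after 2 hours
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Jobs/main"},
        }
    ],
}
```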
vinaykumar
by New Contributor III
  • 6704 Views
  • 3 replies
  • 6 kudos

Reading an Iceberg table in S3 from the Databricks console using Spark gives a None error

Hi team, I am facing an issue while reading an Iceberg table from S3: I get a None error when reading the data. Below are the steps I followed: 1. Added the Iceberg Spark connector library to the Databricks cluster. 2. Cluster configuration to enable Iceberg ...

Latest Reply
Ambesh
New Contributor III
  • 6 kudos

Hi @Retired_mod, I am using Databricks Runtime 10.4 (Spark 3.2), so I have downloaded "iceberg-spark-runtime-3.2_2.12". Also, the table exists in the S3 bucket. The error message is: java.util.NoSuchElementException: None.get. I am also attaching a screenshot ...

2 More Replies
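A None.get during table resolution often points at catalog configuration; here is a hedged sketch of the settings an Iceberg Hadoop catalog on S3 typically needs. The catalog name, bucket, and table are placeholders, and these settings normally go in the cluster's Spark config at startup rather than being set at runtime.

```python
# Sketch: typical Iceberg-on-S3 settings (placeholders), usually supplied in
# the cluster Spark config rather than set in the notebook.
iceberg_confs = {
    "spark.sql.extensions":
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
    "spark.sql.catalog.my_catalog": "org.apache.iceberg.spark.SparkCatalog",
    "spark.sql.catalog.my_catalog.type": "hadoop",
    "spark.sql.catalog.my_catalog.warehouse": "s3://my-bucket/warehouse",
}

# With the catalog configured, the table is read through the catalog name:
df = spark.read.table("my_catalog.db.my_table")
df.show()
```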
GCera
by New Contributor II
  • 2890 Views
  • 2 replies
  • 1 kudos

Can we use "Access Connector for Azure Databricks" to access Azure SQL Server?

Is it possible to avoid using a Service Principal (and managing its secrets) via the Python MSAL library and, instead, use the "Access Connector for Azure Databricks" to access Azure SQL Server (just like we do for connecting to Azure Data Lake Stora...

Latest Reply
GCera
New Contributor II
  • 1 kudos

Unfortunately, I guess the answer is no (as of today; see @Wojciech_BUK's reply).

1 More Replies
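Since the Access Connector route appears unsupported for Azure SQL, a hedged sketch of the service-principal fallback the question mentions, using MSAL to obtain a token for the SQL Server JDBC driver; all identifiers are placeholders.

```python
# Sketch: acquire an AAD token for Azure SQL with MSAL (placeholder IDs);
# the token is then passed as the JDBC driver's accessToken property.
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
result = app.acquire_token_for_client(
    scopes=["https://database.windows.net/.default"]
)
access_token = result["access_token"]
```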
Ruby8376
by Valued Contributor
  • 1635 Views
  • 2 replies
  • 1 kudos

Query endpoint on Azure SQL or Databricks?

Hi, currently all the required data resides in an Azure SQL database. We have a project in which we need to query this data on demand from Salesforce Data Cloud, to be used further for reporting in a CRMA dashboard. Do we need to move this data from Azure SQL to Delta l...

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

It depends. If Salesforce Data Cloud has a connector for Azure SQL (either a native one or ODBC/JDBC), you can query it directly. MS also has something like OData. AFAIK, Azure SQL does not have a query API, only one for DB-management purposes. If all of the above is no...

1 More Replies
hv129
by New Contributor
  • 4828 Views
  • 0 replies
  • 0 kudos

java.lang.OutOfMemoryError on Data Ingestion and Storage Pipeline

I have around 25 GB of data in my Azure storage. I am performing data ingestion using Auto Loader in Databricks. Below are the steps I am performing: setting enableChangeDataFeed to true; reading the complete raw data using readStream; writing as Del...

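No reply yet, but bounding the micro-batch size is a common first step for OutOfMemoryError on a large initial load. A hedged sketch with placeholder paths, formats, and table names:

```python
# Sketch: cap how much data Auto Loader pulls per micro-batch so a 25 GB
# backfill is processed incrementally instead of all at once.
spark.conf.set(
    "spark.databricks.delta.properties.defaults.enableChangeDataFeed", "true"
)

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")         # source format assumed
    .option("cloudFiles.maxBytesPerTrigger", "1g")  # bound each micro-batch
    .load("abfss://raw@mystorage.dfs.core.windows.net/data")
)

(
    df.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/raw_ingest")
    .trigger(availableNow=True)
    .toTable("bronze.raw_data")
)
```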
vroste
by New Contributor III
  • 12960 Views
  • 8 replies
  • 5 kudos

Resolved! Unsupported Azure Scheme: abfss

Using Databricks Runtime 12.0, when attempting to mount an Azure blob storage container, I'm getting the following exception: `IllegalArgumentException: Unsupported Azure Scheme: abfss` from dbutils.fs.mount( source="abfss://container@my-storage-accoun...

Latest Reply
AdamRink
New Contributor III
  • 5 kudos

What configs did you tweak? I'm having the same issue.

7 More Replies
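For anyone hitting the same error, a hedged sketch of an abfss mount with the OAuth extra_configs an ADLS Gen2 mount typically requires; the service-principal values, secret scope, and names are placeholders.

```python
# Sketch: mounting abfss usually needs OAuth extra_configs; all secrets and
# IDs below are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<app-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="my-scope", key="sp-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://container@my-storage-account.dfs.core.windows.net/",
    mount_point="/mnt/my-container",
    extra_configs=configs,
)
```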
NLearn
by New Contributor II
  • 877 Views
  • 1 reply
  • 0 kudos

How can I programmatically get my notebook's default language?

I'm writing some code to perform regression testing, which requires the notebook path and its default language. Based on the default language it will perform further analysis. So how can I programmatically get my notebook's default language and save it in some vari...

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

You can get the default language of a notebook using dbutils.notebook.get_notebook_language(). Try this example:

%python
default_language = dbutils.notebook.get_notebook_language()
print(default_language)

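An alternative sketch, in case that helper isn't available on your runtime: the Workspace API's get-status endpoint reports a notebook's language. The host, token, and path below are placeholders.

```python
# Sketch: GET /api/2.0/workspace/get-status returns object metadata,
# including "language" for notebooks; all values are placeholders.
import requests

resp = requests.get(
    "https://<workspace-host>/api/2.0/workspace/get-status",
    headers={"Authorization": "Bearer <personal-access-token>"},
    params={"path": "/Users/me@example.com/my_notebook"},
)
resp.raise_for_status()
print(resp.json().get("language"))  # e.g. "PYTHON"
```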
