Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by Hardy (New Contributor III)
  • 10041 Views
  • 5 replies
  • 6 kudos

The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption

I am trying to connect to SQL Server through JDBC from a Databricks notebook. (Below is my notebook command.) `val df = spark.read.jdbc(jdbcUrl, "[MyTableName]", connectionProperties); println(df.schema)` When I execute this command with DBR 10.4 LTS it works fin...

Latest Reply by DBXC (Contributor)

Try adding the following parameters to your SQL connection string; it fixed my problem on 13.X and 12.X: `;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;` (a hedged PySpark example is sketched below this thread).

  • 6 kudos
4 More Replies
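The snippet in the original post is Scala, but the same fix applies from PySpark: append the parameters suggested in the reply to the JDBC URL. A minimal sketch, assuming an Azure SQL endpoint; the server, database, table, and credentials below are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# JDBC URL with the parameters suggested in the reply above appended.
jdbc_url = (
    "jdbc:sqlserver://myserver.database.windows.net:1433;"
    "database=mydb;"
    "encrypt=true;"
    "trustServerCertificate=true;"
    "hostNameInCertificate=*.database.windows.net;"
)

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.MyTable")   # placeholder table name
    .option("user", "my_user")          # better: read credentials from a secret scope
    .option("password", "my_password")
    .load()
)
print(df.schema)
```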
by Clampazzo (New Contributor II)
  • 2752 Views
  • 3 replies
  • 0 kudos

Power BI RLS running extremely slowly with databricks

Hi Everyone, I am brand new to Databricks and am setting up my first Semantic Model with RLS, and I have run into an unexpected problem. When I was testing my model with filters applied (which the RLS would handle later on), it runs extremely fast. I look...

Labels: Data Engineering, Power BI, sql
Latest Reply by KTheJoker (Databricks Employee)

Are you trying to use Power BI RLS rules on top of DirectQuery? Can you give an example of the rules you're trying to apply? Are they static roles, or dynamic roles based on the user's UPN/email being in the dataset?

  • 0 kudos
2 More Replies
by Mathias_Peters (Contributor II)
  • 2624 Views
  • 2 replies
  • 1 kudos

Resolved! DLT table not picked up in Python notebook

Hi, I am a bit stumped at the moment because I cannot figure out how to get a DLT table definition picked up in a Python notebook. 1. I created a new notebook in Python. 2. Added the following code: %python import dlt from pyspark.sql.functions import * @dlt.table(...

Latest Reply by Mathias_Peters (Contributor II)

Ok, it seems that the default language of the notebook and the language of a particular cell can clash. If the default is set to Python, switching a cell to SQL won't work in DLT and vice versa. This is super unintuitive tbh.

  • 1 kudos
1 More Reply
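For readers hitting the same issue, here is a minimal, self-contained version of the truncated Python snippet above. It assumes the notebook's default language is set to Python (per the accepted reply, a cell whose magic command differs from the notebook default is not picked up by DLT); the table name and source path are placeholders, and `spark` is provided by the pipeline runtime.

```python
import dlt
from pyspark.sql.functions import col

@dlt.table(
    name="my_bronze_table",  # placeholder table name
    comment="Example DLT table defined in a Python notebook",
)
def my_bronze_table():
    # Placeholder source; replace with your own location or table.
    return (
        spark.read.format("json")
        .load("/Volumes/my_catalog/my_schema/raw/")
        .where(col("value").isNotNull())
    )
```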
by Muralijv (New Contributor)
  • 2018 Views
  • 1 reply
  • 0 kudos

Databricks REST API

Hi, I am trying to create a global init script using the REST API, which succeeds in the first step using PowerShell. In the second step I am trying to enable it using the REST API and I get the following error. Any guidance or help is appreciated. ...

Latest Reply by feiyun0112 (Honored Contributor)

This API uses the PATCH method, but you are using POST: PATCH /api/2.0/workspace-conf (see https://docs.databricks.com/api/workspace/workspaceconf/setstatus). A hedged example of issuing a PATCH is sketched below this thread.

  • 0 kudos
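The screenshots from the original post are not preserved, so the exact request is unknown; the reply's point is simply that the update call must be a PATCH rather than a POST. A hedged sketch using Python's requests library against the Global Init Scripts endpoint; the host, token, script_id, name, and script body are placeholders, and the required fields for the update should be checked against the linked documentation.

```python
import base64
import requests

host = "https://<workspace-host>"        # placeholder workspace URL
token = "<personal-access-token>"        # placeholder PAT
script_id = "<id returned by the create call>"

payload = {
    "name": "my-global-init-script",     # hypothetical script name
    "script": base64.b64encode(b"#!/bin/bash\necho hello").decode(),
    "enabled": True,                     # the flag the poster wants to flip
}

# PATCH, not POST, per the reply above.
resp = requests.patch(
    f"{host}/api/2.0/global-init-scripts/{script_id}",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print(resp.status_code)
```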
by Faisal (Contributor)
  • 8801 Views
  • 1 reply
  • 0 kudos

DLT SQL

What is the best practice to implement parameterization in DLT SQL pipelines (specifically) so that it is easy and no manual intervention is required to migrate from dev_region to prod_region?

Latest Reply by _databreaks (New Contributor II)

I would love to see a sample implementation of this config table.

  • 0 kudos
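The config-table approach referenced in the reply is not shown in the thread. One common alternative for dev-to-prod migration is pipeline configuration values: set a key per environment under the pipeline's configuration settings and reference it from the pipeline code. A hedged Python-side sketch; the key name, default value, and paths are hypothetical, and in a SQL dataset the same configuration value can typically be referenced with ${key} substitution.

```python
import dlt

# Hypothetical key set in the pipeline settings, e.g.
#   "configuration": {"mypipeline.region": "prod_region"}
# A SQL dataset in the same pipeline can typically reference it as ${mypipeline.region}.
region = spark.conf.get("mypipeline.region", "dev_region")

@dlt.table(comment=f"Sales ingested for {region}")
def sales():
    # Placeholder source path parameterized by environment.
    return spark.read.format("delta").load(f"/mnt/{region}/bronze/sales")
```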
by harraz (New Contributor III)
  • 10391 Views
  • 3 replies
  • 0 kudos

Unable to use unity catalog in notebook

com.databricks.backend.common.rpc.SparkDriverExceptions$SQLExecutionException: org.apache.spark.sql.connector.catalog.CatalogNotFoundException: Catalog 'uc-dev' plugin class not found: spark.sql.catalog.uc-dev is not defined ....I get the above when ...

Latest Reply by Tomas (New Contributor II)

I had the same error (plugin class not found: spark.sql.catalog... is not defined) immediately after attaching the workspace to Unity Catalog. The error was resolved by restarting the SQL Warehouse. It seems that if the SQL Warehouse (or any cluster) is runnin...

  • 0 kudos
2 More Replies
by Christine (Contributor II)
  • 11250 Views
  • 9 replies
  • 5 kudos

Resolved! pyspark dataframe empties after it has been saved to delta lake.

Hi, I am facing a problem that I hope to get some help to understand. I have created a function that is supposed to check if the input data already exist in a saved delta table and if not, it should create some calculations and append the new data to...

Latest Reply by SharathE (New Contributor III)

Hi, I'm also having a similar issue. Does creating a temp view and reading it again after saving to a table work?

  • 5 kudos
8 More Replies
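The accepted answer is not visible in this listing, but the symptom described (a DataFrame that looks empty right after its rows were appended) is commonly caused by lazy evaluation: the "new rows" DataFrame is re-computed against the Delta table after the append, so the anti-join now matches everything and returns nothing. A hedged sketch of one workaround that materializes the rows before writing; the table name and join key are placeholders, and `spark` is the notebook's session.

```python
from pyspark.sql import DataFrame

def append_if_new(incoming: DataFrame, target_table: str) -> DataFrame:
    existing = spark.read.table(target_table)

    # Rows not yet present in the target (placeholder join key "id").
    new_rows = incoming.join(existing, on="id", how="left_anti").persist()
    new_rows.count()  # force evaluation while the target table is still unchanged

    new_rows.write.format("delta").mode("append").saveAsTable(target_table)
    return new_rows   # still holds the pre-append rows thanks to persist()
```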
by SankaraiahNaray (New Contributor II)
  • 31587 Views
  • 10 replies
  • 5 kudos

Not able to read text file from local file path - Spark CSV reader

We are using the Spark CSV reader to read a CSV file and convert it to a DataFrame, and we are running the job on yarn-client; it works fine in local mode. We are submitting the Spark job from an edge node. But when we place the file in a local file path instead...

Latest Reply by AshleeBall (New Contributor II)

Thanks for your help. It helped me a lot.

  • 5 kudos
9 More Replies
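The marked solution is not shown here, but the usual explanation for this symptom is that a local (file:/) path must exist on the driver and on every executor that runs a task, which is rarely the case on a YARN cluster when the file only lives on the edge node. A hedged sketch of the two path schemes involved; the paths are placeholders, and the modern spark.read.csv is used in place of the original spark-csv package.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Only works if this path exists on the driver AND on every executor node,
# because tasks open the file locally.
df_local = spark.read.option("header", "true").csv("file:///home/user/data/input.csv")

# More robust on YARN: copy the file to shared storage first, e.g.
#   hdfs dfs -put /home/user/data/input.csv /data/
# and read it from there.
df_hdfs = spark.read.option("header", "true").csv("hdfs:///data/input.csv")
```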
by Karene (New Contributor)
  • 1913 Views
  • 1 reply
  • 0 kudos

Databricks Connection to Redash

Hello, I am trying to connect my Redash account with Databricks so that my organization can run queries on the data in Unity Catalog from Redash. I followed the steps in the documentation and managed to connect successfully. However, I am only ...

Latest Reply by JameDavi_51481 (Contributor)

it looks like the Redash connector for Databricks is hard-coded to run `SHOW DATABASES`, which only shows `hive_metastore` by default. This probably needs to be updated to run `SHOW CATALOGS` and then `SHOW SCHEMAS in <catalog_name>` for each of thos...

  • 0 kudos
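A hedged sketch of the enumeration the reply describes, run from a notebook: list the catalogs, then the schemas inside each one, rather than only `SHOW DATABASES` (which surfaces hive_metastore by default). Positional indexing is used because the result column names can vary across runtimes, and catalog names are backquoted in case they contain hyphens.

```python
# List every catalog.schema pair visible to the current principal.
catalogs = [row[0] for row in spark.sql("SHOW CATALOGS").collect()]

for catalog in catalogs:
    for row in spark.sql(f"SHOW SCHEMAS IN `{catalog}`").collect():
        print(f"{catalog}.{row[0]}")
```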
by ipreston (New Contributor III)
  • 6598 Views
  • 6 replies
  • 0 kudos

Possible false positive warning on DLT pipeline

I have a DLT pipeline script that starts by extracting metadata on the tables it should generate from a delta table. Each record returned from the table should be a dlt table to generate, so I use .collect() to turn each row into a list and then iter...

Latest Reply by ipreston (New Contributor III)

Thanks for the reply. Based on that response though, it seems like the warning itself is a bug in the DLT implementation. Per the docs "However, you can include these functions outside of table or view function definitions because this code is run on...

  • 0 kudos
5 More Replies
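A hedged sketch of the metadata-driven pattern the post describes: collect the metadata rows on the driver, then register one DLT table per row through a small factory function. The collect() runs outside any table function, which is the part the quoted docs say is allowed during the pipeline's discovery phase. The metadata table and its column names are placeholders.

```python
import dlt

# Collected once on the driver, outside any @dlt.table function.
meta_rows = spark.read.table("config.dlt_table_metadata").collect()

def make_table(target_name: str, source_path: str):
    # Factory function so each generated table captures its own parameters.
    @dlt.table(name=target_name, comment=f"Generated from metadata for {target_name}")
    def _table():
        return spark.read.format("delta").load(source_path)

for row in meta_rows:
    make_table(row["target_table"], row["source_path"])
```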
by NataliaCh (New Contributor)
  • 2039 Views
  • 0 replies
  • 0 kudos

Delta table cannot be reached with INTERNAL_ERROR

Hi all! I've been dropping and recreating delta tables at the new location. For one table something went wrong, and now I can neither DROP nor recreate it. It is visible in the catalog; however, when I click on the table I see the message: [INTERNAL_ERROR] The ...

by ashraf1395 (Honored Contributor)
  • 1381 Views
  • 1 reply
  • 0 kudos

How to extend free trial period or enter free startup tier to complete our POC for a client.

We are a data consultancy. Our free trial period is about to end, and we are still doing a POC for one of our potential clients, focusing on providing expert services around Databricks. 1. Is there a possibility that we can extend the free t...

Latest Reply by Mo (Databricks Employee)

hey @ashraf1395, I suggest you contact your databricks representative or account manager.

  • 0 kudos
by Mohit_m (Valued Contributor II)
  • 37606 Views
  • 3 replies
  • 4 kudos

Resolved! How to get the Job ID and Run ID and save into a database

We have a Databricks Job running with a main class and a JAR file. Our JAR code base is in Scala. Now, when our job starts running, we need to log the Job ID and Run ID into a database for future use. How can we achieve this?

Latest Reply by Bruno-Castro (New Contributor II)

That article is for members only; can we also spell out here how to do it (for those who are not Medium members)? Thanks!

  • 4 kudos
2 More Replies
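Since the linked article is paywalled, here is one common approach, sketched with hedging: configure the job to pass Databricks' dynamic value references {{job.id}} and {{job.run_id}} as task parameters, then read them in the entry point and persist them. The thread's code base is Scala, but the sketch below is Python for consistency with the other examples on this page; the audit table is a placeholder, and a JDBC write to an external database would follow the same shape.

```python
import sys
from pyspark.sql import SparkSession

def main() -> None:
    # The job's task parameters are assumed to be configured as:
    #   ["{{job.id}}", "{{job.run_id}}"]
    job_id, run_id = sys.argv[1], sys.argv[2]

    spark = SparkSession.builder.getOrCreate()
    (spark.createDataFrame([(job_id, run_id)], "job_id string, run_id string")
        .write.mode("append")
        .saveAsTable("audit.job_runs"))   # placeholder audit table

if __name__ == "__main__":
    main()
```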
