Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

Sudheer2
by New Contributor III
  • 430 Views
  • 0 replies
  • 0 kudos

How to Fetch Azure OpenAI api_version and engine Dynamically After Resource Creation via Python?

Hello, I am using Python to automate the creation of Azure OpenAI resources via the Azure Management API. I am successfully able to create the resource, but I need to dynamically fetch the following details after the resource is created: API Version (a...
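A minimal sketch of one way to fetch these values after creation, assuming the resource lives under Microsoft.CognitiveServices: the Azure Management (ARM) API can list the account's deployments, whose names are what older SDKs call the engine; the data-plane api_version is chosen from the published versions rather than returned by ARM. The subscription, resource group, and account names below are placeholders.

```python
# Hedged sketch: list Azure OpenAI deployments via the Azure Management API.
# SUBSCRIPTION_ID, RESOURCE_GROUP, and ACCOUNT_NAME are placeholders; check the
# ARM api-version string against the Cognitive Services REST reference.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT_NAME = "<openai-account-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.CognitiveServices"
    f"/accounts/{ACCOUNT_NAME}/deployments?api-version=2023-05-01"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Each deployment's name is the value used as the "engine"/deployment in client calls.
for dep in resp.json().get("value", []):
    model = dep["properties"]["model"]
    print(dep["name"], model.get("name"), model.get("version"))
```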

Avvar2022
by Contributor
  • 4936 Views
  • 8 replies
  • 3 kudos

Unity catalog enabled workspace -Is there any way to disable workflow/job creation for certain users

Currently, in a Unity Catalog-enabled workspace, users with "Workspace access" can create workflows/jobs; there is no access control available to restrict users from creating jobs/workflows. Use case: In production there is no need for users, data enginee...

Latest Reply
Avvar2022
Contributor
  • 3 kudos

@Lakshay Databricks offers a robust platform with a variety of features, including data ingestion, engineering, science, dashboards, and applications. However, I believe that some features, such as workflow/job creation, alerts, dashboards, and Genie...

7 More Replies
SamGreene
by Contributor II
  • 1858 Views
  • 3 replies
  • 0 kudos

String to date conversion errors

Hi, I am getting data from CDC on SQL Server using Informatica, which is writing parquet files to ADLS. I read the parquet files using DLT and end up with the date data as a string such as '20240603164746563'. I couldn't get this to convert using m...

Latest Reply
SamGreene
Contributor II
  • 0 kudos

Checking on my current code, this is what I am using, which works for me because we don't use daylight saving time: from_utc_timestamp(date_time_utc, 'UTC-7') as date_time_local
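A minimal PySpark sketch of the overall approach from this thread, assuming the string is yyyyMMddHHmmss followed by milliseconds (as in '20240603164746563'); the column names are illustrative, and the fixed 'UTC-7' offset only works where daylight saving time is not observed, as noted above.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Illustrative input resembling the value from the question.
df = spark.createDataFrame([("20240603164746563",)], ["date_time_str"])

df = (
    df
    # Parse the first 14 characters (yyyyMMddHHmmss); the trailing milliseconds
    # are dropped here for simplicity.
    .withColumn("date_time_utc",
                F.to_timestamp(F.substring("date_time_str", 1, 14), "yyyyMMddHHmmss"))
    # Shift from UTC to a fixed offset, matching the reply above (no DST handling).
    .withColumn("date_time_local",
                F.from_utc_timestamp("date_time_utc", "UTC-7"))
)
df.show(truncate=False)
```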

2 More Replies
GeKo
by New Contributor III
  • 14082 Views
  • 5 replies
  • 0 kudos

Insufficient privileges: User does not have permission SELECT on any file

Hello, after switching to "shared cluster" usage a Python job is failing with this error message: Py4JJavaError: An error occurred while calling o877.load. : org.apache.spark.SparkSecurityException: [INSUFFICIENT_PERMISSIONS] Insufficient privileges: User...
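One workaround commonly suggested for this particular error, shown as a hedged sketch rather than a confirmed fix for this thread: on clusters that enforce legacy (hive_metastore) table ACLs, an admin can grant the ANY FILE privilege the message refers to. Whether that is appropriate depends on your governance model, and the principal below is a placeholder; Unity Catalog tables or volumes are usually the cleaner path.

```python
# Hedged sketch: an admin grants the legacy table-ACL privilege named in the
# error. Assumes a Databricks Spark session; the principal is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("GRANT SELECT ON ANY FILE TO `some_user@example.com`")
```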

Community Platform Discussions
permissions
privileges
python
Latest Reply
Uj337
New Contributor III
  • 0 kudos

Hi @GeKo, the checkpoint directory, is that set at the cluster level, or how do we set that? Can you please help me with this?

4 More Replies
RobsonNLPT
by Contributor III
  • 1133 Views
  • 1 reply
  • 0 kudos

Databricks UC Data Lineage Official Limitations

Hi all. I have a huge data migration project using the medallion architecture, UC, notebooks, and workflows. One of the relevant requirements we have is to capture all data dependencies (upstream and downstream) using data lineage. I've followed all re...

Latest Reply
MathieuDB
Databricks Employee
  • 0 kudos

Hello @RobsonNLPT, yes, SQL CTEs are supported by the data lineage service; you can track tables that were created using CTEs. Here is an example that demonstrates the feature: CREATE TABLE IF NOT EXISTS mpelletier.dbdemos.menu ( recipe_id INT, ...
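Since the example above is cut off in the preview, here is a rough sketch of the kind of CTE-based table creation lineage can track; the catalog, schema, table, and column names are illustrative only.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative CTAS through a CTE: lineage should still record the
# main.demo.menu -> main.demo.menu_summary dependency even though the
# source read happens inside the CTE.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.demo.menu_summary AS
    WITH priced AS (
        SELECT recipe_id, price
        FROM main.demo.menu
        WHERE price IS NOT NULL
    )
    SELECT recipe_id, AVG(price) AS avg_price
    FROM priced
    GROUP BY recipe_id
""")
```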

OlehSemeniuk
by New Contributor II
  • 508 Views
  • 3 replies
  • 1 kudos

Resolved! Ingesting and Transforming NetCDF Data in Delta Table on Databricks Cluster

Hi, I need to ingest and transform historical climate data into a Delta table. The data is stored in .nc format (NetCDF). To work with this format, specific C libraries for Python are required, along with particular versions of Python libraries (e.g.,...
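A minimal sketch of one common pattern for this kind of ingestion (not taken from the replies): install the NetCDF stack on the cluster, read the file with xarray, flatten it to pandas, and write a Delta table. The library choice, file path, and table name are assumptions, and for large archives you would batch this rather than pull everything through pandas.

```python
# Assumes the cluster already has the libraries installed, e.g. via
#   %pip install xarray netCDF4
# or a cluster library / init script. Paths and names are placeholders.
import xarray as xr
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read one NetCDF file; the netcdf4 engine is what needs the C libraries.
ds = xr.open_dataset("/dbfs/tmp/climate_sample.nc", engine="netcdf4")

# Flatten the gridded dataset (time/lat/lon dimensions become ordinary columns).
pdf = ds.to_dataframe().reset_index()

sdf = spark.createDataFrame(pdf)
(sdf.write
    .format("delta")
    .mode("append")
    .saveAsTable("main.climate.raw_observations"))
```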

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Great, please let us know in case any assistance is needed

2 More Replies
Brianhourigan
by New Contributor II
  • 802 Views
  • 5 replies
  • 0 kudos

Service Principal Access to Users Directory in Databricks - Creating Git Folders

I am trying to automate the creation of git folders in user workspace directories triggered by GitHub feature branch creation. When developers create feature branches in GitHub, we want a service principal to automatically create corresponding git fo...
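One way this is often approached, sketched here with the Repos API and a service-principal token; the host, repo URL, and target path are placeholders, and the service principal still needs permission on the user's folder, which is the crux of this thread.

```python
# Hedged sketch: create a Git folder (repo) under a user's workspace directory
# via the Repos API, authenticated as a service principal. All values are
# placeholders; adjust provider/url/path to your setup.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-xxxx.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # token for the service principal

payload = {
    "url": "https://github.com/my-org/my-repo.git",
    "provider": "gitHub",
    "path": "/Users/developer@example.com/feature-my-branch",
}
resp = requests.post(f"{HOST}/api/2.0/repos",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json=payload)
resp.raise_for_status()
print(resp.json()["id"])
```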

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @Brianhourigan, can you please DM your suggestions? I can add them to our internal AHA idea.

4 More Replies
iptkrisna
by New Contributor III
  • 1018 Views
  • 5 replies
  • 0 kudos

Restore deleted databricks jobs and job runs

Hi All, is there a way to restore deleted Databricks jobs? Thank you.

Community Platform Discussions
Databricks
job-runs
Workflows
Latest Reply
hari-prasad
Valued Contributor II
  • 0 kudos

Hi @iptkrisna, currently there is no option to recover deleted items. In such architectures, it is not necessary to control or manage the final code available in the system. Instead, the focus should be on controlling and managing how code and jobs are deploye...

4 More Replies
mrstevegross
by Contributor
  • 688 Views
  • 7 replies
  • 0 kudos

Resolved! Tutorial docs for running a job using serverless?

I'm exploring whether serverless (https://docs.databricks.com/en/jobs/run-serverless-jobs.html#create-a-job-using-serverless-compute) could be useful for our use case. I'd like to see an example of using serverless via the API. The docs say "To learn...
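For anyone looking for a concrete starting point, a rough sketch using the Databricks Python SDK; the notebook path is a placeholder, and the key idea (per the serverless jobs docs) is that a notebook task with no cluster specification runs on serverless jobs compute.

```python
# Hedged sketch with the databricks-sdk package; verify field names against the
# Jobs API reference for your workspace.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

created = w.jobs.create(
    name="serverless-example",
    tasks=[
        jobs.Task(
            task_key="main",
            # No new_cluster / existing_cluster_id / job_cluster_key here, so the
            # notebook task should run on serverless jobs compute.
            notebook_task=jobs.NotebookTask(
                notebook_path="/Workspace/Users/someone@example.com/example_notebook"
            ),
        )
    ],
)
print(f"Created job {created.job_id}")
# To trigger a run: w.jobs.run_now(job_id=created.job_id)
```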

Latest Reply
mrstevegross
Contributor
  • 0 kudos

Thanks!

6 More Replies
mrstevegross
by Contributor
  • 695 Views
  • 6 replies
  • 0 kudos

preloaded_docker_images: how do they work?

At my org, when we start a Databricks cluster, it often takes a while to become available (due to (1) instance provisioning, (2) library loading, and (3) init script execution). I'm exploring whether an instance pool could be a viable strategy for im...
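Not an answer from the replies, just a hedged sketch of how preloaded_docker_images appears in an Instance Pools create call; the pool name, node type, image URL, and credentials are placeholders, and the field names should be checked against the Instance Pools API reference.

```python
# Hedged sketch: create an instance pool whose idle instances pre-pull a custom
# Databricks Container Services image, so clusters from the pool skip the
# image-download step. All values are placeholders.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]
TOKEN = os.environ["DATABRICKS_TOKEN"]

payload = {
    "instance_pool_name": "prewarmed-dcs-pool",
    "node_type_id": "Standard_DS3_v2",
    "min_idle_instances": 2,
    "idle_instance_autotermination_minutes": 60,
    "preloaded_docker_images": [
        {
            "url": "myregistry.azurecr.io/my-databricks-image:latest",
            "basic_auth": {"username": "<user>", "password": "<token>"},
        }
    ],
}
resp = requests.post(f"{HOST}/api/2.0/instance-pools/create",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json=payload)
resp.raise_for_status()
print(resp.json()["instance_pool_id"])
```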

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Sure, I will inform the team in charge of it to review it.

5 More Replies
aonurdemir
by New Contributor III
  • 351 Views
  • 1 reply
  • 1 kudos

Resolved! Is there a cluster option for dashboards?

Hi everyone, I do not want to use a 4 DBU/h XS warehouse since I have very tiny data at my new startup. I want to create a minimal cluster and run it as the underlying SQL engine for my dashboard. Thanks.

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Unfortunately no. Because dashboards are part of the SQL service on the platform, they are designed to work with SQL warehouses only. You can create notebook dashboards that work with regular clusters, but functionality will be limited in ...

h2p5cq8
by New Contributor III
  • 651 Views
  • 5 replies
  • 1 kudos

Resolved! Databricks workflow with sequenced tasks

I have a continuous workflow. It is continuous because I would like it to run every minute, and if it has stuff to do, the first task will take several minutes. As I understand it, continuous workflows won't requeue while a job is currently running, where...

Latest Reply
Alberto_Umana
Databricks Employee
  • 1 kudos

Hi @h2p5cq8, no problem! You can disable the queue option to stop that: go to Advanced settings in the Job details side panel and toggle off the Queue option to prevent runs from being queued.
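The API equivalent of that toggle, as a hedged sketch (the job ID is a placeholder and the field names should be checked against the Jobs API reference): a job's settings carry a queue block whose enabled flag matches the UI option described above.

```python
# Hedged sketch: turn off queueing for an existing job via the Python SDK.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()
w.jobs.update(
    job_id=123456789,  # placeholder job ID
    new_settings=jobs.JobSettings(
        queue=jobs.QueueSettings(enabled=False),
    ),
)
```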

4 More Replies
vicky403
by New Contributor
  • 697 Views
  • 1 reply
  • 0 kudos

How Development Target works for multiple users?

Hi, I'm using a Databricks asset bundle to deploy my job to Azure Databricks. I want to configure the Databricks bundle so that when anyone runs the Azure pipeline, a job is created under their name in the format dev_username_job. Using a personal ac...

Latest Reply
zuzsad
New Contributor II
  • 0 kudos

Were you able to solve this?

ahsan_aj
by Contributor II
  • 3382 Views
  • 5 replies
  • 0 kudos

Azure Databricks Enterprise Application User Impersonation Token Group Claims Issue

Hi all, I am using the Azure Databricks Microsoft Managed Enterprise Application scope (2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/user_impersonation) to fetch an access token on behalf of a user. The authentication process is successful; however, the acce...

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @ahsan_aj, you can modify your token request by adding a claims parameter:

const claimsRequest = {
    "access_token": {
        "groups": null
    }
}

https://learn.microsoft.com/en-us/security/zero-trust/develop/configure-tokens-gro...
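The snippet above is JavaScript; for anyone doing this from Python, a rough equivalent with MSAL passes the same claims JSON through the claims_challenge parameter (support may vary by MSAL version, and every ID, secret, and token below is a placeholder).

```python
# Hedged sketch: request the Azure Databricks user_impersonation scope on behalf
# of a user and ask Entra ID to emit the groups claim. All values are placeholders.
import json
import msal

app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

claims = json.dumps({"access_token": {"groups": None}})

result = app.acquire_token_on_behalf_of(
    user_assertion="<incoming-user-access-token>",
    scopes=["2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/user_impersonation"],
    claims_challenge=claims,
)
print("access_token" in result)
```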

4 More Replies
JohnsonBDSouza
by New Contributor II
  • 3910 Views
  • 2 replies
  • 0 kudos

Unable to create Iceberg tables pointing to data in S3 and run queries against the tables.

I need to set up Iceberg tables in a Databricks environment, but the data resides in an S3 bucket, and then read these tables by running SQL queries. The Databricks environment has access to S3; this is done by setting up access by mapping the Instance Pr...

Latest Reply
Venkat5
New Contributor II
  • 0 kudos

It looks like Databricks is making it difficult to use Iceberg tables. There is no clear online documentation or steps provided for use with plain Spark and Spark SQL, and the errors thrown in the Databricks environment are very cryptic. They wanted to...

1 More Replies
