Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

ChristianRRL
by Valued Contributor III
  • 8584 Views
  • 3 replies
  • 3 kudos

Resolved! DLT Job Clusters: Continuous vs Triggered Cluster Start Times

Hi there, I'm curious if anyone can definitively explain how DLT job clusters operate/run. For example, the following is my baseline understanding of DLT job clusters. If I run a triggered DLT pipeline (e.g. daily), the job cluster takes m...

Latest Reply
melbourne
Contributor
  • 3 kudos

Ideally one would expect the clusters used for a DLT pipeline to terminate after the pipeline execution has finished. However, while running in `development` mode you'll notice the cluster doesn't terminate on its own, whereas in `production` it terminates ...
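For context, the pipeline's mode is controlled by the `development` flag in its settings. A minimal sketch of the settings JSON (the pipeline name and notebook path are placeholders):

```json
{
  "name": "my_dlt_pipeline",
  "development": true,
  "continuous": false,
  "libraries": [
    { "notebook": { "path": "/Repos/me/project/dlt_pipeline" } }
  ]
}
```

With `development` set to `true`, the cluster is kept warm for reuse between runs; setting it to `false` gives the production behavior of terminating promptly after the update finishes.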

2 More Replies
al2co33
by New Contributor
  • 2318 Views
  • 1 reply
  • 0 kudos

Can I update a table comment using REST API?

https://docs.databricks.com/api/workspace/tables
It seems I can only list/delete tables. Is there a way to update a table's metadata, like the comment or detail fields, via the REST API?

Latest Reply
Ayushi_Suthar
Databricks Employee
  • 0 kudos

Hi @al2co33, we don't currently provide an API for updating table comments; however, you can use the SQL Statement Execution API to do it. You can follow this tutorial to ALTER TABLE/COLUMN COMMENT: https://learn.microsoft.com/en-us/azure...
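To illustrate the suggestion above, here is a minimal sketch (not an official client) that builds a call to the SQL Statement Execution API (`POST /api/2.0/sql/statements/`). The host, token, table name, and warehouse ID are placeholders you would substitute with your own values:

```python
import json
import urllib.request

def build_comment_request(host, token, full_table_name, comment, warehouse_id):
    """Build a SQL Statement Execution API request that updates a table comment.

    All argument values in the example call below are placeholders --
    substitute your workspace URL, token, table name, and SQL warehouse ID.
    """
    # COMMENT ON TABLE is standard Databricks SQL for updating a table comment.
    statement = f"COMMENT ON TABLE {full_table_name} IS '{comment}'"
    payload = {
        "warehouse_id": warehouse_id,
        "statement": statement,
        "wait_timeout": "30s",
    }
    req = urllib.request.Request(
        url=f"{host}/api/2.0/sql/statements/",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return req, payload

# Example (not executed here -- sending it requires a live workspace):
req, payload = build_comment_request(
    "https://<workspace-host>", "<token>",
    "main.default.my_table", "Nightly sales snapshot", "<warehouse-id>")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) runs the statement on the given SQL warehouse; the same approach works for `ALTER TABLE ... ALTER COLUMN ... COMMENT`.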

Databricks_Java
by New Contributor
  • 3507 Views
  • 1 reply
  • 0 kudos

Databricks Java - Create Jar in Java 11

I am trying to run a simple Java print program, which is not working; I am getting compilation version issues even though I changed the environment variable to point to Java 11. Can you please help me? Can we create Java code with a Spark session and execute it as a ja...

Get Started Discussions
Databricks
env
jar
java
spark
Latest Reply
arpit
Databricks Employee
  • 0 kudos

@Databricks_Java You can run a command like this: spark-submit --class com.test.Main example.jar. Also make sure the Java version matches the DBR compatibility.

mathijs-fish
by New Contributor III
  • 1948 Views
  • 1 reply
  • 0 kudos

Disable personal compute with the Databricks API or UI

For a production environment, I want to disable the Personal Compute policy, because I do not want all users to be able to create personal compute clusters in production. Unfortunately, I am not able to access the account console, so I want to revoke perm...

Get Started Discussions
compute
permissions
policies
Latest Reply
arpit
Databricks Employee
  • 0 kudos

@mathijs-fish You need to be an admin to disable a policy.
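As a workspace-level sketch of what revoking permissions could look like: the Permissions API for cluster policies (`PUT /api/2.0/permissions/cluster-policies/{policy_id}`) lets a workspace admin replace the policy's ACL so that only a chosen group keeps CAN_USE. The group name below is a placeholder:

```python
def build_policy_acl(group_name, permission_level="CAN_USE"):
    """Build the Permissions API request body that restricts who can use a
    cluster policy. Granting CAN_USE only to a narrow group (instead of the
    default `users` group) effectively hides the policy from everyone else.
    The group name is a placeholder."""
    return {
        "access_control_list": [
            {"group_name": group_name, "permission_level": permission_level}
        ]
    }

# The body would be sent as:
#   PUT {host}/api/2.0/permissions/cluster-policies/{policy_id}
acl = build_policy_acl("platform-admins")
```

Note that fully disabling Personal Compute for all workspaces is an account-level setting, so without account console access the ACL workaround above is the practical option.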

SaiCharan
by New Contributor
  • 4402 Views
  • 1 reply
  • 0 kudos

No space left on device and IllegalStateException: Have already allocated a maximum of 8192 pages

Hello, I'm writing to bring to your attention an issue that we have encountered while working with Databricks, and to seek your assistance in resolving it. Context of the error: when a SQL query (1700 lines) is run, the corresponding Databricks job is faili...

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Are you processing Parquet files, or what is the format of your tables? Also, can you split your SQL query instead of having one huge 1700-line query?

Phani1
by Valued Contributor II
  • 2208 Views
  • 3 replies
  • 0 kudos

Autoloader file latency

Hi Team, I would like to understand if there is a metadata table for Auto Loader in Databricks that captures information about file arrival and processing. The reason we are experiencing data issues is because our table A receives hundreds of files ...

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Check the cloud_files_state() API. You can find examples here: https://docs.databricks.com/en/ingestion/auto-loader/production.html#querying-files-discovered-by-auto-loader
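As a small illustration of the suggestion above, this sketch just assembles the SQL you would run in a Databricks notebook; the checkpoint path is a placeholder and must point at your Auto Loader stream's checkpoint location:

```python
def cloud_files_state_query(checkpoint_path):
    """Build the SQL that queries Auto Loader's file-discovery state.

    cloud_files_state reads the stream's checkpoint, exposing per-file
    metadata such as discovery_time and commit_time, which is useful for
    tracking file arrival latency. The path below is a placeholder.
    """
    return (
        "SELECT path, size, discovery_time, commit_time "
        f"FROM cloud_files_state('{checkpoint_path}') "
        "ORDER BY discovery_time DESC"
    )

query = cloud_files_state_query("/Volumes/main/default/checkpoints/my_stream")
```

Comparing `discovery_time` to `commit_time` per file gives a direct measure of how long each file waited between arrival and processing.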

2 More Replies
VGS777
by New Contributor III
  • 2435 Views
  • 2 replies
  • 2 kudos

Resolved! Regarding cloning my gitrepo under workspace/Users/user_name

Hi all, I recently started using Databricks. I want to clone my Git repo under the workspace/Users/user_name path, but I can't do it. By default, I can only clone under the Repos directory. Can anyone please advise me on this? Thank you

Latest Reply
VGS777
New Contributor III
  • 2 kudos

Thanks for the advice.

1 More Replies
Surajv
by New Contributor III
  • 4004 Views
  • 4 replies
  • 0 kudos

Connect my spark code running in AWS ECS to databricks cluster

Hi team, I wanted to know if there is a way to connect a piece of my PySpark code running in AWS ECS to a Databricks cluster and leverage the Databricks compute using Databricks Connect. I see Databricks Connect is for connecting local IDE code to databrick...

Get Started Discussions
AWS
databricks connect
ecs
pyspark
Latest Reply
Surajv
New Contributor III
  • 0 kudos

Noted @Retired_mod @RonDeFreitas. I am currently using Databricks runtime v12.2 (which is < v13.0). I followed this doc (Databricks Connect for Databricks Runtime 12.2 LTS and below) and connected my local terminal to Databricks cluster and was able ...

3 More Replies
Data_Engineer3
by Contributor III
  • 5207 Views
  • 2 replies
  • 0 kudos

Resolved! spark context in databricks

Hi @all, in Azure Databricks I am using Structured Streaming's foreachBatch functionality. In one of the functions I am creating a temp view from a PySpark DataFrame (*not* a GlobalTempView) and trying to access the same temp view using the spark.sql functiona...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Do you face this issue without Spark streaming as well? Also, could you share a minimal repro, preferably without streaming?

1 More Replies
BabuMahesh
by New Contributor
  • 886 Views
  • 0 replies
  • 0 kudos

Databricks & Bigquery

Databricks packages an old version of the BigQuery jar (Databricks also repackaged it into a fat jar), and our application needs the latest jar. The latest jar depends on the spark-bigquery-connector.properties file for the property scala.binary.vers...

rudyevers
by New Contributor III
  • 2020 Views
  • 1 reply
  • 0 kudos

Unity catalog internal error - quality monitoring

I am trying to get my head around the quality monitoring functionality in Unity Catalog. I configured it for one of the tables in our Unity Catalog. My assumption is that the profile and drift metrics tables are created automatically, but I get an internal e...

Latest Reply
jreddy
New Contributor II
  • 0 kudos

Hi, were you able to resolve this? I am having a similar issue. Thanks

shubhamshah1412
by New Contributor II
  • 2160 Views
  • 1 reply
  • 0 kudos

Generate Excel for a SQL query

Greetings, I am using a Java Spring Boot application that is supposed to respond with an Excel file based on the request. My current approach involves reading data using JDBC drivers, storing it in appropriate data structures, and writing it to an Excel file, which ...

Latest Reply
shubhamshah1412
New Contributor II
  • 0 kudos

Thanks for putting this together @Retired_mod. I see that this approach will help generate an Excel file after receiving the data from Databricks as a ResultSet, which has to be parsed. I believe this approach is the appropriate way to generat...

hayden_blair
by New Contributor III
  • 4886 Views
  • 0 replies
  • 0 kudos

Error authenticating databricks.sdk.WorkspaceClient with external workspace via Azure Native Auth

I am referencing this doc to initialize a databricks.sdk.WorkspaceClient object instance via Azure Native Authentication. I am initializing this WorkspaceClient within a databricks notebook, but I am trying to use the client to access the Jobs api of...

Get Started Discussions
authentication
azure
WorkspaceClient
Phani1
by Valued Contributor II
  • 2077 Views
  • 1 reply
  • 0 kudos

Data masking best practices

Hi Team, could you please suggest any best practices/blogs on implementing data masking, row-level and column-level access control, role-based access control (RBAC), and attribute-based access control (ABAC)? Regards, Phanindra

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Hi, Can you check if this document answers your question: https://www.databricks.com/blog/2020/11/20/enforcing-column-level-encryption-and-avoiding-data-duplication-with-pii.html
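As a toy illustration of the column-masking idea discussed in that blog (pure Python, not a Databricks API; in Unity Catalog the equivalent logic would typically live in a SQL UDF attached to the column as a mask):

```python
def mask_email(value, is_privileged=False):
    """Illustrative masking logic: privileged callers see the raw value,
    everyone else gets a redacted form. This pure-Python version just
    demonstrates the idea; the function name and signature are made up
    for this example."""
    if is_privileged or value is None:
        return value
    # Keep the first character of the local part, redact the rest.
    local, _, domain = value.partition("@")
    return local[:1] + "***@" + domain

print(mask_email("alice@example.com"))        # a***@example.com
print(mask_email("alice@example.com", True))  # alice@example.com
```

The same pattern generalizes: the privileged-caller check becomes a group-membership check (e.g. `is_account_group_member(...)` in Databricks SQL), and the redaction rule varies per column.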

Sujitha
by Databricks Employee
  • 7063 Views
  • 0 replies
  • 3 kudos

Unity Catalog Governance Value Levers

What makes Unity Catalog a game-changer? The blog intricately dissects five main value levers: mitigating data and architectural risks, ensuring compliance, accelerating innovation, reducing platform complexity and costs while improving operational e...

