Data Engineering

Forum Posts

sasidhar
by New Contributor II
  • 2603 Views
  • 4 replies
  • 8 kudos

custom python module not found while using dbx on pycharm

I am new to Databricks and PySpark. I am building a PySpark application using the PyCharm IDE. I have tested the code locally and want to run it on a Databricks cluster from the IDE itself. Following the dbx documentation, I was able to run a single Python file succes...

Latest Reply
Meghala
Valued Contributor II
  • 8 kudos

I got this error as well.

3 More Replies
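For the dbx thread above: a common cause of "module not found" is that the custom module is not part of an installable package, so dbx only ships the single entry-point file to the cluster. A minimal, hedged sketch of a setup.py (hypothetical package name my_project) that lets dbx build the local modules into a wheel alongside the job file:

    # setup.py -- hypothetical project layout:
    #   my_project/__init__.py
    #   my_project/utils.py        <- the "custom python module"
    #   jobs/entrypoint.py         <- imports my_project.utils
    from setuptools import setup, find_packages

    setup(
        name="my_project",          # hypothetical name
        version="0.1.0",
        packages=find_packages(exclude=["tests", "tests.*"]),
    )

With a layout like this, the entry point can simply do from my_project import utils; without the package metadata only the single file is uploaded, which matches the symptom described above.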
najmead
by Contributor
  • 1261 Views
  • 2 replies
  • 0 kudos

Error Creating Primary Key Constraint

I am trying to add a primary key constraint to an existing table, and I get the following error: Cannot create or update table because the child column(s) `my_primary_key` of primary key `pk` cannot be set to nullable. Either drop the constraint, or c...

Latest Reply
Debayan
Esteemed Contributor III
  • 0 kudos

Hi, could you please confirm whether you are using the latest databricks-sql-connector? (https://pypi.org/project/databricks-sql-connector/)

1 More Replies
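For the primary key thread above: the error says the key column must be non-nullable before the constraint can be created. A hedged sketch of the two steps in a Databricks notebook (column and constraint names taken from the error message; catalog, schema, and table names are hypothetical):

    # Make the key column NOT NULL first, then add the primary key constraint.
    spark.sql("""
        ALTER TABLE demo_catalog.demo_schema.my_table
        ALTER COLUMN my_primary_key SET NOT NULL
    """)
    spark.sql("""
        ALTER TABLE demo_catalog.demo_schema.my_table
        ADD CONSTRAINT pk PRIMARY KEY (my_primary_key)
    """)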
Bhanu1
by New Contributor III
  • 745 Views
  • 2 replies
  • 0 kudos

The new horizontal view of tasks *****. Can we please have the option for vertical view of a workflow?

The new horizontal view of tasks *****. Can we please have the option for vertical view of a workflow?

Latest Reply
Bhanu1
New Contributor III
  • 0 kudos

Hi Debayan, this was how workflows used to look before. These are now shown from left to right instead of from top to bottom. It is a pain to scroll through a long workflow now, as mice don't have the ability to scroll left and right.

1 More Replies
data_explorer
by New Contributor II
  • 543 Views
  • 1 reply
  • 0 kudos

Is there any way to execute grant and revoke statements to a user for an object based on a condition?

SELECT if((select count(*) from information_schema.table_privileges where grantee = 'samo@test.com' and table_schema='demo_schema' and table_catalog='demo_catalog')==1, (select count(*) from demo_catalog.demo_schema.demo_table), (select count(*) from...

Latest Reply
Debayan
Esteemed Contributor III
  • 0 kudos

Hi, GRANT and REVOKE apply privileges on a securable object to a principal. A principal is a user, service principal, or group known to the metastore. Principals can be granted privileges and may own securable objects. Also, you can use REVOKE ON S...

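For the conditional GRANT/REVOKE thread above: since the statements cannot be nested inside an IF, one hedged approach is to evaluate the condition against information_schema first and then issue the statement from Python in a notebook (grantee, catalog, schema, and table names taken from the question; the "grant when no privilege exists yet" condition is illustrative):

    # Count existing privileges for the grantee on the schema.
    n = spark.sql("""
        SELECT count(*) AS n
        FROM demo_catalog.information_schema.table_privileges
        WHERE grantee = 'samo@test.com'
          AND table_schema = 'demo_schema'
          AND table_catalog = 'demo_catalog'
    """).collect()[0]["n"]

    # Grant or revoke depending on the condition.
    if n == 0:
        spark.sql("GRANT SELECT ON TABLE demo_catalog.demo_schema.demo_table TO `samo@test.com`")
    else:
        spark.sql("REVOKE SELECT ON TABLE demo_catalog.demo_schema.demo_table FROM `samo@test.com`")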
SaravananPalani
by New Contributor II
  • 18248 Views
  • 8 replies
  • 9 kudos

Is there any way to monitor the CPU, disk and memory usage of a cluster while a job is running?

I am looking for something preferably similar to Windows task manager which we can use for monitoring the CPU, memory and disk usage for local desktop.

Latest Reply
hitech88
New Contributor II
  • 9 kudos

Some important info to look at in the Ganglia UI CPU, memory, and server load charts to spot the problem: CPU chart: User %, Idle %. A high user % indicates heavy CPU usage in the cluster. Memory chart: Use %, Free %, Swap %. If you see a purple line ove...

7 More Replies
najmead
by Contributor
  • 9279 Views
  • 6 replies
  • 13 kudos

How to convert string to datetime with correct timezone?

I have a field stored as a string in the format "12/30/2022 10:30:00 AM". If I use the function TO_DATE, I only get the date part... I want the full date and time. If I use the function TO_TIMESTAMP, I get the date and time, but it's assumed to be UTC, ...

Latest Reply
Rajeev_Basu
Contributor III
  • 13 kudos

Use from_utc_timestamp(to_timestamp("<string>", <format>), <timezone>).

5 More Replies
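A hedged PySpark sketch of the reply above, using the sample value from the question (the column name ts_string and the Australia/Sydney zone are hypothetical; assumes a Databricks notebook where spark is predefined):

    from pyspark.sql import functions as F

    df = spark.createDataFrame([("12/30/2022 10:30:00 AM",)], ["ts_string"])

    # Parse the 12-hour-clock string into a timestamp.
    parsed = F.to_timestamp("ts_string", "M/d/yyyy h:mm:ss a")

    df = df.withColumn(
        "ts_local",
        # Treat the parsed value as UTC and render it in the target zone, as in
        # the reply; use to_utc_timestamp instead if the string is local time
        # and you want the UTC instant.
        F.from_utc_timestamp(parsed, "Australia/Sydney"),
    )
    df.show(truncate=False)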
ironising84
by New Contributor II
  • 2989 Views
  • 4 replies
  • 6 kudos

Question on Databricks Spark online proctored exam

Some silly questions folks. I took the online proctored Databricks Spark certification a couple of days back and my unofficial result was a pass. I received a mail that it might take up to one week to receive the certification, if awar...

Latest Reply
Rajeev_Basu
Contributor III
  • 6 kudos

It would have been better to ask for permission before drinking. I can share my experience: my mobile alarm started buzzing during the exam, I requested the moderator, he then paused the exam and asked me to take my laptop to the mobile and then to switch it off,...

3 More Replies
elgeo
by Valued Contributor II
  • 14000 Views
  • 5 replies
  • 1 kudos

Resolved! SQL Stored Procedure in Databricks

Hello. Is there an equivalent of a SQL stored procedure in Databricks? Please note that I need a procedure that allows DML statements and not only the SELECT statements that a function provides. Thank you in advance.

Latest Reply
Meghala
Valued Contributor II
  • 1 kudos

Thanks, it's also helpful to me.

4 More Replies
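For the stored procedure thread above: a common substitute is a parameterised notebook or a Python function that issues the DML through spark.sql. A hedged sketch (table and column names are hypothetical; in real use the inputs should be validated rather than interpolated directly):

    def upsert_status(order_id: int, status: str) -> None:
        """Stored-procedure-style helper: runs DML from Python."""
        spark.sql(f"""
            MERGE INTO demo.orders AS t
            USING (SELECT {order_id} AS order_id, '{status}' AS status) AS s
            ON t.order_id = s.order_id
            WHEN MATCHED THEN UPDATE SET t.status = s.status
            WHEN NOT MATCHED THEN INSERT (order_id, status) VALUES (s.order_id, s.status)
        """)

    upsert_status(42, "shipped")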
lambarc
by New Contributor II
  • 9290 Views
  • 7 replies
  • 13 kudos

How to read file in pyspark with “]|[” delimiter

The data looks like this: pageId]|[page]|[Position]|[sysId]|[carId 0005]|[bmw]|[south]|[AD6]|[OP4 There are at least 50 columns and millions of rows. I did try to use the code below to read it: dff = sqlContext.read.format("com.databricks.spark.csv").option...

Latest Reply
rohit199912
New Contributor II
  • 13 kudos

You might also try the below options. 1) Use a different file format: you can try a format that supports multi-character delimiters, such as text or JSON. 2) Use a custom Row class: you can write a custom Row class to parse the multi-...

6 More Replies
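Following the "read as text" suggestion above, a hedged PySpark sketch that splits each line on the literal "]|[" delimiter (the path is hypothetical; the column names come from the sample data in the question):

    from pyspark.sql import functions as F

    # Read the raw lines; adjust the path to your file location.
    raw = spark.read.text("dbfs:/tmp/cars.txt")

    # Split on the literal "]|[" delimiter (escaped for the regex engine).
    parts = F.split(F.col("value"), r"\]\|\[")

    df = raw.select(
        parts.getItem(0).alias("pageId"),
        parts.getItem(1).alias("page"),
        parts.getItem(2).alias("Position"),
        parts.getItem(3).alias("sysId"),
        parts.getItem(4).alias("carId"),
    )

    # Drop the header row if the file contains one.
    df = df.filter(F.col("pageId") != "pageId")
    df.show(5, truncate=False)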
Marcel
by New Contributor III
  • 20402 Views
  • 4 replies
  • 2 kudos

Resolved! Set environment variables in global init scripts

Hi Databricks Community, I want to set environment variables for all clusters in my workspace. The goal is to have environment-specific (dev, prod) environment variable values. Instead of setting the environment variables for each cluster, a global script ...

Latest Reply
brickster
New Contributor II
  • 2 kudos

We have set the env variable in the global init script as below: sudo echo DATAENV=DEV >> /etc/environment, and we try to access the variable in a notebook that runs in "Shared" cluster mode: import os; print(os.getenv("DATAENV")). But the env variable is not a...

3 More Replies
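For the reply above, a hedged workaround sketch: when an init script writes the value to /etc/environment but the notebook's Python process does not inherit it, the notebook can fall back to reading that file directly (plain Python, no Databricks-specific APIs assumed):

    import os

    def get_env(name, default=None):
        # Prefer the real process environment.
        if name in os.environ:
            return os.environ[name]
        # Fall back to /etc/environment, where the init script wrote the value.
        try:
            with open("/etc/environment") as fh:
                for line in fh:
                    line = line.strip()
                    if line and not line.startswith("#"):
                        key, _, value = line.partition("=")
                        if key.strip() == name:
                            return value.strip().strip('"')
        except OSError:
            pass
        return default

    print(get_env("DATAENV", "not set"))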
tecku71
by New Contributor III
  • 963 Views
  • 3 replies
  • 3 kudos

How to publish a Notebook Dashboard without the possibility to "exit" full screen?

Is there a way to remove the "exit" button from the full-screen view of the Spark notebook dashboard?

Latest Reply
Prabakar
Esteemed Contributor III
  • 3 kudos

Could you please share a screenshot of what you see? I don't see any exit button, or I might be looking in the wrong place.

2 More Replies
519776
by New Contributor III
  • 5103 Views
  • 15 replies
  • 1 kudos

Resolved! How to create connection between Databricks & BigQuery

Hi, I would like to connect our BigQuery env to Databricks, so I created a service account, but where should I configure the service account in Databricks? I read the Databricks documentation and it's not clear at all. Thanks for your help.

Latest Reply
karthik_p
Esteemed Contributor
  • 1 kudos

@kfiry adding to @Werner Stinckens: did you add the projectId in the spark read query? The projectId should be the one where the BigQuery instance is running. Also, please follow best practices in terms of egress data cost. spark.read.format("bigquery") \ .option("tabl...

14 More Replies
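A hedged sketch of the BigQuery read the reply refers to, assuming the service-account credentials have already been configured on the cluster as described in the Databricks/BigQuery documentation (project, dataset, and table names are hypothetical):

    df = (
        spark.read.format("bigquery")
        # Project billed for the read (typically the one owning the service account).
        .option("parentProject", "my-billing-project")
        # Fully qualified table: <project>.<dataset>.<table>
        .option("table", "my-project.my_dataset.my_table")
        .load()
    )
    df.show(5)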
yousry
by New Contributor II
  • 1407 Views
  • 2 replies
  • 2 kudos

Resolved! What is the best way to find deltalake version on OSS and Databricks at runtime?

To identify certain Delta Lake features available on a certain installation, it is important to have a robust way to identify the Delta Lake version. For OSS, I found that the Scala snippet below will do the job: import io.delta; println(io.delta.VERSION). Not...

Latest Reply
shan_chandra
Honored Contributor III
  • 2 kudos

@Yousry Mohamed - could you please check the DBR runtime release notes for the Delta Lake API compatibility matrix section (DBR version vs. compatible Delta Lake version) for the mapping. Reference: https://docs.databricks.com/release-notes/runtime/r...

1 More Replies
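A hedged Python counterpart of the Scala snippet in the question: on OSS the pip package metadata carries the version, while on Databricks (where Delta is bundled with the runtime) the DBR-to-Delta mapping in the release notes is the reliable source, as the reply says. The Spark conf key below is an assumption and may differ across runtimes.

    from importlib import metadata

    try:
        # Works for OSS installs of the delta-spark pip package.
        print("Delta Lake:", metadata.version("delta-spark"))
    except metadata.PackageNotFoundError:
        # On Databricks, look up the runtime version and map it to a Delta
        # version via the release notes compatibility matrix.
        print("DBR:", spark.conf.get("spark.databricks.clusterUsageTags.sparkVersion"))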
Zachary_Higgins
by Contributor
  • 5248 Views
  • 7 replies
  • 12 kudos

ignoreDeletes' option with Delta Live Table streaming source

We have a Delta streaming source in our Delta Live Table pipelines that may have data deleted from time to time. The error message is pretty self-explanatory: ...from streaming source at version 191. This is currently not supported. If you'd like to i...

Latest Reply
Michael42
New Contributor III
  • 12 kudos

I am looking at this as well and would like to understand my options here.

6 More Replies
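For the ignoreDeletes thread above, a hedged sketch of opting in to the option on a Delta streaming source, which is what the error message asks for (the path is hypothetical; in a Delta Live Tables pipeline the same option can be set on the readStream inside the table definition):

    stream_df = (
        spark.readStream.format("delta")
        # Skip commits that only delete data instead of failing the stream.
        .option("ignoreDeletes", "true")
        .load("dbfs:/delta/source_table")
    )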
User16826994223
by Honored Contributor III
  • 1505 Views
  • 3 replies
  • 2 kudos

Resolved! Limitation as of now in delta live table

I am thinking of using Delta Live Tables; before that, I want to be aware of the limitations it has as of now, when it was announced at the Data + AI Summit 2021.

Latest Reply
Zachary_Higgins
Contributor
  • 2 kudos

There doesn't appear to be a way to enforce a retention policy on source tables when defining a structured stream. Setting the options for "ignoreChanges" and "ignoreDeletes" doesn't seem to have any effect at all. CDC does not fill this role either,...

2 More Replies