Data Engineering

Forum Posts

jt
by New Contributor III
  • 1612 Views
  • 3 replies
  • 2 kudos

SQL table alias autocomplete

I have a table with 600 columns and the table name is long. I want to use a table alias with autocomplete, but it's not working. Any ideas how I can get this to work? This works:

  %sql
  --autocomplete works
  SELECT verylongtablename.column200 verylongtabl...

Latest Reply
jt
New Contributor III
  • 2 kudos

My cluster is running fine. Does autocomplete work for you with a table alias?

2 More Replies
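For reference, a minimal sketch of the alias syntax under discussion, using the poster's hypothetical table and column names; whether autocomplete resolves the alias depends on the notebook editor, not on the SQL itself:

  # the poster's hypothetical names; aliasing itself is standard Spark SQL
  spark.sql("""
      SELECT t.column200
      FROM verylongtablename AS t
  """).show()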
avidex180899
by New Contributor II
  • 4670 Views
  • 3 replies
  • 3 kudos

Resolved! UUID/GUID Datatype in Databricks SQL

Hi all, I am trying to create a table with a GUID column. I have tried using GUID and UUID, but neither of them works. Can someone help me with the syntax for adding a GUID column? Thanks!

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 3 kudos

Hey @Avinash Narasimhan, what is the exact problem you are getting? Can you please share it? It is working fine for me. Thanks, Aviral Bhardwaj

2 More Replies
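Since Databricks SQL has no dedicated GUID/UUID column type, the usual workaround is a STRING column populated by the built-in uuid() function; a minimal sketch with hypothetical table and column names:

  # hypothetical names; uuid() is a built-in Spark SQL function
  spark.sql("""
      CREATE TABLE events (
          guid STRING,      -- no native GUID type, so STRING holds the value
          payload STRING
      )
  """)
  spark.sql("INSERT INTO events SELECT uuid(), 'example payload'")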
Jyo777
by Contributor
  • 847 Views
  • 4 replies
  • 0 kudos

Hi, has anyone cleared professional DE? Please advise on the Professional Data Engineer exam. Will the advanced DE learning path be sufficient, or do I need to fol...

Hi, has anyone cleared professional DE? Please advise on the Professional Data Engineer exam. Will the advanced DE learning path be sufficient, or do I need to follow some other resources as well?

Latest Reply
youssefmrini
Honored Contributor III
  • 0 kudos

Hello, have a look at this link: http://msdatalab.net/how-to-pass-the-professional-databricks-data-engineering/

3 More Replies
architect
by New Contributor
  • 925 Views
  • 1 reply
  • 0 kudos

Does Databricks provide a mechanism to have rate limiting for receivers?

  from pyspark.sql import SparkSession

  scala_version = '2.12'
  spark_version = '3.3.0'

  packages = [
      f'org.apache.spark:spark-sql-kafka-0-10_{scala_version}:{spark_version}',
      'org.apache.kafka:kafka-clients:3.2.1'
  ]

  spark = SparkSession.bui...

Latest Reply
Rajani
Contributor
  • 0 kudos

Hi @Software Architect, I don't think so.

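For what it's worth, the Kafka source in Structured Streaming does expose a per-micro-batch rate limit through maxOffsetsPerTrigger; a minimal sketch, with hypothetical broker and topic names:

  # hypothetical broker/topic; maxOffsetsPerTrigger caps records read per micro-batch
  df = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .option("maxOffsetsPerTrigger", 10000)
        .load())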
Pranjan
by New Contributor II
  • 1670 Views
  • 7 replies
  • 1 kudos

Resolved! Badge Not Received for Databricks Lakehouse Fundamentals Accreditation (V2)

Hi Team, I passed the Databricks Lakehouse Fundamentals Accreditation (V2) on Dec 8th. I still have not received the badge in credentials or any email of that kind. Please have a look. @Kaniz Fatma

Latest Reply
Tromen026
New Contributor II
  • 1 kudos

I wonder how much effort you put in to create this type of excellent, informative web site. marco's pizza starr ave | domino's pizza price

6 More Replies
Smitha1
by Valued Contributor II
  • 948 Views
  • 3 replies
  • 2 kudos

December exam free voucher for Databricks Certified Associate Developer for Apache Spark 3.0 exam.

Dear @Vidula Khanna, hope you're having a great day. This is of HIGH priority for me; I have to schedule the exam in December before slots are full. I took the Databricks Certified Associate Developer for Apache Spark 3.0 exam on 30th Nov but missed by one perc...

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 2 kudos

Hey @Smitha Nelapati, you can attend the below webinars and get 75% off in Jan.

2 More Replies
KasimData
by New Contributor III
  • 1501 Views
  • 3 replies
  • 6 kudos

Unable to sign up for a Databricks Community Edition account

As you can see, I get the error underneath the big orange button. This is after I click the link at the bottom to try the community edition. I have tried a couple of locations since I am currently based in South Korea but I am actually from the UK. T...

image.png
Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Muhammad Ali, just a friendly follow-up: are you able to log in to your Community Edition account? If yes, please mark the answer as best; if you need further assistance, kindly let me know. Thanks and regards

2 More Replies
sudhanshu1
by New Contributor III
  • 1688 Views
  • 1 reply
  • 0 kudos

Write streaming output to DynamoDB

Hi all, I am trying to write a streaming DF into DynamoDB with the code below.

  tumbling_df.writeStream \
    .format("org.apache.spark.sql.execution.streaming.sinks.DynamoDBSinkProvider") \
    .option("region", "eu-west-2") \
    .option("tableName", "PythonForeac...

Latest Reply
LandanG
Honored Contributor
  • 0 kudos

Hi @SUDHANSHU RAJ, I can't seem to find much on the "DynamoDBSinkProvider" source. Have you checked out the link for the streaming-to-DynamoDB documentation?

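In the absence of a built-in DynamoDB sink, a common pattern is foreachBatch with boto3; a minimal sketch, assuming boto3 is available on the cluster, with a hypothetical table name (the region comes from the original snippet):

  import boto3

  def write_to_dynamodb(batch_df, batch_id):
      # Called once per micro-batch on the driver.
      table = boto3.resource("dynamodb", region_name="eu-west-2").Table("my_table")  # hypothetical table name
      with table.batch_writer() as writer:
          for row in batch_df.collect():  # fine for small batches; avoid collect() at scale
              # Stringify values to sidestep DynamoDB's Decimal requirement for numbers.
              writer.put_item(Item={k: str(v) for k, v in row.asDict().items()})

  tumbling_df.writeStream.foreachBatch(write_to_dynamodb).start()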
Chris_Shehu
by Valued Contributor III
  • 1553 Views
  • 3 replies
  • 3 kudos

Resolved! Is there a way to specify a header, set the delimiter, etc...in DLT?

I was looking forward to using the Data Quality features that are provided with DLT, but as far as I can tell, the ingestion process is more restrictive than other methods. It doesn't seem like you can do much as far as setting delimiter type, headers, or an...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

DLT uses Autoloader to ingest data. With autoloader, you can provide read options for the table. https://docs.databricks.com/ingestion/auto-loader/options.html#csv-options has the docs on CSV. I attached a picture of an example.

2 More Replies
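A minimal sketch of passing those CSV options through a DLT table definition (the landing path is hypothetical; header and delimiter are standard Auto Loader CSV options):

  import dlt

  @dlt.table
  def raw_csv():
      return (
          spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "csv")
          .option("header", "true")     # first row is a header
          .option("delimiter", "|")     # custom delimiter
          .load("/path/to/landing/")    # hypothetical path
      )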
pkgltn
by New Contributor III
  • 1366 Views
  • 2 replies
  • 2 kudos

Resolved! Load an Excel File (located in Databricks Repo connected to Azure DevOps) into a dataframe

Hi, how can I load an Excel file (located in a Databricks Repo connected to Azure DevOps) into a dataframe? When I pass the full path into the load method, it displays an error: java.io.FileNotFoundException. Has someone done it previously?

Latest Reply
pkgltn
New Contributor III
  • 2 kudos

Hi, just managed to do it. I upgraded the cluster to the latest version, because Files in Repos only works in the most recent cluster versions. When loading the dataframe, specify the path as follows: file:/Workspace/Repos/user@email.com/filepath/filena...

1 More Replies
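A minimal sketch of that fix, assuming the spark-excel library (com.crealytics:spark-excel) is installed on the cluster; the repo path is hypothetical, and the file:/ prefix is what makes Spark read the workspace-local copy:

  # hypothetical repo path; note the file:/ prefix for Files in Repos
  df = (spark.read.format("com.crealytics.spark.excel")
        .option("header", "true")
        .load("file:/Workspace/Repos/user@email.com/my-repo/data.xlsx"))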
hf_santos
by New Contributor III
  • 4675 Views
  • 4 replies
  • 2 kudos

Resolved! Error when importing PyDeequ package

Hi everyone, I want to do some tests regarding data quality, and for that I intend to use PyDeequ in a Databricks notebook. Keep in mind that I'm very new to Databricks and Spark. First I created a cluster with the Runtime version "10.4 LTS (includes A...

Latest Reply
hf_santos
New Contributor III
  • 2 kudos

I assumed I wouldn't need to add the Deequ library. Apparently, all I had to do was add it via Maven coordinates and it solved the problem.

3 More Replies
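Once the Deequ JAR is attached via its Maven coordinates, a minimal PyDeequ check looks roughly like this (df and its "id" column are hypothetical):

  from pydeequ.checks import Check, CheckLevel
  from pydeequ.verification import VerificationSuite

  check = Check(spark, CheckLevel.Error, "basic quality check")
  result = (VerificationSuite(spark)
            .onData(df)   # hypothetical DataFrame
            .addCheck(check.isComplete("id").isUnique("id"))
            .run())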
db-avengers2rul
by Contributor II
  • 895 Views
  • 1 reply
  • 0 kudos

Importing Jupyter notebooks into Databricks notebooks

Dear Team, is it possible to import Jupyter notebooks in Databricks Community Edition? If yes, will there be any formatting issues?

Latest Reply
db-avengers2rul
Contributor II
  • 0 kudos

If yes, is there any limit? And what is the difference or advantage of using Jupyter notebooks over Databricks notebooks?

db-avengers2rul
by Contributor II
  • 616 Views
  • 2 replies
  • 0 kudos

What is the underlying database used in Databricks Community Edition for SQL?

Dear DB experts, I am reaching out to check whether I can still use PostgreSQL in notebooks with the notebook language set to SQL. As far as I know from my reading, the back-end DB is MySQL; please correct my understanding if I'm wrong.

Latest Reply
db-avengers2rul
Contributor II
  • 0 kudos

Are there any settings I have to update?

1 More Replies
J15S
by New Contributor III
  • 205 Views
  • 0 replies
  • 0 kudos

Why does the Spark user differ between sparklyr and Spark SQL?

My team uses a shared cluster. We've been having issues with spark_connect failing to work at times (can't easily reproduce). One thing I've recently noticed is that the Spark user through sparklyr seems to be set to the first person who connects to ...

Abbe
by New Contributor II
  • 1053 Views
  • 1 reply
  • 0 kudos

Update the data type of a column within a table that has a GENERATED ALWAYS AS IDENTITY column

I want to cast the data type of a column "X" in a table "A" where column "ID" is defined as GENERATED ALWAYS AS IDENTITY. Databricks refers to overwrite to achieve this: https://docs.databricks.com/delta/update-schema.html. The following operation: (spar...

Latest Reply
Abbe
New Contributor II
  • 0 kudos

Looks like it works when using GENERATED BY DEFAULT AS IDENTITY instead. There's no way of updating the schema from GENERATED ALWAYS AS IDENTITY to GENERATED BY DEFAULT AS IDENTITY, right? I have to create a new table (and then insert it with data fr...

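A minimal sketch of the recreate-and-reload approach described above, with hypothetical table and column names; GENERATED BY DEFAULT AS IDENTITY allows explicit inserts into the identity column, which GENERATED ALWAYS does not:

  # hypothetical names; recreate the table, then reload it with the cast applied
  spark.sql("""
      CREATE TABLE a_new (
          id BIGINT GENERATED BY DEFAULT AS IDENTITY,
          x DOUBLE  -- the column with the new data type
      )
  """)
  spark.sql("INSERT INTO a_new (id, x) SELECT id, CAST(x AS DOUBLE) FROM a")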