Data Engineering

Forum Posts

KrishnaWahi
by New Contributor II
  • 1597 Views
  • 4 replies
  • 2 kudos

Resolved! Query Databricks SQL Endpoint Database from NodeJs

I have my Databricks SQL Endpoint running, and I have created many tables there using AWS S3 Delta. Now I want to query the Databricks SQL Endpoint from Node.js. Is it possible? I tried to find and researched a lot but didn't get any useful tutorial fo...

Latest Reply
Kaniz
Community Manager
  • 2 kudos

Hi @Krishna Wahi, just a friendly follow-up. Do you still need help, or did @Bilal Aslam's response help you find the solution? Please let us know.

3 More Replies
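For readers skimming this thread: a minimal sketch of querying a SQL endpoint programmatically, shown here with the Python databricks-sql-connector (the Node.js connector follows the same hostname / HTTP path / token pattern). The hostname, HTTP path, token, and table name below are placeholders, not values from the thread.

    from databricks import sql  # pip install databricks-sql-connector

    # Placeholder connection details, copied from the SQL endpoint's
    # "Connection details" tab in the Databricks UI.
    connection = sql.connect(
        server_hostname="dbc-xxxxxxxx-xxxx.cloud.databricks.com",
        http_path="/sql/1.0/endpoints/xxxxxxxxxxxxxxxx",
        access_token="dapiXXXXXXXXXXXXXXXX",
    )
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM my_delta_table LIMIT 10")  # hypothetical table
    for row in cursor.fetchall():
        print(row)
    cursor.close()
    connection.close()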
findinpath
by Contributor
  • 2058 Views
  • 9 replies
  • 4 kudos

Resolved! Please share Databricks JDBC Driver on Maven Central

Can you please share the Databricks JDBC Driver on Maven Central? I see it available at https://databricks.com/spark/jdbc-drivers-download. However, I can't find it on Maven Central to make use of it in automated tests connecting to Databricks infr...

Latest Reply
findinpath
Contributor
  • 4 kudos

Thank you for the assistance and for releasing the JDBC driver to Maven Central. I consider the issue closed.

8 More Replies
Hila_DG
by New Contributor II
  • 1622 Views
  • 5 replies
  • 4 kudos

Resolved! How to proactively monitor the use of the cache on the driver node?

The problem: we have a dataframe based on the query SELECT * FROM Very_Big_Table. This table returns over 4 GB of data, and when we try to push the data to Power BI we get the error message: ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hey @Hila Galapo, hope everything is going well. Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you. Thanks!

4 More Replies
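As general context for this thread (a mitigation sketch, not the accepted answer): trim the result set on the cluster before the BI tool pulls it over ODBC, so far less than 4 GB ever crosses the link. Table, column, and view names below are hypothetical.

    # Project and filter on the cluster side instead of SELECT * FROM Very_Big_Table.
    slim = (
        spark.table("Very_Big_Table")
        .select("customer_id", "order_date", "order_total")   # hypothetical columns
        .where("order_date >= date_sub(current_date(), 90)")  # keep only recent rows
    )

    # Expose the reduced set for Power BI to query instead of the raw table.
    slim.write.format("delta").mode("overwrite").saveAsTable("pbi.orders_recent")  # hypothetical name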
herry
by New Contributor III
  • 1033 Views
  • 4 replies
  • 1 kudos

Resolved! Using AWS glue schema registry in Databricks Autoloader

Hi all, I plan to store the schema of my table in the AWS Glue schema registry. Is there any simple way to use it in Databricks Autoloader? My goal is to build a data pipeline with Autoloader for schema validation.

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hey there @Herry Ramli, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

3 More Replies
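One possible wiring for the question above, sketched under the assumption that the registry entry holds a schema storable as Spark's StructType JSON (an Avro definition would need a separate Avro-to-Spark conversion step). Registry, schema, region, and bucket names are placeholders.

    import json

    import boto3
    from pyspark.sql.types import StructType

    # Fetch the latest schema definition from the AWS Glue schema registry.
    glue = boto3.client("glue", region_name="us-east-1")
    resp = glue.get_schema_version(
        SchemaId={"RegistryName": "my_registry", "SchemaName": "my_schema"},  # placeholders
        SchemaVersionNumber={"LatestVersion": True},
    )

    # Assumes the definition was stored in Spark's StructType JSON format.
    schema = StructType.fromJson(json.loads(resp["SchemaDefinition"]))

    # Hand the schema to Autoloader so incoming files are read against it.
    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .schema(schema)
        .load("s3://my-bucket/landing/")  # placeholder path
    )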
Jessy
by New Contributor
  • 7534 Views
  • 6 replies
  • 3 kudos
Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Pierre MASSE, thank you so much for getting back to us. It's really great of you to send in the solution and mark the answer as best. We really appreciate your time. Wish you a great Databricks journey ahead!

5 More Replies
Hubert-Dudek
by Esteemed Contributor III
  • 517 Views
  • 1 reply
  • 20 kudos

From Databricks runtime 10.5 you can get metadata using the hidden _metadata column. Currently, the column contains input files information (file_path...

From Databricks Runtime 10.5 you can get metadata using the hidden _metadata column. Currently, the column contains input file information (file_path, file_name, file_size, and file_modification_time).

Latest Reply
Kaniz
Community Manager
  • 20 kudos

Amazing post, @Hubert Dudek!

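A quick illustration of the feature described in the post above; the path is a placeholder. The _metadata column is hidden, so it only appears when selected explicitly.

    df = spark.read.format("json").load("s3://my-bucket/input/")  # placeholder path

    # Select the whole metadata struct...
    df.select("*", "_metadata").show(truncate=False)

    # ...or individual fields.
    df.select(
        "_metadata.file_path",
        "_metadata.file_name",
        "_metadata.file_size",
        "_metadata.file_modification_time",
    ).show()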
karolinalbinsso
by New Contributor II
  • 1743 Views
  • 2 replies
  • 3 kudos

Resolved! How to access the job-Scheduling Date from within the notebook?

I have created a job that contains a notebook that reads a file from Azure Storage. The file name contains the date when the file was transferred to the storage. A new file arrives every Monday, and the read job is scheduled to run every Monday. I...

Latest Reply
Kaniz
Community Manager
  • 3 kudos

Hi @Karolin Albinsson, just a friendly follow-up. Do you still need help, or did @Hubert Dudek's response help you find the solution? Please let us know.

1 More Replies
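One common pattern for the question above (a sketch, not necessarily the thread's accepted answer): pass the run date into the notebook as a job parameter and build the file name from it. The widget name, date format, and storage path are hypothetical.

    # The job definition sets a "run_date" parameter for each scheduled run.
    dbutils.widgets.text("run_date", "")        # hypothetical parameter name
    run_date = dbutils.widgets.get("run_date")  # e.g. "2022-05-09"

    # Build the Monday file's path from the date embedded in its name.
    path = f"abfss://container@account.dfs.core.windows.net/export_{run_date}.csv"  # hypothetical pattern
    df = spark.read.csv(path, header=True)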
Kiedi7
by New Contributor
  • 2519 Views
  • 2 replies
  • 3 kudos

Resolved! Databricks Community Edition - Enable Databricks SQL

Hi, I am trying to enable the Databricks SQL environment from the Community Edition workspace (using the left menu pane). However, the only options I see in the dropdown menu are the Data Science & Eng and Machine Learning workspaces/environments. Is Databricks...

Latest Reply
BilalAslamDbrx
Honored Contributor II
  • 3 kudos

+1 on what Andrew said. We are not currently planning on enabling Databricks SQL in Community. Please use a trial account on one of the supported clouds.

1 More Replies
SimhadriRaju
by New Contributor
  • 2553 Views
  • 2 replies
  • 1 kudos

Rename a mount point folder

I am reading the data from a folder /mnt/lake/customer, where mnt/lake is the mount path referring to ADLS Gen 2. Now I would like to rename the folder from /mnt/lake/customer to /mnt/lake/customeraddress without copying the data from one folder to ano...

Latest Reply
Atanu
Esteemed Contributor
  • 1 kudos

This might help, @Simhadri Raju: https://docs.databricks.com/data/databricks-file-system.html#local-file-api-limitations

1 More Replies
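For context, the closest thing to a rename through DBFS is a move, as sketched below. Note that on object stores such as ADLS Gen2 this is a copy-then-delete of every file rather than a metadata-only rename, which is the limitation the linked doc describes.

    # Moves everything under customer/ to customeraddress/, then removes the source.
    dbutils.fs.mv("dbfs:/mnt/lake/customer", "dbfs:/mnt/lake/customeraddress", recurse=True)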
Vik1
by New Contributor II
  • 3268 Views
  • 4 replies
  • 2 kudos

Resolved! Data persistence, Dataframe, and Delta

I am new to the Databricks platform. What is the best way to keep data persistent so that once I restart the cluster I don't need to run all the code again, and can simply continue developing my notebook with the cached data? I have created many dat...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hey there @Vivek Ranjan, hope you are doing great! Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you. Thanks!

3 More Replies
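A sketch of the usual answer to the thread above: cached dataframes die with the cluster, but a Delta table survives restarts. The table name is hypothetical.

    # Before stopping work: persist the intermediate result as a Delta table.
    df.write.format("delta").mode("overwrite").saveAsTable("dev.intermediate_result")  # hypothetical name

    # After the cluster restarts: pick up exactly where you left off.
    df = spark.table("dev.intermediate_result")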
Orianh
by Valued Contributor II
  • 1409 Views
  • 0 replies
  • 0 kudos

Retrieve a row from an indexed Spark dataframe.

Hello guys, I'm having an issue when trying to get row values from a Spark dataframe. I have a DF with an index column, and I need to be able to return a row based on the index as fast as possible. I tried to partitionBy the index column, optimize with zor...

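For readers landing here, a sketch of the pattern the post describes: Z-order the Delta table on the index column so a point lookup can skip most files, then filter on that column. Table and column names are hypothetical.

    from pyspark.sql.functions import col

    # One-off maintenance: cluster the table's files by the lookup key.
    spark.sql("OPTIMIZE indexed_table ZORDER BY (index)")  # hypothetical table/column

    # Point lookup: data skipping prunes files whose index range excludes the key.
    row = spark.table("indexed_table").where(col("index") == 12345).first()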
CrisBerg_65149
by New Contributor III
  • 1674 Views
  • 6 replies
  • 6 kudos

Resolved! SELECT * FROM delta doesn't work on Spark 3.2

Using DBR 10 or later, I'm getting an error when running the following query: SELECT * FROM delta.`s3://some_path`. I get org.apache.spark.SparkException: Unable to fetch tables of db delta. For 3.2.0+ they recommend reading like this: CREATE TEMPORAR...

Latest Reply
CrisBerg_65149
New Contributor III
  • 6 kudos

Got support from Databricks. Unfortunately, someone had created a DB called delta, so the query was run against that DB instead. Issue was solved.

5 More Replies
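A sketch of why the resolution works: the SQL form delta.`path` resolves to a database named delta if one exists, which is exactly the collision in this thread. Reading the path through the DataFrame API sidesteps name resolution entirely.

    # Unambiguous regardless of what databases exist in the metastore.
    df = spark.read.format("delta").load("s3://some_path")

    # The SQL form is only path-style access when no `delta` database shadows it.
    spark.sql("SELECT * FROM delta.`s3://some_path`")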
Development
by New Contributor III
  • 2445 Views
  • 8 replies
  • 5 kudos

Delta Table with 130 columns taking time

Hi all, we are facing an unusual issue while loading data into a Delta table using Spark SQL. We have one Delta table which has around 135 columns and is also PARTITIONED BY. We are trying to load 15 million records into it, but it's not loading ...

Latest Reply
Development
New Contributor III
  • 5 kudos

@Kaniz Fatma @Parker Temple I found the root cause: it's because of serialization. We are using a UDF to derive a column on the dataframe; when we try to load data into the Delta table or write data into a Parquet file, we face the serialization issue ...

7 More Replies
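A sketch of the two standard ways around a slow row-at-a-time Python UDF like the one blamed above: prefer built-in column expressions, and where custom logic is unavoidable, use a vectorized pandas UDF so serialization happens in Arrow batches instead of per row. Column names are hypothetical.

    import pandas as pd
    from pyspark.sql import functions as F
    from pyspark.sql.functions import pandas_udf

    # Option 1: built-in functions run entirely in the JVM, with no Python serialization.
    df = df.withColumn("derived", F.concat_ws("-", F.col("col_a"), F.col("col_b")))  # hypothetical columns

    # Option 2: a vectorized pandas UDF exchanges whole Arrow batches instead of rows.
    @pandas_udf("string")
    def normalize(s: pd.Series) -> pd.Series:
        return s.str.strip().str.upper()

    df = df.withColumn("normalized", normalize(F.col("col_a")))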