Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Cameron_Afzal
by New Contributor II
  • 1592 Views
  • 1 replies
  • 0 kudos

I'm unable to create an account for Databricks Community Edition. I've tried multiple email addresses and browsers across multiple attempts. I fill out and submit the sign-up form but never receive the email and thus can't log in. Any advice? Are the...

Latest Reply
tipu
New Contributor II
  • 0 kudos

I have tried the same thing and it doesn't work. Can someone please help us?

LukaszJ
by Contributor III
  • 1214 Views
  • 0 replies
  • 0 kudos

Real time query plotting

Hello, I have a table on Azure Databricks that I keep updating with the "A" notebook, and I want to plot the query result from the table in real time (let's say SELECT COUNT(name), name FROM my_schema.my_table GROUP BY name). I know about Azure Applica...

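The "real time" part of the question is essentially a poll-and-replot loop: re-run the GROUP BY on a schedule and redraw the chart. A minimal local sketch of that loop, using an in-memory SQLite table as a stand-in for `my_schema.my_table` (in a Databricks notebook the query would instead be `spark.sql(...)` followed by `display()`):

```python
import sqlite3

# Local stand-in for my_schema.my_table; in Databricks this would be
# spark.sql("SELECT COUNT(name), name FROM my_schema.my_table GROUP BY name")
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (name TEXT)")
conn.executemany("INSERT INTO my_table VALUES (?)",
                 [("alice",), ("bob",), ("alice",)])

def poll_counts(conn):
    """One polling iteration: re-run the GROUP BY and return name -> count."""
    rows = conn.execute(
        "SELECT COUNT(name), name FROM my_table GROUP BY name ORDER BY name"
    ).fetchall()
    return {name: count for count, name in rows}

# A dashboard loop would call poll_counts() every few seconds and redraw the
# chart; here we show one tick before and after the "A" notebook writes a row.
before = poll_counts(conn)
conn.execute("INSERT INTO my_table VALUES ('bob')")
after = poll_counts(conn)
```

The table name and columns match the query in the post; the polling interval and plotting layer (Databricks `display()`, Power BI, etc.) are left open, as the post itself is.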
LukaszJ
by Contributor III
  • 2584 Views
  • 2 replies
  • 1 kudos

Table access control cluster with R language

Hello, I want to have a high concurrency cluster with table access control, and I want to use the R language on it. I know that the documentation says that R and Scala are not available with table access control, but maybe you have some tricks or best practic...

Latest Reply
Aashita
Databricks Employee
  • 1 kudos

@Łukasz Jaremek, currently it is only available in Python and SQL.

1 More Replies
samrachmiletter
by New Contributor III
  • 5601 Views
  • 2 replies
  • 5 kudos

Resolved! Is it possible to set order of precedence of spark SQL extensions?

I have the Iceberg SQL extension installed, but running commands such as MERGE INTO results in the error pyspark.sql.utils.AnalysisException: MERGE destination only supports Delta sources. This seems to be due to using Delta's MERGE command as opposed ...

Latest Reply
samrachmiletter
New Contributor III
  • 5 kudos

This does help. I tried going through the DataFrameReader as well but ran into the same error, so it seems it is indeed not possible. Thank you @Hubert Dudek!

1 More Replies
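For context on the question: `spark.sql.extensions` is a comma-separated list, and the listed extensions are applied in order. A cluster registering both extensions might carry Spark config along these lines (the class names are the standard Delta and Iceberg session-extension classes, but treat this as a sketch, not a verified cluster config):

```
spark.sql.extensions io.delta.sql.DeltaSparkSessionExtension,org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.spark_catalog org.apache.spark.sql.delta.catalog.DeltaCatalog
```

As the accepted answer implies, reordering this list does not resolve the conflict: both extensions inject analysis rules for MERGE INTO, and the Delta rule rejects non-Delta destinations regardless of registration order.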
Hila_DG
by Databricks Partner
  • 5435 Views
  • 5 replies
  • 4 kudos

Resolved! How to proactively monitor the use of the cache for driver node?

The problem: we have a dataframe which is based on the query SELECT * FROM Very_Big_Table. This table returns over 4 GB of data, and when we try to push the data to Power BI we get the error message: ODBC: ERROR [HY000] [Microsoft][Hardy] (35) Error from...

Latest Reply
Anonymous
Not applicable
  • 4 kudos

Hey @Hila Galapo, hope everything is going well. Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you. Thanks!

4 More Replies
findinpath
by Contributor
  • 7260 Views
  • 7 replies
  • 4 kudos

Resolved! Please share Databricks JDBC Driver on Maven Central

Can you please share the Databricks JDBC Driver on Maven Central? I see it available at https://databricks.com/spark/jdbc-drivers-download. However, I can't find it on Maven Central to make use of it in automated tests connecting to Databricks infr...

Latest Reply
findinpath
Contributor
  • 4 kudos

Thank you for the assistance and for releasing the JDBC driver to Maven Central. I consider the issue closed.

6 More Replies
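Since the thread confirms the driver was released to Maven Central, the dependency would look roughly like this (the coordinates `com.databricks:databricks-jdbc` are the published artifact; the version number is illustrative, so check Maven Central for the current release):

```xml
<dependency>
  <groupId>com.databricks</groupId>
  <artifactId>databricks-jdbc</artifactId>
  <!-- illustrative version; check Maven Central for the current release -->
  <version>2.6.25</version>
</dependency>
```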
herry
by New Contributor III
  • 3868 Views
  • 3 replies
  • 0 kudos

Resolved! Using AWS glue schema registry in Databricks Autoloader

Hi all, I plan to store the schema of my table in the AWS Glue schema registry. Is there any simple way to use it in Databricks Autoloader? My goal is to build a data pipeline with Autoloader for schema validation.

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hey there @Herry Ramli, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

2 More Replies
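One possible bridge between the two systems (an assumption on my part, not something confirmed in the thread): Glue stores Avro schema definitions as JSON, and Autoloader accepts a `cloudFiles.schemaHints` option as a `"col type, col type"` string. The sketch below inlines a sample Avro schema where a real pipeline would fetch it with `boto3` (`glue.get_schema_version(...)["SchemaDefinition"]`), and the `AVRO_TO_SPARK` mapping is a hypothetical, deliberately incomplete type table:

```python
import json

# In the real pipeline the schema definition would come from the registry,
# e.g. boto3.client("glue").get_schema_version(...)["SchemaDefinition"].
# Here we inline a sample Avro record schema to keep the sketch self-contained.
definition = json.dumps({
    "type": "record",
    "name": "customer",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},
    ],
})

# Hypothetical mapping from Avro primitive types to Spark SQL type names.
AVRO_TO_SPARK = {"long": "bigint", "string": "string", "int": "int",
                 "double": "double", "boolean": "boolean"}

def schema_hints(avro_json: str) -> str:
    """Turn an Avro record schema into a schemaHints string for Autoloader."""
    fields = json.loads(avro_json)["fields"]
    return ", ".join(f"{f['name']} {AVRO_TO_SPARK[f['type']]}" for f in fields)

hints = schema_hints(definition)
# hints could then be passed to the Autoloader stream as:
#   .option("cloudFiles.schemaHints", hints)
```

This only covers flat records with primitive fields; nested Avro types would need a fuller converter.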
KrishnaWahi
by New Contributor II
  • 4125 Views
  • 3 replies
  • 2 kudos

Resolved! Query Databricks SQL Endpoint Database from NodeJs

I have my Databricks SQL Endpoint running, and I have created many tables there using AWS S3 Delta. Now I want to query the Databricks SQL Endpoint from NodeJs. Is it possible? I tried to find and researched a lot but didn't get any useful tutorial fo...

Latest Reply
BilalAslamDbrx
Databricks Employee
  • 2 kudos

We are very interested in supporting two capabilities; in fact, these are being worked on already: the ability to issue a SQL query over REST (track its status, retrieve results, etc.), and a NodeJS SDK for Databricks SQL. Keep an eye on our release notes.

2 More Replies
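The SQL-over-REST capability mentioned in the reply was still being worked on, so the request shape below is purely a hypothetical sketch (the endpoint path, field names, and `endpoint_id` parameter are all assumptions, not a documented API). It only illustrates the general pattern of submitting a SQL statement over HTTP from any language, NodeJS included:

```python
import json

# Hypothetical request shape for a SQL-over-REST call; the endpoint path and
# body fields are assumptions, since the API was unreleased at the time.
def build_statement_request(host: str, endpoint_id: str, query: str) -> dict:
    """Assemble URL, headers, and body for submitting a SQL statement."""
    return {
        "url": f"https://{host}/api/2.0/sql/statements",
        "headers": {"Authorization": "Bearer <personal-access-token>",
                    "Content-Type": "application/json"},
        "body": json.dumps({"endpoint_id": endpoint_id, "statement": query}),
    }

req = build_statement_request("example.cloud.databricks.com",
                              "abc123", "SELECT 1")
# An HTTP client (fetch/axios in NodeJS, requests in Python) would POST this
# and then poll a status URL until the results are ready.
```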
Jessy
by New Contributor
  • 26373 Views
  • 6 replies
  • 3 kudos
Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Pierre MASSE, thank you so much for getting back to us. It's really great of you to send in the solution and mark the answer as best. We really appreciate your time. Wish you a great Databricks journey ahead!

5 More Replies
kenldk
by New Contributor III
  • 5397 Views
  • 4 replies
  • 4 kudos

Resolved! When will the bills from Databricks arrive?

I am using Databricks for the first time, and after 3 months I haven't seen a single bill from Databricks. However, the accumulated usage has reached $180, and my workspace status is still running. Do I need to terminate my workspace to get billed...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 4 kudos

@Ken Lei, additionally, in the case of Azure, the bill is added to the Microsoft invoice.

3 More Replies
Kiedi7
by Databricks Partner
  • 5624 Views
  • 2 replies
  • 3 kudos

Resolved! Databricks Community Edition - Enable Databricks SQL

Hi, I am trying to enable the Databricks SQL environment from the Community Edition workspace (using the left menu pane). However, the only options I see in the dropdown menu are the Data Science & Eng and Machine Learning workspaces/environments. Is Databricks...

Latest Reply
BilalAslamDbrx
Databricks Employee
  • 3 kudos

+1 on what Andrew said. We are not currently planning on enabling Databricks SQL in Community. Please use a trial account on one of the supported clouds.

1 More Replies
SimhadriRaju
by New Contributor
  • 5996 Views
  • 1 replies
  • 1 kudos

rename a mount point folder

I am reading data from a folder /mnt/lake/customer, where /mnt/lake is the mount path referring to ADLS Gen 2. Now I would like to rename the folder from /mnt/lake/customer to /mnt/lake/customeraddress without copying the data from one folder to ano...

Latest Reply
Atanu
Databricks Employee
  • 1 kudos

This might help, @Simhadri Raju: https://docs.databricks.com/data/databricks-file-system.html#local-file-api-limitations

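The distinction behind the linked limitations doc: on a plain POSIX filesystem a directory rename is a metadata-only operation, whereas on a DBFS mount backed by object storage the closest equivalent, `dbutils.fs.mv("/mnt/lake/customer", "/mnt/lake/customeraddress", recurse=True)`, may be implemented as copy-then-delete. A local sketch of the metadata-only case (file and directory names are illustrative):

```python
import os
import tempfile

# Demonstrate a metadata-only directory rename on a local filesystem.
# On a DBFS mount over ADLS Gen2, dbutils.fs.mv(..., recurse=True) is the
# nearest equivalent, but per the limitations doc it may copy the data.
root = tempfile.mkdtemp()
src = os.path.join(root, "customer")
dst = os.path.join(root, "customeraddress")
os.makedirs(src)
with open(os.path.join(src, "part-0000.parquet"), "wb") as f:
    f.write(b"demo")

os.rename(src, dst)  # atomic rename: no file data is rewritten

renamed = sorted(os.listdir(dst))
```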
mani238
by New Contributor III
  • 6688 Views
  • 4 replies
  • 4 kudos
Latest Reply
mani238
New Contributor III
  • 4 kudos

Hi @Kaniz Fatma, I got the solution based on @Hubert Dudek's answer. Thanks @Hubert Dudek. Another doubt: how do I automate the Azure Synapse concept? Please help me. Thanks!

3 More Replies
Vik1
by New Contributor II
  • 7911 Views
  • 4 replies
  • 2 kudos

Resolved! Data persistence, Dataframe, and Delta

I am new to the Databricks platform. What is the best way to keep data persistent so that once I restart the cluster I don't need to run all the code again, and can simply continue developing my notebook with the cached data? I have created many dat...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

Hey there @Vivek Ranjan, hope you are doing great! Just wanted to check in: were you able to resolve your issue, or do you need more help? We'd love to hear from you. Thanks!

3 More Replies
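On the question itself: caches live only as long as the cluster, so the usual pattern is to write intermediate results as tables (e.g. `df.write.format("delta").mode("overwrite").saveAsTable("dev.checkpoint")` and later `spark.table("dev.checkpoint")`). A minimal local stand-in for that save/reload pattern, with a JSON file playing the role of the saved table (the file name and payload are illustrative):

```python
import json
import os
import tempfile

# Stand-in for saveAsTable / spark.table: persist a computed result to
# storage so a fresh process (a restarted cluster) can reload it instead
# of recomputing.
path = os.path.join(tempfile.mkdtemp(), "checkpoint.json")

expensive_result = {"rows": [1, 2, 3]}   # imagine a long computation
with open(path, "w") as f:
    json.dump(expensive_result, f)       # df.write...saveAsTable in Databricks

with open(path) as f:                    # after restart: spark.table(...)
    reloaded = json.load(f)
```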
Orianh
by Valued Contributor II
  • 4231 Views
  • 0 replies
  • 0 kudos

Retrieve a row from indexed spark data frame.

Hello guys, I'm having an issue when trying to get row values from a Spark dataframe. I have a DF with an index column, and I need to be able to return a row based on the index as fast as possible. I tried to partitionBy the index column and optimize with zor...

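The intuition behind Z-ordering by the index column (which the post is reaching for) is that sorted data plus data skipping turns a full scan into a narrow probe. A local analogy of that idea, with binary search standing in for file skipping (the data and lookup function are illustrative, not Spark code):

```python
import bisect

# A table Z-ordered by an index column lets the engine skip files whose
# min/max statistics exclude the key; locally the same idea is a sorted
# list probed with binary search.
rows = [(3, "c"), (1, "a"), (2, "b")]
rows.sort(key=lambda r: r[0])       # analogous to OPTIMIZE ... ZORDER BY (index)
keys = [r[0] for r in rows]

def lookup(idx):
    """Return the row whose index column equals idx, or None."""
    pos = bisect.bisect_left(keys, idx)
    if pos < len(keys) and keys[pos] == idx:
        return rows[pos]
    return None
```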