Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

kll
by New Contributor III
  • 2258 Views
  • 2 replies
  • 0 kudos

pass a tuple as parameter to sql query

at_lst = ['131','132','133']; at_tup = (*at_lst,); print(at_tup) prints ('131','132','133'). In my SQL query, I am trying to pass this as a parameter; however, it doesn't work: %sql select * from ma...

Latest Reply
kll
New Contributor III
  • 0 kudos

@Kaniz_Fatma  I am writing sql using the magic command in the cell block, `%%sql`. Is there a way to pass a parameter in the query without using the `execute` method of the cursor object? Can you please share an example? 

1 More Replies
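One common way to handle the question above is to flatten the Python list into a SQL IN clause before running the query. A minimal sketch, assuming the query is issued from Python with `spark.sql` rather than a `%sql` magic cell; `my_table` and `at_cd` are hypothetical names:

```python
# Sketch: turning a Python list into a SQL IN clause.
# Only safe for trusted, non-user-supplied values.
at_lst = ['131', '132', '133']

in_clause = ", ".join(f"'{v}'" for v in at_lst)
query = f"SELECT * FROM my_table WHERE at_cd IN ({in_clause})"
print(query)
# SELECT * FROM my_table WHERE at_cd IN ('131', '132', '133')
# On Databricks you would then run: df = spark.sql(query)
```

Recent Spark versions also accept named parameter markers in `spark.sql`, which avoids string interpolation for untrusted input.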
esi
by New Contributor
  • 1979 Views
  • 1 replies
  • 0 kudos

Ingesting PowerBI Tables to databricks

Hi Community, I am looking for a way to access Power BI tables from Databricks and import them as a Spark dataframe into my Databricks notebook. As far as I have seen, there is a Power BI connector to load data from Databricks into Power BI, but not...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @esi, I'm sorry, but currently, there is no direct way to import Power BI tables into Databricks as a Spark dataframe. The connectors available are designed to work in the opposite direction, i.e., to load data from Databricks into Power BI for vi...

Automation-path
by New Contributor II
  • 1838 Views
  • 1 replies
  • 0 kudos
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Automation-path, Creating relationships between tables in a database, often called establishing "joins," is not explicitly mentioned in the provided sources. However, it can be inferred from the SQL code examples.  You typically use SQL JOIN cla...

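The reply above mentions SQL JOIN clauses without room for a full example. A minimal, self-contained illustration using Python's built-in sqlite3 module; the tables and rows are invented for demonstration, and the same JOIN syntax applies in Databricks SQL:

```python
import sqlite3

# In-memory database with two tables linked by a foreign key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')")
cur.execute("INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0)")

# JOIN relates rows across the two tables via the shared key.
rows = cur.execute(
    "SELECT c.name, SUM(o.amount) FROM customers c "
    "JOIN orders o ON o.customer_id = c.id "
    "GROUP BY c.name ORDER BY c.name"
).fetchall()
print(rows)  # [('Ada', 65.0), ('Grace', 15.0)]
```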
Automation-path
by New Contributor II
  • 2896 Views
  • 1 replies
  • 0 kudos
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Automation-path, Normalization is a process in database design that organizes data to minimize redundancy and avoid anomalies. It involves dividing larger tables into smaller tables and linking them using relationships. The primary goals of norm...

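To make the normalization idea concrete, here is a hedged sketch with Python's sqlite3: a denormalized table is split into two related tables linked by a key. All table names and rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized: the customer's city is repeated on every order row.
cur.execute("CREATE TABLE orders_flat (order_id INTEGER, customer TEXT, city TEXT)")
cur.executemany("INSERT INTO orders_flat VALUES (?, ?, ?)",
                [(1, 'Ada', 'London'), (2, 'Ada', 'London'), (3, 'Grace', 'NYC')])

# Normalized: customer data lives in one place; orders reference it by key.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.execute("INSERT INTO customers(name, city) "
            "SELECT DISTINCT customer, city FROM orders_flat")
cur.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER)")
cur.execute("INSERT INTO orders SELECT o.order_id, c.id FROM orders_flat o "
            "JOIN customers c ON c.name = o.customer")

# Each customer/city pair is now stored once; updating it touches one row.
n_city_rows = cur.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(n_city_rows)  # 2
```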
Automation-path
by New Contributor II
  • 1377 Views
  • 1 replies
  • 0 kudos
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Automation-path , A Relational Database Management System (RDBMS) is a type of database management system that stores data in a structured format, using rows and columns. This type of system supports a relational model, meaning that the data and ...

Automation-path
by New Contributor II
  • 581 Views
  • 1 replies
  • 0 kudos

Data categories for databases

Is there a way to automate data categorisation with OpenAI API?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Automation-path, you can automate data categorisation using the OpenAI API. Databricks provides a built-in SQL function `ai_generate_text()` that allows you to access large language models (LLMs) like OpenAI directly from SQL. This function can be u...

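As a rough sketch of the categorisation idea, the prompt-building half can be shown in plain Python. The category list and record text are invented, and the actual model call (through the OpenAI API or Databricks' `ai_generate_text()`) is only indicated in a comment:

```python
# Sketch: building a categorisation prompt for an LLM call.
# CATEGORIES and the record text are hypothetical examples.
CATEGORIES = ["finance", "healthcare", "retail", "other"]

def build_prompt(record_text: str) -> str:
    """Return a prompt asking the model to pick exactly one category."""
    cats = ", ".join(CATEGORIES)
    return (
        f"Classify the following record into exactly one of: {cats}.\n"
        "Answer with the category name only.\n"
        f"Record: {record_text}"
    )

prompt = build_prompt("Quarterly revenue rose 4% on strong card-payment volume.")
print(prompt.splitlines()[0])
# The prompt would then be sent to the model, e.g. via the OpenAI API,
# or embedded in SQL with ai_generate_text(...) on Databricks.
```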
lin
by New Contributor
  • 606 Views
  • 0 replies
  • 0 kudos

Facing UNKNOWN_FIELD_EXCEPTION.NEW_FIELDS_IN_FILE

[UNKNOWN_FIELD_EXCEPTION.NEW_FIELDS_IN_FILE] Encountered unknown fields during parsing: [<field_name>], which can be fixed by an automatic retry: true. I am using Azure Databricks and writing Python code. I want to catch the error and raise. Tried wi...

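The post above is cut off, but the general pattern for catching a specific error class by inspecting the exception message, then re-raising anything else, can be sketched. The failing Spark call is replaced here by a plain stand-in exception so the snippet is self-contained:

```python
# Sketch: catching an error by its error-class string and re-raising others.
# In a real notebook the try block would wrap the streaming read/write.

def read_stream():
    # Stand-in for the Spark call that fails on schema evolution.
    raise Exception("[UNKNOWN_FIELD_EXCEPTION.NEW_FIELDS_IN_FILE] "
                    "Encountered unknown fields during parsing: [colX]")

caught = None
try:
    read_stream()
except Exception as e:
    if "UNKNOWN_FIELD_EXCEPTION.NEW_FIELDS_IN_FILE" in str(e):
        caught = str(e)   # log/alert here, then re-raise or retry as needed
    else:
        raise

print(caught is not None)  # True
```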
od
by New Contributor
  • 581 Views
  • 1 replies
  • 0 kudos

How do I regulate notebook cache

I am experiencing an over-caching problem in my Databricks notebook. If I display different dfs, one of the dfs gets cached, which then affects the results of the others afterwards. How can I avoid the cache while using the notebook?

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

I don't exactly understand what your issue is. Can you elaborate more?

kurtrm
by New Contributor III
  • 4097 Views
  • 5 replies
  • 5 kudos

Resolved! How to send alert when cluster is running for too long

Hello, our team recently experienced an issue where a teammate started a new workflow job and then went on vacation. The job ended up running continuously, without failing, for 4.5 days. The usage of the cluster did not seem out of place during the workday...

Latest Reply
kurtrm
New Contributor III
  • 5 kudos

@Kaniz_Fatma, I ended up creating a job leveraging the Databricks Python SDK to check cluster and active job run times. The script will raise an error and notify the team if the cluster hasn't terminated or restarted in the past 24 hours, or if a job h...

4 More Replies
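The decision logic the reply describes can be sketched without the SDK itself. The cluster names and timestamps below are invented; in a real job they would come from the Databricks SDK (e.g. a `WorkspaceClient` cluster listing), which the reply references:

```python
from datetime import datetime, timedelta, timezone

# Sketch: flag clusters that have not restarted within 24 hours.
MAX_AGE = timedelta(hours=24)

def stale_clusters(last_restart_by_name: dict, now: datetime) -> list:
    """Return names of clusters whose last restart is older than MAX_AGE."""
    return sorted(name for name, ts in last_restart_by_name.items()
                  if now - ts > MAX_AGE)

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
clusters = {
    "etl-prod":  datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc),  # 30h old
    "adhoc-dev": datetime(2024, 1, 2, 9, 0, tzinfo=timezone.utc),  # 3h old
}
print(stale_clusters(clusters, now))  # ['etl-prod']
# A real job would raise or send a notification when this list is non-empty.
```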
mderela
by New Contributor II
  • 3622 Views
  • 6 replies
  • 2 kudos

Resolved! Ingestion time clustering

Hello, in reference to https://www.databricks.com/blog/2022/11/18/introducing-ingestion-time-clustering-dbr-112.html I have a silly question about how to use it. So let's assume that I have a few TB of non-partitioned data. If I would like to query on dat...

Latest Reply
Kaniz_Fatma
Community Manager
  • 2 kudos

Hi @mderela, if you have ingested data using ingestion time clustering, you can use the ingesttimestamp column to filter data based on when it was ingested. Your query would look like this: SELECT * FROM mytable WHERE ingesttimestamp >= current_tim...

5 More Replies
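The truncated query in the reply can be sketched as a small helper that builds the filter for a look-back window. `mytable` and `ingesttimestamp` follow the reply's example; the 7-day interval is an arbitrary illustration:

```python
# Sketch: build the ingestion-window filter the reply shows,
# parameterised by table name and look-back window in days.
def ingestion_window_query(table: str, days: int) -> str:
    return (f"SELECT * FROM {table} "
            f"WHERE ingesttimestamp >= current_timestamp() - INTERVAL {days} DAYS")

query = ingestion_window_query("mytable", 7)
print(query)
# On Databricks this string would be run with spark.sql(query).
```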
karankumara
by New Contributor
  • 573 Views
  • 0 replies
  • 0 kudos

DBX Sync Command --unmatched-behaviour=unspecified-delete-unmatched not working

We are using the dbx command to sync objects from the local machine to the Databricks workspace, using the command below to sync the data: dbx sync workspace --unmatched-behaviour=unspecified-delete-unmatched -s /tmp -d /tmp. We have deleted some files loca...

Piyush
by New Contributor
  • 2898 Views
  • 1 replies
  • 1 kudos

how to resolve "Error pushing changes" Remote ref update was rejected issue

How to resolve the "Error pushing changes: Remote ref update was rejected" issue, even after having full edit access on the remote ADO repo?

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @Piyush, the "Error pushing changes" issue you're facing could be due to several reasons. 1. Shallow Checkout: If you're working with a shallow clone of the repository, you might face issues while pushing changes. This is because shallow clones d...

