Get Started Discussions
Forum Posts

karthik_p
by Esteemed Contributor
  • 1211 Views
  • 2 replies
  • 1 kudos

Resolved! Do we need to request Databricks to enable MosaicML?

Hi Team, I am not seeing any specific articles/guides on using MosaicML on Databricks. After the MosaicML acquisition, has anything changed in terms of how MosaicML is used, or do we just use the regular functions?

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @karthik_p, Together we can build a thriving community of shared knowledge and insights. Come back and mark the best answers to contribute to our ongoing pursuit of excellence.

1 More Replies
hexoffender
by New Contributor
  • 538 Views
  • 1 replies
  • 0 kudos

case statements return same value

I have these 4 case statements count(*) as Total_claim_reciepts,count(case when claim_id like '%M%' and receipt_flag = 1 and is_firstpassclaim = 1 then 0 else claim_id end) as Total_claim_reciepts,count(case when claim_status ='DENIED' and claim_repa...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @hexoffender, It seems that the four case statements contain some logic overlap, which is probably causing the same output to be produced for all four cases. For example, the second case statement counts the total number of claim receipts where c...

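A plain-Python sketch of the likely cause (the column names are taken from the post; the sample rows are invented): SQL's COUNT counts every non-NULL value, and `CASE WHEN ... THEN 0 ELSE claim_id END` returns a non-NULL value on both branches, so each conditional count collapses to the plain row count. Dropping the ELSE, so that non-matching rows yield NULL, restores a per-condition count:

```python
# Why COUNT(CASE ... THEN 0 ELSE claim_id END) equals COUNT(*):
# hypothetical rows of (claim_id, receipt_flag, is_firstpassclaim)
rows = [
    ("M001", 1, 1),
    ("X002", 1, 1),
    ("M003", 0, 0),
]

def count_nonnull(values):
    # SQL COUNT(expr) counts values that are not NULL (None here)
    return sum(v is not None for v in values)

# Buggy pattern: both CASE branches return a non-NULL value,
# so every row is counted regardless of the condition.
buggy = count_nonnull(
    0 if ("M" in cid and rf == 1 and fp == 1) else cid
    for cid, rf, fp in rows
)

# Fixed pattern: no ELSE, so non-matching rows yield NULL and are skipped.
fixed = count_nonnull(
    cid if ("M" in cid and rf == 1 and fp == 1) else None
    for cid, rf, fp in rows
)

print(buggy, fixed)  # buggy counts all 3 rows; fixed counts only the 1 match
```

The equivalent fix in the SQL itself is `COUNT(CASE WHEN ... THEN claim_id END)` or `SUM(CASE WHEN ... THEN 1 ELSE 0 END)`.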
nicolas1
by New Contributor
  • 561 Views
  • 1 replies
  • 0 kudos

Suggestions for Python are not working

Tab and Shift+Tab are not suggesting anything in Python code. Tab only works as indentation, and Shift+Tab does nothing. What can I do?

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @nicolas1, If you're encountering issues with auto-completion in Databricks Notebooks, here are some steps to troubleshoot and resolve the problem: Check Language Settings: Ensure that the language kernel for your notebook is correctly set. For...

databicky
by Contributor II
  • 2406 Views
  • 3 replies
  • 0 kudos

how to edit or delete the post in this community

how to edit or delete the post in this community

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @databicky and @PankajKadam, you can definitely edit/delete your post from this toggle on the right-hand side of your post. Let me know if you don't have this option.    

2 More Replies
AdamStra2
by New Contributor III
  • 871 Views
  • 3 replies
  • 1 kudos

Web terminal and clusters

Hi, I have come across this piece of documentation: Databricks does not support running Spark jobs from the web terminal. In addition, Databricks web terminal is not available in the following cluster types: Job clusters, Clusters launched with the DISAB...

Latest Reply
AdamStra2
New Contributor III
  • 1 kudos

Hi @Kaniz ,any update on my question? Thanks.

2 More Replies
sm1274
by New Contributor
  • 1281 Views
  • 1 replies
  • 0 kudos

Creating java UDF for Spark SQL

Hello, I have created a sample Java UDF which masks a few characters of a string. However, I'm facing a couple of issues when uploading and using it. First, I could only import it, which for now is OK. But when I do the following: create function udf_mask as 'ba...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @sm1274, The error message you received indicates that the CREATE FUNCTION statement is not supported on a Databricks SQL endpoint. The specific error you're seeing indicates that you're trying to run the CREATE FUNCTION...

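For context, a minimal sketch of the masking logic described in the post, in plain Python (the keep-last-4 behavior, class name, and jar path below are placeholders, not details from the post). On an interactive cluster, as opposed to a SQL endpoint, the Java class would be registered with Spark SQL's `CREATE FUNCTION ... USING JAR` statement, shown here in comments:

```python
# Masking logic comparable to the Java UDF described above
# (hypothetical behavior: keep the last 4 characters, mask the rest).
def mask(value, keep=4):
    if value is None or len(value) <= keep:
        return value
    return "*" * (len(value) - keep) + value[-keep:]

# On an interactive (non-SQL-endpoint) cluster, the Java class would be
# registered roughly like this (class name and jar path are placeholders):
#   CREATE FUNCTION udf_mask AS 'com.example.MaskUdf'
#   USING JAR '/path/to/udf.jar';

print(mask("4111111111111111"))  # -> ************1111
```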
s_park
by Valued Contributor II
  • 15264 Views
  • 3 replies
  • 4 kudos

Training @ Data & AI World Tour 2023

Join your peers at the Data + AI World Tour 2023! Explore the latest advancements, hear real-world case studies and discover best practices that deliver data and AI transformation. From the Databricks Lakehouse Platform to open source technologies in...

Get Started Discussions
DAIWT
DAIWT_2023
Training
User_Group
2 More Replies
sg-vtc
by New Contributor III
  • 1045 Views
  • 1 replies
  • 1 kudos

Resolved! Problem creating external delta table on non-AWS s3 bucket

I am testing Databricks with non-AWS S3 object storage. I can access the non-AWS S3 bucket by setting these parameters: sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "XXXXXXXXXXXXXXXXXXXX") sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key...

Get Started Discussions
external delta table
Latest Reply
sg-vtc
New Contributor III
  • 1 kudos

Found the solution to disable it.  Can close this question.

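For reference, pointing the S3A connector at a non-AWS endpoint typically also involves endpoint and path-style settings alongside the credentials shown in the post. A hedged sketch of the cluster Spark config (the endpoint URL is a placeholder, not a value from the post):

```
spark.hadoop.fs.s3a.endpoint https://s3.example.internal
spark.hadoop.fs.s3a.path.style.access true
spark.hadoop.fs.s3a.access.key XXXXXXXXXXXXXXXXXXXX
spark.hadoop.fs.s3a.secret.key <secret>
```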
Data_Analytics1
by Contributor III
  • 889 Views
  • 2 replies
  • 0 kudos

Getting secret from Key Vault of previous version

Hi, I have added secrets in Azure Key Vault and also updated them a few times. I need to access the current as well as a previous version of a secret in a data pipeline. dbutils.secrets.get(KeyName, SecretScopeName) This gives me the current version of the secret. How can...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Data_Analytics1, To access a specific version of a secret in Azure Key Vault using dbutils.secrets.get(), you need to append the version number to the secret name.

1 More Replies
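If appending the version to the secret name does not work in your scope, another option is to call Key Vault directly. A sketch assuming the `azure-keyvault-secrets` SDK is installed on the cluster; the vault URL and version id below are placeholders, not values from the post:

```python
# Sketch assuming the azure-keyvault-secrets SDK.
def get_secret_value(client, name, version=None):
    # version=None returns the latest; pass a version id for an older one
    return client.get_secret(name, version=version).value

# Real usage would look roughly like (placeholders, not values from the post):
#   from azure.identity import DefaultAzureCredential
#   from azure.keyvault.secrets import SecretClient
#   client = SecretClient("https://myvault.vault.azure.net",
#                         DefaultAzureCredential())
#   old = get_secret_value(client, "MySecret", version="abc123")
```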
AdamStra2
by New Contributor III
  • 10619 Views
  • 1 replies
  • 3 kudos

Resolved! Schema owned by Service Principal shows error in PBI

Background info: 1. We have Unity Catalog enabled. 2. All of our jobs are run by a Service Principal that has all the necessary access it needs. Issue: One of the jobs checks existing schemas against the ones it is supposed to create in that given run and if ...

Latest Reply
Kaniz
Community Manager
  • 3 kudos

Hi @AdamStra2, This may be related to ownership chaining in SQL Server. Ownership chaining is a security feature in SQL Server that's designed to allow users to access objects in a database without requiring explicit permissions on the object itself....

HHol
by New Contributor
  • 635 Views
  • 0 replies
  • 0 kudos

How to retrieve a Job Name from the SparkContext

We are currently starting to build certain data pipelines using Databricks.For this we use Jobs and the steps in these Jobs are implemented in Python Wheels.We are able to retrieve the Job ID, Job Run ID and Task Run Id in our Python Wheels from the ...

cmilligan
by Contributor II
  • 3002 Views
  • 7 replies
  • 1 kudos

Long run time with %run command

My team has started to see long run times on cells when using the %run command to run another notebook. The notebook that we are calling with %run only contains variable settings, function definitions, and library imports. In some cases I have seen in ...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @cmilligan ,  - Long run times with %run command could be due to notebook size and complexity, Databricks cluster load, and network latency.- %run command executes another notebook immediately, making its functions and variables available in the c...

6 More Replies
Ajbi
by New Contributor II
  • 1489 Views
  • 2 replies
  • 0 kudos

NATIVE_XML_DATA_SOURCE_NOT_ENABLED

I'm trying to read an XML file and receiving the following error. I've installed the Maven library spark-xml on the cluster; however, I'm still receiving the error. Is there anything I'm missing? Error: AnalysisException: [NATIVE_XML_DATA_SOURCE_NOT_ENABLED] N...

Latest Reply
Ajbi
New Contributor II
  • 0 kudos

I've already tried spark.read.format('com.databricks.spark.xml'). It returns the same error.

1 More Replies
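While the cluster library issue is being sorted out, one stopgap is to parse the file with Python's standard library and build the rows yourself. A sketch; the `book` tag and fields below are invented for illustration, not from the post:

```python
import xml.etree.ElementTree as ET

def rows_from_xml(xml_text, row_tag):
    # Parse the document and flatten each <row_tag> element's children
    # into a dict of tag -> text.
    root = ET.fromstring(xml_text)
    return [
        {child.tag: child.text for child in elem}
        for elem in root.iter(row_tag)
    ]

sample = "<books><book><title>T1</title><year>2020</year></book></books>"
print(rows_from_xml(sample, "book"))
# The resulting list of dicts could then be handed to
# spark.createDataFrame(...) to get back into a DataFrame.
```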
liv1
by New Contributor II
  • 1063 Views
  • 2 replies
  • 1 kudos

Structured Streaming from a delta table that is a dump of kafka and get the latest record per key

I'm trying to use Structured Streaming in Scala to stream from a Delta table that is a dump of a Kafka topic, where each record/message is an update of attributes for the key and no messages from Kafka are dropped from the dump, but the value is flatt...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @liv1 ,  To get the latest message per key in your streaming job and perform stream-stream joins, you can use Databricks Delta's time travel feature in combination with foreachBatch(). You can use Delta's time travel feature to maintain the latest...

1 More Replies
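The latest-record-per-key step the reply describes amounts to, per micro-batch, keeping the row with the greatest timestamp for each key. In plain Python the reduction looks like this (a sketch with hypothetical field names):

```python
def latest_per_key(records):
    # records: iterable of (key, timestamp, value) tuples;
    # keep the max-timestamp value per key.
    latest = {}
    for key, ts, value in records:
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, value)
    return {k: v for k, (ts, v) in latest.items()}

batch = [("a", 1, "x"), ("a", 3, "z"), ("b", 2, "y"), ("a", 2, "w")]
print(latest_per_key(batch))  # -> {'a': 'z', 'b': 'y'}
```

Inside foreachBatch this is typically expressed as a window/row_number dedup over the micro-batch followed by a MERGE into the target Delta table.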
eric2
by New Contributor II
  • 1322 Views
  • 3 replies
  • 0 kudos

Databricks Delta table Insert Data Error

When trying to insert data into the Delta table in Databricks, an error occurs as shown below: [TASK_WRITE_FAILED] Task failed while writing rows to abfss://cont-01@dlsgolfzon001.dfs.core.windows.net/dir-db999_test/D_RGN_INFO_TMP. In SQL, the results ...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Seems OK to me. Have you tried displaying the data from table A and also the B/C join?

2 More Replies