- 3303 Views
- 0 replies
- 0 kudos
Hi team, I'm stuck on a Spark Structured Streaming use case. Requirement: read two streaming DataFrames, perform a left join on them, and display the results. Issue: while performing a left join, the resultant DataFrame contains only rows where ther...
by
dZegpi
• New Contributor II
- 1284 Views
- 1 reply
- 0 kudos
I'm working with Databricks and Google Cloud in the same project. I want to load specific datasets stored in GCP into an R notebook in Databricks. I can currently see the datasets in BigQuery. The problem is that, using the sparklyr package, I'm not ab...
by
vish93
• New Contributor II
- 1349 Views
- 0 replies
- 1 kudos
An AI art generator uses artificial intelligence to create captivating artworks, redefining the boundaries of traditional creativity and enabling endless artistic possibilities. AI photo restoration is a groundbreaking technology that employs artificial ...
by
Phani1
• Valued Contributor II
- 6046 Views
- 0 replies
- 0 kudos
Hi Team, could you please suggest: is there an alternative approach that alters the table in place, instead of creating a new table and copying the data, as part of the deployment?
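For reference: Delta tables support many in-place schema changes, so a drop-and-recreate with a data copy is often unnecessary. A sketch in Databricks SQL, with hypothetical table and column names (note that RENAME/DROP COLUMN additionally require column mapping to be enabled on the table):

```sql
-- Add a nullable column in place; no data copy needed
ALTER TABLE my_schema.my_table
  ADD COLUMNS (new_col STRING COMMENT 'added during deployment');

-- Enable column mapping (bumps the protocol versions) to unlock rename/drop
ALTER TABLE my_schema.my_table SET TBLPROPERTIES (
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5',
  'delta.columnMapping.mode' = 'name'
);

-- With column mapping enabled, rename and drop also work in place
ALTER TABLE my_schema.my_table RENAME COLUMN new_col TO renamed_col;
```

Changes that rewrite the physical layout (e.g. changing a column's type) still generally require rewriting the data, but additive changes like these can be done with `ALTER TABLE` alone.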
by
Xyrion
• New Contributor II
- 1605 Views
- 1 reply
- 0 kudos
I am trying to use the constraint options NOT ENFORCED, DEFERRABLE, INITIALLY DEFERRED, and NORELY. However, it seems I am not able to use them successfully. When I try to use them with PRIMARY KEY (not sure if it is possible), I am not able to enforce any key....
Latest Reply
BTW, the forum is bugged; I can't paste code.
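For reference: on Databricks, PRIMARY KEY and FOREIGN KEY constraints are informational only and require Unity Catalog; NOT ENFORCED is their only behavior, and DEFERRABLE, INITIALLY DEFERRED, and NORELY are accepted syntax that does not change enforcement. The constraints that are actually enforced at write time are NOT NULL and CHECK. A sketch with hypothetical names:

```sql
-- PK columns must be NOT NULL first; NOT NULL is genuinely enforced
ALTER TABLE my_catalog.my_schema.orders
  ALTER COLUMN order_id SET NOT NULL;

-- Informational primary key (requires Unity Catalog); always NOT ENFORCED
ALTER TABLE my_catalog.my_schema.orders
  ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);

-- CHECK constraints are enforced: writes violating them fail
ALTER TABLE my_catalog.my_schema.orders
  ADD CONSTRAINT positive_amount CHECK (amount > 0);
```

So if the goal is to reject duplicate keys at write time, a PRIMARY KEY declaration alone will not do it; deduplication has to happen in the write logic (e.g. MERGE).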
- 1752 Views
- 1 reply
- 0 kudos
I am trying to create a multi-task Databricks job in Azure with its own cluster. Although I was able to create a single-task job without any issues, the code to deploy the multi-task job fails due to the following cluster validation error: error:...
Latest Reply
Hello @Retired_mod, thanks for your answer, but the problem remains the same. I had already tested with different cluster configurations, single-node and multi-node, including cluster configurations that worked with single-task jobs, but the err...
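For reference: in a multi-task job, a shared cluster is declared once in the job-level `job_clusters` array and each task points at it via `job_cluster_key`, rather than each task carrying its own `new_cluster` block. A sketch of the Jobs API 2.1 payload shape, with hypothetical names and an Azure node type:

```json
{
  "name": "multi-task-example",
  "job_clusters": [
    {
      "job_cluster_key": "shared_cluster",
      "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2
      }
    }
  ],
  "tasks": [
    {
      "task_key": "task_a",
      "job_cluster_key": "shared_cluster",
      "notebook_task": { "notebook_path": "/path/to/notebook_a" }
    },
    {
      "task_key": "task_b",
      "depends_on": [ { "task_key": "task_a" } ],
      "job_cluster_key": "shared_cluster",
      "notebook_task": { "notebook_path": "/path/to/notebook_b" }
    }
  ]
}
```

A cluster validation error on deploy often means a task mixes `new_cluster` with `job_cluster_key`, or references a `job_cluster_key` that is not declared in `job_clusters`.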
by
Aria
• New Contributor III
- 9363 Views
- 2 replies
- 2 kudos
Hi, I am new to Databricks. We are trying to use Databricks Asset Bundles for code deployment. I have spent a lot of time on it, but still so many things are not clear to me. Can we change the target path of the notebooks deployed from /shared/.bundle/* to so...
Latest Reply
Hi @Retired_mod, thank you for your post. I thought it would solve my issues too; however, after reading your suggestion, it was nothing new for me, because I had already done exactly that. Here is what I have done, so you or anyone can replicate it: 1. ...
1 More Replies
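For reference: the deployment root of a bundle is controlled per target by `workspace.root_path` in `databricks.yml`. A minimal sketch, with hypothetical bundle and path names (`${bundle.name}` and `${bundle.target}` are built-in substitutions):

```yaml
bundle:
  name: my_bundle

targets:
  dev:
    workspace:
      # Overrides the default .bundle deployment root for this target
      root_path: /Workspace/Shared/my_team/${bundle.name}/${bundle.target}
```

After `databricks bundle deploy -t dev`, the bundle's notebooks land under that root instead of the default location.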
by
Phani1
• Valued Contributor II
- 622 Views
- 0 replies
- 0 kudos
Hi Team, my requirement is: I have File A from source A, which needs to be written into multiple Delta tables, i.e. DeltaTableA, DeltaTableB, and DeltaTableC. Is it possible to have a single instance of an Auto Loader script (multiple write streams)? Could you p...
- 1014 Views
- 0 replies
- 0 kudos
I want to connect to Databricks from KNIME on a company computer that uses a proxy. The error I'm encountering is as follows: ERROR Create Databricks Environment 3:1 Execute failed: Could not open the client transport with JDBC URI: jdbc:hive2://adb-...
- 3256 Views
- 2 replies
- 0 kudos
I am facing an issue while calling dbutils.notebook.run() inside PySpark streaming with a concurrent executor. At first the error is "pyspark.sql.utils.IllegalArgumentException: Context not valid. If you are calling this outside the main thread, you mus...
Latest Reply
The error message you're encountering in PySpark when using dbutils.notebook.run() suggests that the context in which you are attempting to call the run() method is not valid. PySpark notebooks in Databricks have certain requirements when it comes to...
1 More Replies
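For reference: the usual workaround is to keep the `dbutils.notebook.run()` calls on the driver's main thread and use the thread pool only for work that does not touch the notebook context. A sketch of that shape, with the runner injected as a parameter so the pattern can be exercised off-platform (on Databricks you would pass `dbutils.notebook.run`):

```python
from concurrent.futures import ThreadPoolExecutor

def prepare_args(path):
    # Placeholder for per-notebook prep that is safe to parallelize
    return {"source": path}

def run_notebooks(paths, run_fn, timeout=600):
    """Parallelize the prep in worker threads, but invoke run_fn
    (e.g. dbutils.notebook.run) only from the calling/main thread."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        arg_sets = list(pool.map(prepare_args, paths))
    # Sequential, main-thread calls avoid the "Context not valid" error
    return [run_fn(path, timeout, args) for path, args in zip(paths, arg_sets)]
```

This gives up parallelism on the notebook runs themselves, but keeps the thread-affine notebook context valid.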
- 718 Views
- 0 replies
- 0 kudos
Hey all, I've been using a voice-cloning AI and it's working well. I'm thinking of using Databricks to build a machine learning model for speech tech. I want to start with personal content creation. Any tips or advice would be great!
- 367 Views
- 0 replies
- 0 kudos
Hey community, I've been using this voice-cloning AI and it's working well. I'm thinking of using Databricks to build a machine learning model for speech tech. I want to start with personal content creation. Any tips or advice would be great!
by
Phani1
• Valued Contributor II
- 1547 Views
- 0 replies
- 0 kudos
Hi Team, Unity Catalog is not enabled in our workspace. We would like to know the billing usage information per user. Could you please help us with how to get these details (using a notebook-level script)? Regards, Phanindra
- 4858 Views
- 1 reply
- 0 kudos
from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf
from pyspark.storagelevel import StorageLevel
spark = SparkSession.builder.appName('TEST').config('spark.ui.port','4098').enableHiveSupport().getOrCreate()
df4 = spark.sql('...
Latest Reply
Thank you so much for taking the time to explain the concepts.
- 950 Views
- 0 replies
- 0 kudos
For my exam I have to do a small project for the company I'm interning at. I am creating a data warehouse, where I will have to transfer data from another database and then transform it into a star schema. Would Databricks be good for this, or is it t...