I am running an hourly job on a cluster using a p3.2xlarge GPU instance, but sometimes the cluster couldn't start due to instance unavailability. I wonder if there is any fallback mechanism to, for example, try a different instance type if one is not availabl...
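One client-side workaround is a fallback loop that retries cluster creation with alternative instance types. A minimal sketch of the pattern, with a stubbed launch function standing in for the real Clusters API call (the instance type list and `try_launch` are illustrative assumptions, not Databricks APIs):

```python
# Sketch of a fallback loop over instance types. The launch call is stubbed;
# in practice it would call the Databricks Clusters API and inspect the result.
UNAVAILABLE = {"p3.2xlarge"}  # pretend this type has no capacity right now

def try_launch(instance_type):
    """Stub for a cluster-create call; returns True on success."""
    return instance_type not in UNAVAILABLE

def launch_with_fallback(instance_types):
    for itype in instance_types:
        if try_launch(itype):
            return itype  # launched successfully on this type
    raise RuntimeError("no instance type had capacity")

chosen = launch_with_fallback(["p3.2xlarge", "g4dn.xlarge", "g5.xlarge"])
```

Fleet instance types (mentioned in the replies) achieve a similar effect server-side, letting AWS pick from several families and AZs.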
(AWS only) For anyone experiencing capacity-related cluster launch failures on non-GPU instance types, AWS Fleet instance types are now GA and available for clusters and instance pools. They improve the chance of a successful cluster launch by allowi...
As per this thread, Databricks now integrates with the EC2 CreateFleet API, which allows customers to create Databricks pools and get EC2 instances from multiple AZs and multiple instance families and sizes. However, in the Databricks UI you cannot select mo...
Fleet instances on Databricks are now GA and available in all AWS workspaces - you can find more details here: https://docs.databricks.com/compute/aws-fleet-instances.html
Hi All, I hope you're super well. I need your recommendations and a solution for my problem. I am using a Databricks instance DS12_v2, which has 28 GB RAM and 4 cores. I am ingesting 7.2 million rows into a SQL Server table, and it is taking 57 min - 1 hou...
You can try to use BULK INSERT: https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver16. Also, using Data Factory instead of Databricks for the copy can be helpful.
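Row-by-row inserts are usually the bottleneck here; batching them cuts round trips drastically, which is what the JDBC `batchsize` option controls. A minimal, runnable illustration of the idea using sqlite3 as a stand-in for SQL Server (table name and batch size are arbitrary):

```python
import sqlite3

# Stand-in for SQL Server: insert rows in batches instead of one at a time.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL)")

rows = [(i, i * 1.5) for i in range(10_000)]
BATCH = 1_000  # analogous to the JDBC 'batchsize' writer option

for start in range(0, len(rows), BATCH):
    # executemany sends the whole slice in one statement execution
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows[start : start + BATCH])
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
```

In Spark, the equivalent knobs on the JDBC writer are `.option("batchsize", 10000)` together with a higher `numPartitions`, so batched writes also happen in parallel across the cluster.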
Hi! We've recently provisioned an Azure Databricks workspace and started building our pipelines. Do we qualify as Databricks 'customers' who have free access to all self-paced content on Databricks Academy? If so, how do we access it? We don't have a...
They changed the registration process and added 'Additional Fields' section, where you can provide your company email address, that you use in Azure Databricks. This worked automatically for me and I can access the self-paced trainings for free now.
Hi All, I want to run an ETL pipeline in a sequential way in my DB notebook. If I run it without resetting the Spark session or restarting the cluster, I get a dataframe key error. I think this might be because of the Spark cache, because if I r...
Is there a solution to the above problem? I would also like to restart the SparkSession to free my cluster's resources, but when calling spark.stop() the notebook automatically detaches and the following error occurs: The Spark context has stopped and the dri...
Hello, our Spark jobs stream messages from Event Hub, then transform them, and finally the messages are persisted in storage. We plan to exercise cluster configurations for these jobs in order to find the optimal one and procure Azure reservations. Furthermore, ...
Dear Databricks Certification Team, Unfortunately, I was unable to take the exam as scheduled due to an unforeseen power breakdown in my area. The power outage occurred just before the exam, rendering me unable to access the necessary resources to com...
Databricks docs here: https://docs.databricks.com/notebooks/notebook-isolation.html state that "Every notebook attached to a cluster has a pre-defined variable named spark that represents a SparkSession." What if 2 users run the same notebook on the sa...
The Spark session is isolated at the notebook level, not at the user level. So, two users accessing the same notebook will be using the same Spark session.
Hello, I am experiencing issues with importing the schema file I created from the utils repo. This is the logic we use for all ingestion, and all other schemas live in this repo: utills/schemas. I am unable to access the file I created for a new ingestion pipe...
@Debayan Mukherjee Hello, thank you for your response. Please let me know if these are the correct commands to access the file from the notebook. I can see the files in the repo folder, but I just noticed that the size of the file I am trying to access is 0 b...
I have tried to read data from Databricks using the following Java code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

String TOKEN = "token...";
String url = "url...";
Properties properties = new Properties();
properties.setProperty("user", "token"); // Databricks JDBC: the user is the literal string "token"
properties.setProperty("PWD", TOKEN);    // the personal access token goes in as the password
Connection con = DriverManager.getConnection(url, properties);
@Binesh J - The issue could be that the data type of the column is not compatible with the getString() method in line #17. Use the getObject() method to retrieve the value as a generic object and then convert it to a string.
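The same pattern applies in any driver: fetch the value generically, then convert explicitly. A small runnable illustration with Python's sqlite3 (analogous to getObject() followed by a string conversion; the table and column are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")

# Generic fetch returns the column's native type (here an int), not a string.
value = conn.execute("SELECT n FROM t").fetchone()[0]
as_text = str(value)  # explicit conversion, instead of assuming it is already a string
```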
I am trying to read change data from a Snowflake query into a dataframe using Databricks. The same query works in Snowflake but not in Databricks. Timezones and formats for the timestamp are the same on both sides. I am trying to implement change trackin...
Your format is wrong; that's why you got an error. Try this:

SELECT * FROM TestTable CHANGES(INFORMATION => DEFAULT) AT(TIMESTAMP => TO_TIMESTAMP_TZ('2023-05-03 00:43:34.885','YYYY-MM-DD HH24:MI:SS.FF'))
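The root cause is simply a mismatch between the timestamp literal and the format string. A quick way to sanity-check that a layout matches the literal is to parse it locally; here with Python's strptime, whose directives differ from Snowflake's but express the same layout:

```python
from datetime import datetime

# Snowflake's 'YYYY-MM-DD HH24:MI:SS.FF' corresponds to this strptime layout.
# %f accepts 1-6 fractional digits, so '885' parses as 885000 microseconds.
ts = datetime.strptime("2023-05-03 00:43:34.885", "%Y-%m-%d %H:%M:%S.%f")
```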
I would like to set permissions on jobs, such as granting CAN_VIEW or CAN_MANAGE to specific groups, for jobs that run from ADF. It appears that we need to set permissions in the pipeline where the job runs from ADF, but I could not figure it out.
Thank you @Debayan Mukherjee and @Vidula Khanna for getting back to me, but it didn't help my case. I am specifically looking to set permissions on the job so that our team can see the job cluster, including the Spark UI, with that privilege. ...
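One route that works regardless of how the job is triggered is the Databricks Permissions REST API for jobs (PUT/PATCH on /api/2.0/permissions/jobs/{job_id}). A sketch of building the request payload; the group name is a placeholder, and actually sending the request is left out since it needs a workspace and token:

```python
import json

def build_job_permissions(group_name, level):
    """Payload shape for the Databricks jobs Permissions API (assumed 2.0 endpoint)."""
    return {
        "access_control_list": [
            {"group_name": group_name, "permission_level": level}
        ]
    }

# Grant CAN_VIEW to a (hypothetical) group that should see the job cluster / Spark UI.
payload = build_job_permissions("data-team", "CAN_VIEW")
body = json.dumps(payload)  # this would be the request body of the PATCH call
```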
I have a job with multiple tasks running asynchronously, and based on runtime I don't think it is leveraging all the nodes on the cluster. I open the Spark UI for the cluster, check the executors, and don't see any tasks on my worker nodes. How ca...
Hi @Dave Hiltbrand​ Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question. Thanks.
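One thing worth checking: if the independent actions are all launched from a single driver thread, they run one after another, and executors can sit idle. Submitting them from multiple threads lets the scheduler overlap them. A runnable sketch of the pattern, with stub work standing in for real Spark actions (table names and `run_action` are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def run_action(name):
    """Stub for an independent Spark action (e.g. the write for one table)."""
    return f"{name}: done"

tables = ["orders", "customers", "events"]

# Each thread submits one action; on a real cluster the scheduler can then
# run their stages concurrently across the executors.
with ThreadPoolExecutor(max_workers=len(tables)) as pool:
    results = list(pool.map(run_action, tables))
```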