- 1289 Views
- 3 replies
- 1 kudos
Hi Team, I recently had a disappointing experience while attempting my first Databricks certification exam. During the exam, I was abruptly directed to Proctor Support. The proctor asked me to show my desk and the room I was in. I complied by showing...
Latest Reply
@Kaniz_Fatma @Cert-Team @Cert-Bricks Requesting you to please look into this and update me, since it has not been resolved yet and I am not able to reschedule my exam.
2 More Replies
by
TinaN
• New Contributor III
- 6374 Views
- 3 replies
- 3 kudos
We are loading a data source into Databricks that contains columns with a 'Time' datatype. Databricks converts this to 'Timestamp', so I am researching a way to extract the time only. This is what I came up with, but the result isn't quite right. Is th...
Latest Reply
Hi @TinaN, I'll check it in the evening, but try the query below: SELECT date_format(timestamp_column, 'HH:mm:ss') AS time_part FROM your_table
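The `'HH:mm:ss'` pattern above is Spark SQL's Java-style datetime format. As a quick sanity check of what that expression returns, here is a plain-Python sketch of the same time-only extraction (`'%H:%M:%S'` is strftime's equivalent pattern; the function name is illustrative, not part of the original thread):

```python
from datetime import datetime

def time_part(ts: datetime) -> str:
    """Extract the time-of-day portion of a timestamp as a string.

    Mirrors what Spark SQL's date_format(ts, 'HH:mm:ss') returns:
    Spark's 'HH:mm:ss' pattern corresponds to strftime's '%H:%M:%S'.
    """
    return ts.strftime("%H:%M:%S")

print(time_part(datetime(2024, 7, 18, 13, 45, 30)))  # 13:45:30
```

Note the result is a string; if you need a comparable time value rather than text, casting back or comparing formatted strings lexicographically both work for 'HH:mm:ss'.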
2 More Replies
- 1815 Views
- 4 replies
- 0 kudos
We want to create a CI/CD pipeline for deploying Unity Catalog objects in order to enhance deployment ability. But how do we maintain a repository of the tables/views or any other objects created in the catalogs and schemas? Is this possible to do just l...
Latest Reply
Hi @Prasad_Koneru ,
Thank you for reaching out to our community! We're here to help you.
To ensure we provide you with the best support, could you please take a moment to review the response and choose the one that best answers your question? Your f...
3 More Replies
- 1284 Views
- 3 replies
- 0 kudos
We are trying to control access to schemas under hive_metastore, only allowing certain users to access the tables under a schema (via SQL, PySpark, and Python...). We have followed these steps in a testing schema: 1. Enable workspace table access control 2. Ru...
Latest Reply
Hi @icyapple ,
Thank you for reaching out to our community! We're here to help you.
To ensure we provide you with the best support, could you please take a moment to review the response and choose the one that best answers your question? Your feedba...
2 More Replies
- 1585 Views
- 3 replies
- 1 kudos
Hello All, when I am creating an all-purpose cluster I am getting an idle time of 3 days; my cluster is terminating after 3 days. I want to make my cluster terminate after 60 min of idle time, and I want to do it globally so that in future any cluster created by...
Latest Reply
Hi @SHASHANK2 ,
Thank you for reaching out to our community! We're here to help you.
To ensure we provide you with the best support, could you please take a moment to review the response and choose the one that best answers your question? Your feedb...
2 More Replies
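One way to enforce an idle timeout globally is a cluster policy that fixes the auto-termination value, so every cluster created under that policy inherits it. A minimal sketch of such a policy definition (field names follow the Databricks cluster-policy JSON format; whether to hide the field from users is your choice):

```json
{
  "autotermination_minutes": {
    "type": "fixed",
    "value": 60,
    "hidden": true
  }
}
```

Applying the policy to all users (and restricting unrestricted cluster creation) is what makes the setting effectively global.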
by
VPara
• New Contributor
- 5842 Views
- 0 replies
- 0 kudos
Hi Team, I created a serving endpoint using mlflow.deployments -> get_deploy_client -> create_endpoint. The endpoint was created successfully, but it was not reaching the ready state, as the update failed with the error message "Exceeded maxim...
- 926 Views
- 1 reply
- 0 kudos
Hello Team, I had a pathetic experience while attempting my Databricks Professional Data Engineer certification. Abruptly, the proctor asked me to show my desk; after 30 mins of the exam, he/she asked multiple times, wasted my time, and then sus...
by
jaybi
• New Contributor III
- 1194 Views
- 2 replies
- 1 kudos
AssertionError: The Databricks Runtime is expected to be one of ['11.3.x-scala2.12', '11.3.x-photon-scala2.12', '11.3.x-cpu-ml-scala2.12'], found "15.3.x-cpu-ml-scala2.12". Please see the "Troubleshooting | Spark Version" section of the "Version In...
- 1329 Views
- 2 replies
- 2 kudos
Currently I am creating unit tests for our ETL scripts, although the tests are not able to recognize sc (SparkContext). Is there a way to mock SparkContext for a unit test? Code being tested: df = spark.read.json(sc.parallelize([data])) Error message rec...
Latest Reply
Was able to get this to work. What I had to do was instantiate the "sc" variable in the PySpark notebook. PySpark code: "sc = spark.sparkContext". Then in the pytest script we add a "@patch()" statement with the "sc" variable and create a "mock_sc" variab...
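The approach described can be sketched without a running cluster by mocking both `spark` and `sc` with `unittest.mock` (the `load_json` function and its arguments are illustrative stand-ins for the ETL code; the `@patch()` decorator variant works the same way when `sc` is a module-level global rather than a parameter):

```python
from unittest.mock import MagicMock

def load_json(spark, sc, data):
    """Toy stand-in for the ETL code under test:
    df = spark.read.json(sc.parallelize([data]))
    """
    return spark.read.json(sc.parallelize([data]))

def test_load_json_uses_sc():
    # Mock objects replace the real SparkSession and SparkContext.
    mock_spark = MagicMock()
    mock_sc = MagicMock()
    mock_sc.parallelize.return_value = "fake_rdd"

    load_json(mock_spark, mock_sc, '{"a": 1}')

    # Verify the ETL code drove Spark the way we expect.
    mock_sc.parallelize.assert_called_once_with(['{"a": 1}'])
    mock_spark.read.json.assert_called_once_with("fake_rdd")

test_load_json_uses_sc()
```

Because `MagicMock` records every call, the test asserts on interactions rather than on real DataFrame contents, which is usually enough for unit-testing ETL plumbing.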
1 More Replies
- 1796 Views
- 0 replies
- 0 kudos
Hi, we created a service principal in Databricks as per the documentation here. However, when we execute the following SQL query, we are unable to see the service principal: SHOW GRANTS testservice ON METASTORE. Error: [RequestId=564cbcf9-e8b7-476d-a4db-...
- 520 Views
- 0 replies
- 0 kudos
My Databricks Certified Data Engineer Associate exam got suspended on 18 July 2024. I was continuously in front of the camera when an alert appeared, and then my exam resumed. Later a support person told me that my exam got suspended. I don't kn...
- 634 Views
- 1 reply
- 1 kudos
I started my exam, but it said it had been suspended due to a technical issue, though I had checked all prerequisites and system checks. I have already raised a ticket; please resolve this issue as early as possible. Ticket no.: #00504757 #reschedule #issue...
Latest Reply
@Retired_mod, please resolve this issue as early as possible. I have already raised the ticket.
- 2793 Views
- 2 replies
- 1 kudos
I am running this command: databricks bundle deploy --profile DAVE2_Dev --debug And I am getting this error: 10:13:28 DEBUG open dir C:\Users\teffs.THEAA\OneDrive - AA Ltd\Databricks\my_project\dist: open C:\Users\teffs.THEAA\OneDrive - AA Ltd\Databr...
Latest Reply
I found a link to a page saying that the databricks bundle command expects python3.exe instead of python.exe. So I took a copy of python.exe, renamed it to python3.exe, and that seems to work. Thanks for investigating though.
1 More Replies
- 1172 Views
- 3 replies
- 1 kudos
I have a table where two columns together form the primary key. I want to do a delta load when taking data from source to target. Any idea how to implement this?
Latest Reply
But that shouldn't be a problem. In the merge condition you check both keys, as in the example above. If the combination of the two keys already exists in the table, do nothing; if there is a new combination of key1 and key2, just insert it into the target table. It's t...
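In Delta Lake the merge described above would take roughly this shape (table and column names are illustrative): `MERGE INTO target t USING source s ON t.key1 = s.key1 AND t.key2 = s.key2 WHEN NOT MATCHED THEN INSERT *`. The same composite-key semantics can be demonstrated in a runnable sketch with SQLite's upsert, which behaves analogously for this insert-if-new case:

```python
import sqlite3

# Composite primary key (key1, key2), mirroring the two-key table in the question.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE target (key1 TEXT, key2 TEXT, val TEXT, PRIMARY KEY (key1, key2))"
)
conn.execute("INSERT INTO target VALUES ('a', '1', 'old')")

# Incoming delta: one existing key combination, one new one.
source = [("a", "1", "new"), ("b", "2", "fresh")]

# Existing (key1, key2) combinations are left untouched; new ones are inserted --
# the WHEN NOT MATCHED THEN INSERT behavior of the Delta MERGE above.
conn.executemany(
    "INSERT INTO target VALUES (?, ?, ?) ON CONFLICT(key1, key2) DO NOTHING",
    source,
)

rows = sorted(conn.execute("SELECT * FROM target").fetchall())
print(rows)  # [('a', '1', 'old'), ('b', '2', 'fresh')]
```

If the load should also refresh existing rows, the Delta version adds a `WHEN MATCHED THEN UPDATE SET ...` clause on the same two-key condition.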
2 More Replies
- 1812 Views
- 3 replies
- 0 kudos
I want to store the output of my cell as a text file on my local hard drive. I'm getting JSON output, and I need that JSON on my local drive as a text file.
Latest Reply
Hi @InquisitiveGeek, you can do this following the approach below: https://learn.microsoft.com/en-us/azure/databricks/notebooks/notebook-outputs#download-results
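As a complement to the download-results route, writing the JSON to a text file first keeps the formatting intact. A minimal sketch (the `result` dict and file name are illustrative; in a notebook the file lands on the driver's disk, and you then pull it to your machine via the linked download option or by writing to a DBFS/Volumes path you can download from):

```python
import json
import os
import tempfile

# Hypothetical cell output; in practice this is whatever JSON your cell produces.
result = {"status": "ok", "rows": 42}

# Serialize the JSON to a plain text file on local disk.
path = os.path.join(tempfile.gettempdir(), "output.txt")
with open(path, "w") as f:
    json.dump(result, f, indent=2)

# Read it back to confirm the file holds the JSON as text.
with open(path) as f:
    print(f.read())
```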
2 More Replies