We use a private PyPI repo (AWS CodeArtifact) to publish custom Python libraries. We make the private repo available to DBR 12.2 clusters using an init script, as prescribed here in the Databricks KB....
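As a sketch of what such an init script typically configures, here is a minimal Python helper that builds the CodeArtifact pip index URL from an authorization token. The domain, account, region, and repository values are hypothetical placeholders; in practice the token comes from `aws codeartifact get-authorization-token`, and the init script writes the resulting URL into `/etc/pip.conf` as `index-url`.

```python
def codeartifact_index_url(domain, account_id, region, repository, token):
    """Build the pip index URL for an AWS CodeArtifact repository.

    Follows the documented CodeArtifact pip endpoint shape:
    https://aws:<token>@<domain>-<account>.d.codeartifact.<region>.amazonaws.com/pypi/<repo>/simple/
    """
    return (
        f"https://aws:{token}@{domain}-{account_id}.d.codeartifact."
        f"{region}.amazonaws.com/pypi/{repository}/simple/"
    )

# Hypothetical values for illustration only; a real init script would fetch
# the token with `aws codeartifact get-authorization-token` at cluster start,
# since CodeArtifact tokens expire (12 hours by default).
url = codeartifact_index_url("mydomain", "123456789012", "us-east-1", "myrepo", "TOKEN")
print(url)
```

Because the token is short-lived, regenerating it inside the init script (rather than baking it into cluster config) keeps library installs working across cluster restarts.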
Dear all, greetings! I have been trying to run a workflow job that runs successfully when the task is created using a Notebook file from a folder in the Workspace, but when the same task's ty...
Hello Team, I am new to Databricks. Generally, where are all the logs stored in Databricks? If a job fails, I can see some error messages below the command; otherwise, in real time, how t...
We were using this method and it was working as expected in Databricks 13.3.

def read_file():
    try:
        df_temp_dlr_kpi = spark.read.load(raw_path, format="csv", schema=kpi_schema)...
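A stdlib-only stand-in for the pattern the snippet shows: wrap the read in try/except so a missing or unreadable path is handled explicitly. Here `csv.DictReader` stands in for `spark.read.load`, and `raw_path` points at a temporary file created purely for illustration, not the snippet's real path or schema.

```python
import csv
import os
import tempfile

def read_file(raw_path):
    """Read a CSV file, returning rows as dicts, or None if the path is unreadable.

    Mirrors the try/except shape of the snippet's read_file(), with
    csv.DictReader standing in for spark.read.load(...).
    """
    try:
        with open(raw_path, newline="") as f:
            return list(csv.DictReader(f))
    except OSError as exc:
        print(f"failed to read {raw_path}: {exc}")
        return None

# Illustration: write a tiny CSV to a temp file, then read it back.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as tmp:
    tmp.write("dealer,kpi\nd1,42\n")
    raw_path = tmp.name

rows = read_file(raw_path)      # successful read
os.unlink(raw_path)
missing = read_file("/no/such/path.csv")  # handled failure, returns None
```

The same shape applies with Spark, except that a bad path surfaces as an `AnalysisException` rather than an `OSError`, so the except clause would catch that instead.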
Hello, I was experimenting with an ML model, trying different parameters and checking the results. However, the important part of this code is contained in a couple of cells (say cell # 12, 13 &...
Hi, I'm trying to set up a local development environment using Python / VS Code / Poetry. Also, linting is enabled (the Microsoft Pylance extension) and python.analysis.typeCheckingMode is set t...
I am trying to get Azure Databricks cluster metrics such as memory utilization, CPU utilization, memory swap utilization, and free file system space using the REST API, by writing PySpark code. It's showing alwa...
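For context on what the REST API does and does not return here: the documented Clusters API endpoint `GET /api/2.0/clusters/get` reports cluster state and configuration, but hardware utilization metrics are surfaced in the cluster metrics UI rather than through this endpoint. Below is a sketch that builds (without sending) an authenticated request to that endpoint; the workspace URL, token, and cluster id are hypothetical placeholders.

```python
import urllib.parse
import urllib.request

def build_cluster_get_request(workspace_url, token, cluster_id):
    """Build (but do not send) an authenticated request to the Clusters API.

    GET /api/2.0/clusters/get returns cluster state and configuration;
    utilization metrics are shown in the cluster UI rather than here.
    """
    query = urllib.parse.urlencode({"cluster_id": cluster_id})
    url = f"{workspace_url}/api/2.0/clusters/get?{query}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# Hypothetical workspace URL, token, and cluster id.
req = build_cluster_get_request(
    "https://adb-1234567890123456.7.azuredatabricks.net",
    "TOKEN",
    "0101-000000-abcd123",
)
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would return the cluster's JSON description; a bearer token in the `Authorization` header is the standard way Databricks REST calls authenticate.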
I am looking for a possible way to get the autoscaling history data for SQL Serverless Warehouses using the API or logs. I want something like what we see in the monitoring UI.
When I attach a notebook to my cluster and run a cell, the notebook is detached. Cell execution states "Waiting for compute to be ready", then the following message is shown: Notebook detached Excepti...