Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.
Hello, We have recently created a notebook to allow users to insert/update values in specific tables. The logic behind the update statements lives in a separate notebook that users don't have access to. However, we would like to know if ...
When you want users to perform some write action (for example, change parameters), it is usually easiest to build a small app in Azure PowerApps, save those values, and extract them to a table in Delta Lake (so your notebooks will take values...
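A minimal sketch of the last step, assuming the PowerApp has already landed the user-entered values in a Delta table; the table and column names below are hypothetical:

from pyspark.sql import functions as F

# Hypothetical Delta table the PowerApp writes user-entered parameters to.
# `spark` is the notebook's built-in SparkSession.
params = spark.read.table("config.user_update_parameters")

# Pick the most recent row so the restricted notebook always reads the
# latest user input instead of taking values from users directly.
latest = (
    params.orderBy(F.col("updated_at").desc())
          .limit(1)
          .collect()[0]
)
print("Applying user-supplied value:", latest["value"])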
Hi, I'm using DBR 13.3 LTS ML and I want to set up a webhook trigger. I'm following the example notebook at https://learn.microsoft.com/en-us/azure/databricks/_extras/notebooks/source/mlflow/mlflow-model-registry-webhooks-python-client-example.html...
Hi @FranPérez ,
The issue is that databricks-registry-webhooks has a databricks.proto file that collides with mlflow. Here is the fix:
%pip install databricks-registry-webhooks mlflow==2.2.2
I also posted the fix on StackOverflow: https://stackoverfl...
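With mlflow pinned as above, creating the webhook should look roughly like the linked example notebook; the model name, URL, and secret below are placeholders:

from databricks_registry_webhooks import RegistryWebhooksClient, HttpUrlSpec

# Placeholder endpoint and secret; the secret is used to sign the payload.
http_url_spec = HttpUrlSpec(
    url="https://example.com/webhook",
    secret="my-secret-string",
)

# Fire on new versions of a (placeholder) registered model.
webhook = RegistryWebhooksClient().create_webhook(
    model_name="my-registered-model",
    events=["MODEL_VERSION_CREATED"],
    http_url_spec=http_url_spec,
    description="Trigger on new model versions",
    status="ACTIVE",
)
print(webhook)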
Hi all, I've been training at https://partner-academy.databricks.com/ and I see this tab for My Gamification; however, whenever I open it, it always says 0 badges, 0 points. I have completed a number of courses, but there's no change. Is this feature...
When referencing a Technical Blog in a LinkedIn post, the image of the author is displayed instead of the image of the blog itself, which is annoying. Example LinkedIn post: https://www.linkedin.com/posts/axelschwanke_star-struct-the-secret-life-of-t...
Hi! I am following this guide: https://learn.microsoft.com/en-us/azure/databricks/security/network/serverless-network-security/serverless-private-link However, in Step 3: Create private endpoint rules, number 6, there is no option for me to Add a private...
import whisper
import ffmpeg

model = whisper.load_model("base")
transcription = model.transcribe("dbfs:/FileStore/Call_Center_Conversation__03.mp3")
print(transcription["text"])

FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'

I have import...
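A hedged sketch of a likely fix, assuming the error comes from the missing ffmpeg binary and the dbfs:/ URI: whisper shells out to the ffmpeg executable (which the Python ffmpeg package does not provide) and expects a local POSIX path, and DBFS files are mounted under /dbfs on the driver:

# Install the ffmpeg binary first, in a separate %sh cell:
#   %sh apt-get update && apt-get install -y ffmpeg
import whisper

model = whisper.load_model("base")
# Use the /dbfs local mount instead of the dbfs:/ URI so ffmpeg can open the file.
transcription = model.transcribe("/dbfs/FileStore/Call_Center_Conversation__03.mp3")
print(transcription["text"])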
Hello everyone, I would like to know whether it is possible to use PySpark to transform a flat file stored in a directory in Azure Blob Storage into bytes so I can parse it, while using the connection already integrated into the cluster betwee...
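One way to do this, assuming the cluster's existing storage credentials, is Spark's built-in binaryFile source, which returns each file's raw bytes in a content column; the abfss path below is a placeholder:

# Read the file's raw bytes, reusing the cluster's existing Blob storage auth.
df = (
    spark.read.format("binaryFile")
         .load("abfss://mycontainer@mystorageaccount.dfs.core.windows.net/path/flatfile.dat")
)

row = df.select("path", "content").first()
raw = bytes(row["content"])      # the whole file as bytes
text = raw.decode("utf-8")       # then parse however the flat-file format requires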
Hi everyone, I want to trigger a run for a job using an API call. Here's my code:

import shlex
import subprocess

def call_curl(curl):
    args = shlex.split(curl)
    process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout...
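An alternative sketch that skips curl entirely and calls the Jobs 2.1 run-now endpoint with requests; the workspace URL, token, and job ID below are placeholders:

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"   # placeholder
TOKEN = "<personal-access-token>"                         # placeholder

resp = requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": 123},                                 # placeholder job ID
)
resp.raise_for_status()
print(resp.json())   # includes the run_id of the triggered run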
I have an AWS-based Databricks account with a few workspaces and an Azure Databricks workspace. How do I combine them into one account? I am particularly interested in setting up a single billing drop with all my Databricks costs.
Hi @BillGuyTheScien, greetings!
Currently, we do not have a feature to combine usage from multiple clouds into a single account. There is a feature request for this, and it is being considered for the future; there is currently no ETA.
You can bro...
Hello everyone! I was reading VCF files using the glow library (Maven: io.projectglow:glow-spark3_2.12:1.2.1). The latest version of this library only works with Spark version 3.3.2, so if I need to use a newer runtime with a more recent Spark versi...
When I was trying to create a catalog, I got an error saying to mention the Azure storage account and storage container in the following query:

CREATE CATALOG IF NOT EXISTS Databricks_Anu_Jal_27022024
MANAGED LOCATION 'abfss://<databricks-workspace-stack-anu...
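For comparison, a hedged sketch of the expected URI shape: MANAGED LOCATION needs both the container and the storage account's dfs endpoint, and Unity Catalog must already have an external location covering that path. The storage names below are placeholders:

spark.sql("""
    CREATE CATALOG IF NOT EXISTS Databricks_Anu_Jal_27022024
    MANAGED LOCATION 'abfss://mycontainer@mystorageaccount.dfs.core.windows.net/managed-catalog'
""")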
Hi community, I wanted to understand if there is a way to pass config values to the Spark session at runtime rather than using databricks-connect configure to run Spark code. One way I found is described here: https://stackoverflow.com/questions/63088121/config...
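A sketch of the runtime-config approach from that Stack Overflow thread, assuming the legacy databricks-connect client; the spark.databricks.service.* keys mirror what databricks-connect configure writes, and all values below are placeholders:

from pyspark.sql import SparkSession

# Legacy Databricks Connect reads its connection settings from the Spark conf,
# so setting them here avoids the interactive `databricks-connect configure` step.
spark = (
    SparkSession.builder
    .config("spark.databricks.service.address", "https://<workspace-url>")
    .config("spark.databricks.service.token", "<personal-access-token>")
    .config("spark.databricks.service.clusterId", "<cluster-id>")
    .getOrCreate()
)
print(spark.range(5).count())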
Why might this be erroring out? My understanding is that SparkR is built into Databricks.
Code:
library(SparkR, include.only=c('read.parquet', 'collect'))
sparkR.session()
Error:
Error in sparkR.session(): could not find function "sparkR.session"
We have enabled workspace-level SSO, and have the v2.0 version of Databricks SSO configured using Azure Entra ID groups and Azure applications. The values in both Databricks and the Azure application match, but we still get an SSO auth failed error. How can this be resolved? SAML tracer...