Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
How to connect your Azure Data Lake Storage to Azure Databricks (Standard Workspace, Private Link): in your storage account, go to "Networking" -> "Private endpoint connections" and click "Add Private Endpoint". It is important to add private links in ...
If I create a job from the web UI and I select Python wheel, I can add kwargs parameters. Judging from the generated JSON job description, they appear under a section named `namedParameters`. However, if I use the REST APIs to create a job, it appears...
@GabrieleMuciacc, in the case of a serverless compute job this can be passed as an external dependency; you can't use libraries. "tasks": [{ "task_key": task_id, "spark_python_task": { "python_file": py_file, ...
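For reference, a minimal sketch of creating such a job through the Jobs 2.1 REST API, where the kwargs go under `named_parameters` (snake_case) inside `python_wheel_task`; the host, token, package name, entry point, and cluster spec below are hypothetical placeholders:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # hypothetical workspace URL
TOKEN = "<personal-access-token>"                        # hypothetical token

job_spec = {
    "name": "wheel-job-with-kwargs",
    "tasks": [
        {
            "task_key": "main",
            "python_wheel_task": {
                "package_name": "my_package",   # hypothetical wheel package
                "entry_point": "main",          # hypothetical entry point
                # kwargs go here as a string-to-string map
                "named_parameters": {"env": "dev", "retries": "3"},
            },
            "new_cluster": {                    # hypothetical cluster spec
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 1,
            },
        }
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
print(resp.json())  # expect {"job_id": ...} on success
```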
Hello, I am trying to serve a model endpoint (using the Databricks GUI) for a model that was successfully logged to the Model Registry. However, the endpoint creation failed with the following errors: endpoint logs with error messages; endpoint events with...
Hi @Nikhil Gajghate, we haven't heard from you since the last response from @Kaniz Fatma, and I was checking back to see if her suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpful to o...
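One way to surface the full error payload when the GUI hides it: a minimal sketch of creating the same endpoint through the Serving Endpoints REST API instead (hypothetical host, token, endpoint name, and registered model name; assumes the current `served_entities` request shape):

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical host
TOKEN = "<personal-access-token>"                             # hypothetical token

endpoint_spec = {
    "name": "my-model-endpoint",  # hypothetical endpoint name
    "config": {
        "served_entities": [
            {
                "entity_name": "my_registered_model",  # hypothetical model name
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ]
    },
}

resp = requests.post(
    f"{HOST}/api/2.0/serving-endpoints",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=endpoint_spec,
)
# a non-2xx status here returns the error body directly, which can be
# easier to read than the GUI's event log
print(resp.status_code, resp.json())
```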
Hey everyone! I'm trying to access table rows using the Databricks API. Testing with Insomnia or Postman, the error is the same: { "error_code": "ENDPOINT_NOT_FOUND", "message": "Endpoint not found for /2.0/sql/statements/" }. Below is my request (for ...
Hi @Denis Macedo, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers ...
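For comparison, a minimal sketch of calling the SQL Statement Execution API as documented, with a hypothetical host, token, and warehouse ID; note the `/api` prefix in the URL and that a running SQL warehouse is required:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # hypothetical host
TOKEN = "<personal-access-token>"                        # hypothetical token

resp = requests.post(
    f"{HOST}/api/2.0/sql/statements",  # /api prefix, no trailing slash
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "statement": "SELECT * FROM samples.nyctaxi.trips LIMIT 10",
        "warehouse_id": "<warehouse-id>",  # hypothetical SQL warehouse ID
    },
)
# the response carries a statement_id and status; result data is included
# once the statement finishes
print(resp.json())
```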
Hi everyone, I'm getting the exact error shown below when trying to create an AutoML experiment. This happens both through the UI and through the API, with my own code and with Databricks' example code. I've tried looking into this but had no luck finding an...
Hi @Diogo Almeida, hope all is well! Just wanted to check in to see whether you were able to resolve your issue; if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Than...
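For anyone comparing against a working baseline, a minimal sketch of the AutoML Python API run from a notebook on an ML runtime cluster; the training table and target column are hypothetical:

```python
# requires a Databricks ML runtime; `spark` is the notebook's SparkSession
from databricks import automl

train_df = spark.table("my_catalog.my_schema.training_data")  # hypothetical table

summary = automl.classify(
    dataset=train_df,
    target_col="label",    # hypothetical target column
    timeout_minutes=30,
)
# URI of the best model logged to MLflow during the experiment
print(summary.best_trial.model_path)
```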
I would like to download a file from DBFS using the FileStore endpoint. If the file or folder name contains multibyte characters, the file path cannot be specified due to URL encoding and an error occurs. Question 1: If a file or folder name contains mul...
Hi, the Databricks CLI can be used to download a file from DBFS: https://docs.databricks.com/dev-tools/cli/index.html. Also, you can refer to https://stackoverflow.com/questions/49019706/databricks-download-a-dbfs-filestore-file-to-my-local-machine, which ...
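As an alternative to the FileStore URL (where multibyte names must be percent-encoded by hand), a minimal sketch using the DBFS `/api/2.0/dbfs/read` endpoint, where the `requests` library percent-encodes the query parameters for you; the host, token, and multibyte path are hypothetical:

```python
import base64
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # hypothetical host
TOKEN = "<personal-access-token>"                        # hypothetical token
SRC = "/FileStore/レポート.csv"                           # hypothetical multibyte path

out = bytearray()
offset = 0
while True:
    resp = requests.get(
        f"{HOST}/api/2.0/dbfs/read",
        headers={"Authorization": f"Bearer {TOKEN}"},
        # requests percent-encodes the params, including multibyte characters,
        # so the DBFS path can be passed as-is
        params={"path": SRC, "offset": offset, "length": 1024 * 1024},
    )
    resp.raise_for_status()
    chunk = resp.json()                       # {"bytes_read": ..., "data": base64}
    out.extend(base64.b64decode(chunk["data"]))
    offset += chunk["bytes_read"]
    if chunk["bytes_read"] == 0:              # end of file reached
        break

with open("report.csv", "wb") as f:          # hypothetical local file name
    f.write(bytes(out))
```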
I'm using a Databricks SQL endpoint and attempting to create an external table on top of an existing Parquet file. I can do this as long as my table definition does not include a reference to a DECIMAL or TIMESTAMP/DATE datatype. For example, this works: C...
Hey there @T A, hope everything is going great! Does @Kaniz Fatma's response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? If not, would you be happy to give us more info...
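To make the failing case concrete, a minimal sketch of the kind of definition described above, with hypothetical names and storage location; per the post, the same statement without the DECIMAL/TIMESTAMP columns succeeds:

```python
# run from a notebook; `spark` is the session. The SQL is the same statement
# one would submit through the SQL endpoint.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_ext (
        id      INT,
        amount  DECIMAL(18, 2),   -- column type reported as problematic
        sold_at TIMESTAMP         -- likewise
    )
    USING PARQUET
    LOCATION 'abfss://container@account.dfs.core.windows.net/sales/'
""")
```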
Hi, I have a Data Lake Gen2 account with a VNet and a private endpoint, and I have a Databricks workspace in the same VNet. I am trying to access the data lake from Databricks, but I keep getting an error when I allow access only from selected networks on the data lake. I get the error w...
Hello, we are trying to load a Delta table from an Azure Data Lake Storage container into Power BI using the Databricks SQL endpoint. We configured the SQL workspace to have access to the ADLS Delta table and created a view; we are able to query t...
@Marius Condescu, could you please include the Spark config below and try:
spark.hadoop.fs.azure.account.oauth.provider.type.ariaprime.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
spark.hadoop.fs.azure.account.auth.typ...
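Spelled out, a minimal sketch of the full set of ABFS OAuth settings the reply refers to, set from a notebook session; the service principal values and secret scope are hypothetical, `ariaprime` is the storage account named in the thread, and as cluster-level Spark config the same keys take the `spark.hadoop.` prefix:

```python
# `spark` and `dbutils` are the notebook's built-ins
spark.conf.set("fs.azure.account.auth.type.ariaprime.dfs.core.windows.net", "OAuth")
spark.conf.set(
    "fs.azure.account.oauth.provider.type.ariaprime.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(
    "fs.azure.account.oauth2.client.id.ariaprime.dfs.core.windows.net",
    "<application-id>",  # hypothetical service principal app ID
)
spark.conf.set(
    "fs.azure.account.oauth2.client.secret.ariaprime.dfs.core.windows.net",
    dbutils.secrets.get(scope="my-scope", key="sp-secret"),  # hypothetical scope/key
)
spark.conf.set(
    "fs.azure.account.oauth2.client.endpoint.ariaprime.dfs.core.windows.net",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
)
```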
The Delta tables after ETL are stored in S3 in CSV or Parquet format, so the question now is how to allow a Databricks SQL endpoint to run queries over the files saved in S3.
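One approach is to register the files as an external table, which the SQL endpoint can then query like any other table; a minimal sketch with a hypothetical bucket and path, assuming the endpoint's credentials (e.g. an instance profile) already grant read access to the bucket:

```python
# the SQL is what would run on the endpoint; wrapped in spark.sql here
spark.sql("""
    CREATE TABLE IF NOT EXISTS etl_output
    USING PARQUET
    LOCATION 's3://my-bucket/etl-output/'
""")
```

Once the table is registered, `SELECT * FROM etl_output` works from the SQL endpoint without referencing the S3 path directly.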
I am trying to run a command to retrieve change data through a SQL endpoint. It throws the error below: "The input query contains unsupported data source(s). Only csv, json, avro, delta, parquet, orc, text data sources are supported on Databricks SQL." But th...
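For reference, a minimal sketch of reading a Delta table's change data feed with the `table_changes()` table-valued function; the table name and starting version are hypothetical, and it assumes `delta.enableChangeDataFeed` was enabled on the table:

```python
# the inner SQL can also be run directly on a SQL endpoint
changes = spark.sql("""
    SELECT *
    FROM table_changes('my_schema.my_table', 2)  -- changes since version 2
""")
changes.show()
```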