Is there a working setup for exporting metrics to CloudWatch while using custom Docker images for cluster creation? I've tried to set up the CloudWatch agent manually, but launching `amazon-cloudwatch-agent-ctl` in the bootstrap script fails wi...
We do not support Ganglia with custom Docker images either, but let me verify whether we support CloudWatch for them. Sorry for the inconvenience, @Sergey Ivanychev​
Hello Community, I was advised by the Help Desk to post this issue in the community. We've contacted all the relevant teams: billing, help desk, and sales, but the issue hasn't been solved yet. My team (Ars Praxia) has an issue with the sudden cancellation of s...
Hi @Jayeon Jang​, thank you for reaching out! I understand how frustrating this must have been for you. We value our customers’ time, and this should not have happened. I appreciate you making us aware of your negative experience. I will relay this mess...
We need to execute a long-running exe on a Windows machine and are thinking of ways to integrate it with the workflow. The plan is to include the exe as a task in the Databricks workflow. We are considering a couple of approaches: Create a DB table and...
About Cloud Fetch, mentioned in this article: https://databricks.com/blog/2021/08/11/how-we-achieved-high-bandwidth-connectivity-with-bi-tools.html. Are there any public APIs that can be called directly, without ODBC or JDBC drivers? Thanks.
Hi, I would like to deploy Databricks workspaces to build a Delta lakehouse serving both scheduled jobs/processing and ad-hoc/analytical querying workloads. Our Databricks users comprise both data engineers and data analysts. In terms of requirements...
I have a Databricks job on the E2 architecture in which I want to retrieve the workspace instance name within a notebook running in a job cluster context, so that I can use it further in my use case. While the call dbutils.notebook.entry_point.getDbutils(...
Found a workaround for the Azure Databricks question above: dbutils.notebook.getContext().apiUrl returns the regional URI, but this forwards to the workspace-specific one if the workspace id is specified with o=.
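A minimal Python sketch of that workaround, assuming it runs inside a notebook where dbutils is available; note that apiUrl() and tags() are internal context accessors, not a public API, so they may vary by runtime:

# Sketch: build the workspace-specific URL from the regional API URL.
# apiUrl() / tags() are internal accessors exposed via py4j (assumption:
# they return Scala Option/Map, hence .getOrElse(...) / .apply(...)).
ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
api_url = ctx.apiUrl().getOrElse(None)   # regional URI
org_id = ctx.tags().apply("orgId")       # workspace (org) id
workspace_url = f"{api_url}/?o={org_id}" # forwards to the workspace-specific URL
print(workspace_url)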
I don't think it will be possible. However, you can raise a feature request with the requirements via our Ideas Portal so that it might be considered in the future: https://docs.databricks.com/resources/ideas.html
As the title states, I would like to hear how others have set up an AWS S3 bucket to source data with Auto Loader while supporting the ability to archive files into Glacier objects after a certain period of time. We currently have about 20 millio...
@Ken Pendergast​ To set up Databricks with Auto Loader, please follow this document: https://docs.databricks.com/spark/latest/structured-streaming/auto-loader.html. Fetching data from Glacier is not supported; however, you can try one of the follo...
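For reference, a minimal Auto Loader sketch along the lines of the linked doc (the bucket paths, file format, and table name are illustrative placeholders, not from this thread):

# Sketch: incrementally ingest new files from S3 with Auto Loader.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")  # placeholder format
      .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/")
      .load("s3://my-bucket/landing/"))

(df.writeStream
   .option("checkpointLocation", "s3://my-bucket/_checkpoints/landing/")
   .toTable("bronze.landing"))

With tens of millions of objects, file notification mode (cloudFiles.useNotifications = true) is usually preferable to repeatedly listing the bucket.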
Short version: I need a way to take only the most recent record from a variable number of tables in a stream. This is a relatively easy problem in SQL or Python pandas (group by and take the newest), but in a stream I keep hitting blocks. I could do i...
Did you try storing it all to a Delta table with a MERGE INTO [1]? You can optionally specify a condition on WHEN MATCHED such that you only update if the timestamp is newer. [1] https://docs.databricks.com/spark/latest/spark-sql/language-manual/del...
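A sketch of that pattern applied inside a stream via foreachBatch (the names stream_df, target_table, id, and event_ts are illustrative placeholders; spark is the notebook's session):

from delta.tables import DeltaTable
from pyspark.sql import functions as F, Window

def upsert_newest(batch_df, batch_id):
    # Keep only the latest row per key within the micro-batch.
    w = Window.partitionBy("id").orderBy(F.col("event_ts").desc())
    latest = (batch_df
              .withColumn("_rn", F.row_number().over(w))
              .filter("_rn = 1")
              .drop("_rn"))
    # Merge: update only when the incoming row is newer, insert otherwise.
    (DeltaTable.forName(spark, "target_table").alias("t")
     .merge(latest.alias("s"), "t.id = s.id")
     .whenMatchedUpdateAll(condition="s.event_ts > t.event_ts")
     .whenNotMatchedInsertAll()
     .execute())

(stream_df.writeStream
 .foreachBatch(upsert_newest)
 .start())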
dbutils.widgets.text('table', 'product')

%sql
select *
from ds_data.$table

Hello, the above will work. But how can I do something like:

dbutils.widgets.text('table', 'product')

%sql
select *
from ds_data.$table_v3

In that example, $table is still my ...
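One robust workaround, sketched here under the assumption that building the query in Python is acceptable, is to read the widget value with dbutils.widgets.get and interpolate it yourself, which sidesteps $table_v3 being parsed as a single widget name:

# Sketch: interpolate the widget value in Python instead of relying on
# $-substitution inside the SQL cell.
dbutils.widgets.text('table', 'product')
table = dbutils.widgets.get('table')
display(spark.sql(f"select * from ds_data.{table}_v3"))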
What is the best way to delete from a Delta table? In my case, I want to read a table from a MySQL database (without a soft-delete column) and then store that table in Azure as a Delta table. When the ids are equal I will update the Delta table w...
Hi, I have a similar issue, and I don't see a solution provided here. I want to perform an upsert operation, but along with the upsert, I want to delete the records which are missing in the source table but present in the target table. You can think of it as a ma...
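A hedged sketch of that full-mirror pattern, assuming a Delta Lake version that supports WHEN NOT MATCHED BY SOURCE (Delta 2.3+ / recent Databricks runtimes); the table and column names are placeholders:

# Sketch: upsert from source and delete target rows absent from the source.
spark.sql("""
    MERGE INTO target AS t
    USING source AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
    WHEN NOT MATCHED BY SOURCE THEN DELETE
""")

On older runtimes without WHEN NOT MATCHED BY SOURCE, the same effect takes two steps: the MERGE above without the last clause, followed by a DELETE of target rows whose ids do not appear in the source.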
Hi everyone, I was trying to install the newest Python version on a Databricks cluster running runtime version 7.3 LTS, but no matter how many times I try, it keeps installing Python 3.7.5. I know that Runtime version 7.3 LTS co...
Hi @Nuthan Peddapurapu​, this is not supported with Databricks Runtime 7 and above at the moment: https://docs.databricks.com/libraries/cluster-libraries.html#library
We're trying to pull a large amount of data using Databricks SQL and seem to have a bottleneck on network throughput when fetching the data. I see there's a new feature called Cloud Fetch, and this seems to be the perfect solution for our issue. But I do...
Trying to get an idea of what you are doing: do you query directly against a database of 100+ GB, or is it a Parquet/Delta source? Also, where is the result fetched to? A file download, a BI tool, ...?
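For what it's worth, a hedged sketch of pulling a large result set from Python with the databricks-sql-connector package, which in recent versions can use Cloud Fetch for large results via the use_cloud_fetch option; the hostname, HTTP path, and token below are placeholders:

# Sketch: fetch a large result set with the Databricks SQL Connector for Python.
# pip install databricks-sql-connector
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abc123",                        # placeholder
    access_token="dapi...",                                        # placeholder
    use_cloud_fetch=True,  # assumption: available in recent connector versions
) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM some_large_table")
        rows = cur.fetchmany(10_000)  # process in chunks rather than fetchall()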