by
yjiao
• New Contributor
- 1165 Views
- 1 replies
- 0 kudos
Dear all, I tried to use the Databricks migration tool (https://github.com/databrickslabs/migrate) to migrate objects from one Databricks instance to another. I realized that notebooks, clusters, and jobs can be migrated, but queries cannot be migrated by this to...
Latest Reply
@yjiao If you're planning to migrate from your current technology to Databricks, Travinto Technologies' Code Converter Tool is here to make the process seamless. This powerful tool enables you to migrate data, ETL workflows, and reports across platf...
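For the original question about queries: the migrate tool does not cover Databricks SQL queries, but they can be copied with the SQL Queries REST API. Below is a rough sketch, assuming the legacy /api/2.0/preview/sql/queries endpoint; the workspace URLs and tokens are placeholders, and the exact payload fields (e.g. data_source_id) should be checked against your API version.

# Minimal sketch (not part of the migrate tool): copy Databricks SQL queries
# between workspaces with the legacy Queries REST API. Hosts and tokens are placeholders.
import requests

SRC = {"host": "https://src-workspace.cloud.databricks.com", "token": "SRC_PAT"}
DST = {"host": "https://dst-workspace.cloud.databricks.com", "token": "DST_PAT"}

def _headers(ws):
    return {"Authorization": f"Bearer {ws['token']}"}

# List queries in the source workspace (paged endpoint).
resp = requests.get(f"{SRC['host']}/api/2.0/preview/sql/queries",
                    headers=_headers(SRC), params={"page_size": 100})
resp.raise_for_status()

for q in resp.json().get("results", []):
    payload = {"name": q["name"], "query": q["query"],
               "description": q.get("description", "")}
    # Recreate each query in the target workspace.
    r = requests.post(f"{DST['host']}/api/2.0/preview/sql/queries",
                      headers=_headers(DST), json=payload)
    r.raise_for_status()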
- 1807 Views
- 1 replies
- 0 kudos
Hi Team, I accidentally deleted our Databricks workspace, which had all our artefacts and control plane and was the primary resource for our team's working environment. Could anyone please help, on priority, with the recovery/restoration mechanis...
Latest Reply
Hello data warriors,
Note: users cannot recover a deleted Databricks instance directly from the Azure Portal.
A deleted Databricks instance can only be recovered by opening a support ticket, where our core engineering team will help you recover the Da...
- 325 Views
- 2 replies
- 0 kudos
We have a helper function that uses a sklearn estimator. We don't want it to be logged to MLflow. I can do: def myfunc(): import mlflow; with mlflow.autolog.ignore: # train model # use model; return predictions. But I get info prints:...
Latest Reply
mlflow.autolog(disable=True, silent=True) fixes the printing. But my other problem with setting autologging back to previous state is still unsolved. I can't find any information about that problem in the docs.
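One way to handle the "restore previous state" part is to wrap the helper in a small context manager, as in the sketch below; it simply re-enables autologging on exit (assuming it was enabled before), since MLflow does not appear to expose the prior autolog state directly.

# Rough sketch: pause autologging for a helper function and switch it back on afterwards.
import contextlib
import mlflow

@contextlib.contextmanager
def autolog_paused():
    mlflow.autolog(disable=True, silent=True)       # stop autologging and the info prints
    try:
        yield
    finally:
        mlflow.autolog(disable=False, silent=True)  # re-enable autologging on exit

def myfunc(X, y, model):
    with autolog_paused():
        model.fit(X, y)            # trained without creating an MLflow run
        return model.predict(X)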
1 More Replies
- 1446 Views
- 6 replies
- 0 kudos
Hi, I am getting the below error while deploying a Databricks bundle using an Azure DevOps release: 2024-07-07T03:55:51.1199594Z Error: terraform init: exec: "xxxx\\.databricks\\bundle\\dev\\terraform\\xxxx\\.databricks\\bundle\\dev\\bin\\terraform.exe": ...
Latest Reply
We used Git Bash for bash execution, and to set up the variables we went to Control Panel -> System -> Environment Variables.
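A scripted equivalent of the same setup (setting the variables in the pipeline instead of the Control Panel) might look like the sketch below. DATABRICKS_TF_EXEC_PATH, which points the bundle at an existing terraform.exe, is taken from the Databricks CLI docs but should be verified for your CLI version; all hosts, tokens, and paths are placeholders.

# Sketch: set the environment variables from a script, then run the bundle deploy.
import os
import subprocess

os.environ["DATABRICKS_HOST"] = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
os.environ["DATABRICKS_TOKEN"] = "MY_PAT"                                               # placeholder
os.environ["DATABRICKS_TF_EXEC_PATH"] = r"C:\tools\terraform\terraform.exe"             # existing Terraform binary

# Equivalent to running `databricks bundle deploy -t dev` in Git Bash.
subprocess.run(["databricks", "bundle", "deploy", "-t", "dev"], check=True)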
5 More Replies
- 39145 Views
- 8 replies
- 1 kudos
Hello, on Azure Databricks I'm trying to remove a folder under the Repos folder using the following command: databricks workspace delete "/Repos/xxx@xx.com". I got the following error message: databricks workspace delete "/Repos/xxxx@xx.com" Error: Folder ...
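The error above is truncated, but if it is the usual "folder is not empty" case, a recursive delete via the Workspace API is one way around it. The sketch below assumes that, with the host, token, and path as placeholders (a Git folder itself may need the Repos API instead).

# Sketch: recursive delete of a workspace folder through the Workspace API.
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
TOKEN = "MY_PAT"                                               # placeholder

resp = requests.post(
    f"{HOST}/api/2.0/workspace/delete",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"path": "/Repos/xxx@xx.com", "recursive": True},    # path as given in the question
)
resp.raise_for_status()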
- 1724 Views
- 3 replies
- 0 kudos
I want to copy a table from a foreign catalog as my streaming table. This is the code I used, but I am getting the error: Table table_name does not support either micro-batch or continuous scan.; spark.readStream .table(table_name) ...
Latest Reply
What is the underlying type of the table you are trying to stream from? Structured Streaming does not currently support streaming reads via JDBC, so reading from MySQL, Postgres, etc. is not supported.
If you are trying to perform stream ingestion fr...
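Since the reply is truncated, here is a rough sketch of one common workaround, assuming the source is a Lakehouse Federation (JDBC-backed) table: copy it into a Delta table in batch and stream from the copy. Table names are placeholders.

# Sketch: federated tables can't be read with readStream, so materialize a Delta copy first.
src = "foreign_catalog.schema.orders"     # placeholder federated table
dst = "main.bronze.orders_copy"           # placeholder Delta table

(spark.read.table(src)                    # batch scan of the federated table
      .write.format("delta")
      .mode("overwrite")
      .saveAsTable(dst))

stream_df = spark.readStream.table(dst)   # Delta tables support streaming reads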
2 More Replies
- 361 Views
- 3 replies
- 0 kudos
Hello Team, my Data Engineer Associate exam got suspended within seconds of starting, without any reason. After starting the exam, the screen got paused after just 10-20 seconds; there was a notice that someone would contact you, but no one contacted till the ...
Latest Reply
You need to raise a ticket with Databricks to get to a resolution:
https://help.databricks.com/s/contact-us?ReqType=training
2 More Replies
- 271 Views
- 1 replies
- 0 kudos
How can I use a column for liquid clustering that is not in the first 32 columns of my Delta table schema?
Latest Reply
Only columns with statistics collected can be specified as clustering keys. By default, the first 32 columns in a Delta table have statistics collected. See Specify Delta statistics columns.
We can use the below workaround for your use case:
1. Use th...
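For reference (this is not necessarily the workaround truncated above), one possible approach is sketched below: add the column to the Delta statistics columns, recompute statistics, and then cluster by it. The table and column names are invented, and the ANALYZE syntax should be verified for your DBR version.

# Sketch: make a late column eligible for liquid clustering by adding it to the stats columns.
table = "main.sales.events"        # placeholder table
late_col = "event_subtype"         # hypothetical column beyond the first 32

spark.sql(f"ALTER TABLE {table} SET TBLPROPERTIES "
          f"('delta.dataSkippingStatsColumns' = 'event_date,customer_id,{late_col}')")

# Recompute file statistics so existing data also has stats for the new column.
spark.sql(f"ANALYZE TABLE {table} COMPUTE DELTA STATISTICS")

spark.sql(f"ALTER TABLE {table} CLUSTER BY ({late_col})")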
- 767 Views
- 1 replies
- 0 kudos
How easy is it to migrate from Snowflake or Redshift to Databricks?
Latest Reply
Hi @RajPutta: it is very easy to migrate anything to Databricks; the only thing required is your team's knowledge of the Databricks platform. The steps below are important: Discovery, Assessment, Code conversion and Migration (very important), PoC. If you want to migrate fro...
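For the data-copy step of such a migration, a small sketch using the Snowflake connector that ships with Databricks is shown below; the connection options and table names are placeholders, and it assumes a secret scope named "snowflake" exists.

# Sketch: land a Snowflake table in Delta as part of a migration.
sf_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",                 # placeholder
    "sfUser": "loader",                                          # placeholder
    "sfPassword": dbutils.secrets.get("snowflake", "password"),  # assumes this secret scope exists
    "sfDatabase": "SALES_DB",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

df = (spark.read.format("snowflake")
          .options(**sf_options)
          .option("dbtable", "ORDERS")       # placeholder source table
          .load())

df.write.format("delta").mode("overwrite").saveAsTable("main.bronze.orders")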
- 3261 Views
- 2 replies
- 2 kudos
Any plans for adding a C# connector? What are the alternative ways in the current state?
Latest Reply
I'm having problems getting the REST API calls for Delta Sharing to work. Python and Power BI work fine but the C# code that Databricks AI generates does not work. I keep getting an "ENDPOINT NOT FOUND" error even though config.share is fine. A C# con...
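As a sanity check outside of C#, the Delta Sharing REST protocol can be exercised directly: the profile in config.share holds "endpoint" and "bearerToken", and listing shares is a GET on <endpoint>/shares. A minimal sketch:

# Sketch: verify the sharing endpoint by listing shares with the raw REST protocol.
import json
import requests

with open("config.share") as f:
    profile = json.load(f)

resp = requests.get(
    f"{profile['endpoint'].rstrip('/')}/shares",
    headers={"Authorization": f"Bearer {profile['bearerToken']}"},
)
resp.raise_for_status()
print(resp.json())   # if this also 404s, the endpoint URL itself is likely the problem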
1 More Replies
- 1620 Views
- 1 replies
- 0 kudos
I've got the following questions: 1. Can I pause Auto Loader jobs, delete the cluster that was used to run these jobs, create a new cluster, and run the jobs with a newer cluster version? 2. I have one Auto Loader job that ingests JSONs and transforms this to a del...
Latest Reply
Hello,
1. Yes, you can pause the job, delete the cluster, upgrade versions of the cluster, etc. With Auto Loader and Structured Streaming the important thing is making sure that the checkpointLocation stays intact, so no deletions, modifications, or m...
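To illustrate the point about the checkpoint, a minimal Auto Loader sketch with an explicit checkpointLocation is below; the storage paths and table name are placeholders, and the key is that the checkpoint path stays the same across cluster upgrades.

# Sketch: Auto Loader ingestion of JSON files with a stable checkpoint path.
(spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation", "abfss://lake@acct.dfs.core.windows.net/_schemas/events")   # placeholder
      .load("abfss://lake@acct.dfs.core.windows.net/raw/events")                                        # placeholder
      .writeStream
      .option("checkpointLocation", "abfss://lake@acct.dfs.core.windows.net/_checkpoints/events")       # keep this path unchanged
      .trigger(availableNow=True)
      .toTable("main.bronze.events"))                                                                   # placeholder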
- 1827 Views
- 2 replies
- 0 kudos
Hi Expert, here is a SQL Server scalar function; how do I convert it into a Databricks function? SQL: CREATE function [dbo].[gettrans](@PickupCompany nvarchar(2), @SupplyCountry int, @TxnSource nvarchar(10), @locId nvarchar(50), @ExternalSiteId nvarchar(50)) RETURNS INT...
Latest Reply
Hello @Shree23, in Databricks you can create scalar or tabular functions using SQL or Python. Here is the documentation. I converted your SQL Server function to Databricks standards.
CREATE OR REPLACE FUNCTION gettrans(
PickupCompany STRING,
Sup...
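Since the converted function above is cut off, here is a separate, generic illustration of the Databricks SQL scalar function shape (the name, parameters, and logic are invented; it is not the conversion itself):

# Sketch: generic Databricks SQL scalar UDF, created and called from Python.
spark.sql("""
CREATE OR REPLACE FUNCTION get_discount(order_total DOUBLE, is_member BOOLEAN)
RETURNS DOUBLE
RETURN CASE WHEN is_member THEN order_total * 0.10 ELSE 0.0 END
""")

spark.sql("SELECT get_discount(120.0, TRUE) AS discount").show()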
1 More Replies
- 1958 Views
- 2 replies
- 0 kudos
Hello All, I'm new to Databricks and have an issue with enabling system schemas. When I run the API call to check system schema status in the metastore, I see that all schemas are in the "Unavailable" state (except "information_schema", which is "ENABLE_COMPLETED"). Is ...
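For what it's worth, a sketch of checking and enabling system schemas over the REST API is below; schemas reported as "Unavailable" are typically not yet released for the workspace or region, while "AVAILABLE" ones can be enabled with a PUT. The host, token, and metastore ID are placeholders.

# Sketch: list system schema states, then enable one that is AVAILABLE.
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"       # placeholder
TOKEN = "MY_PAT"                                                   # placeholder
METASTORE_ID = "11111111-2222-3333-4444-555555555555"              # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

resp = requests.get(f"{HOST}/api/2.0/unity-catalog/metastores/{METASTORE_ID}/systemschemas",
                    headers=HEADERS)
resp.raise_for_status()
print(resp.json())    # shows each schema and its state

# Enable the "access" schema if it is reported as AVAILABLE.
requests.put(f"{HOST}/api/2.0/unity-catalog/metastores/{METASTORE_ID}/systemschemas/access",
             headers=HEADERS).raise_for_status()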
by
Phani1
• Valued Contributor II
- 5183 Views
- 4 replies
- 1 kudos
Hi Team, could you please suggest how to raise a ServiceNow ticket in case of a Databricks job failure? Regards, Phanindra
Latest Reply
Hi, can this JSON response to ServiceNow be edited before being sent? What are the different ways it can be edited?
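If full control over the JSON is needed, one option (sketched below, not an official integration) is to build and send the incident payload yourself from a failure handler, using the ServiceNow Table API; the instance URL, credentials, field values, and job logic are placeholders.

# Sketch: open a ServiceNow incident with a custom JSON payload when a job step fails.
import requests

def create_snow_incident(job_name: str, error: str):
    payload = {                                   # edit this JSON however you need
        "short_description": f"Databricks job failed: {job_name}",
        "description": error,
        "urgency": "2",
    }
    resp = requests.post(
        "https://mycompany.service-now.com/api/now/table/incident",   # placeholder instance
        auth=("snow_api_user", "snow_api_password"),                  # placeholder credentials
        headers={"Content-Type": "application/json"},
        json=payload,
    )
    resp.raise_for_status()

def run_pipeline():
    raise RuntimeError("simulated failure")       # stand-in for the real job logic

try:
    run_pipeline()
except Exception as e:
    create_snow_incident("daily_sales_load", str(e))
    raise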
3 More Replies
- 5221 Views
- 8 replies
- 2 kudos
Hi, looking for suggestions to stream on-demand data from Databricks Delta tables to Salesforce. Is OData a good option?
Latest Reply
Hey, I think this might help: https://www.salesforce.com/uk/news/press-releases/2024/04/25/zero-copy-partner-network/
7 More Replies