I have a zip file on an SFTP location. I want to copy that file from the SFTP location into Azure Data Lake and unzip it there using a Spark notebook. Please help me solve this.
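A minimal sketch of one way to do this, assuming the ADLS container is already mounted (e.g. at /mnt/datalake) and that paramiko is installed on the cluster (%pip install paramiko). The host name, credentials, and paths below are placeholders, not details from the question:

```python
import paramiko
import zipfile

# Connect to the SFTP server (placeholder host/credentials)
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="password")
sftp = paramiko.SFTPClient.from_transport(transport)

# Download the archive onto the lake via the /dbfs fuse mount
local_zip = "/dbfs/mnt/datalake/raw/archive.zip"
sftp.get("/remote/path/archive.zip", local_zip)
sftp.close()
transport.close()

# Unzip next to the archive
with zipfile.ZipFile(local_zip, "r") as zf:
    zf.extractall("/dbfs/mnt/datalake/raw/unzipped/")
```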
Hi DB Support, can we use Databricks' Delta Lake as our target DB? Here's our situation... We have hundreds of ETL jobs pulling from these sources (SAP, Siebel/Oracle, Cognos, Postgres). Our ETL process has all of the logic, and our target DB is an MPP syst...
Hi, yes you can. The best approach is to create a SQL endpoint in a premium workspace and write to Delta Lake just as you would to SQL. Note that this is a community forum, not support; you can contact Databricks via https://databricks.com/company/contact or via AWS/Azure if you have su...
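For what it's worth, here is a minimal sketch of landing ETL output in a Delta table that a SQL endpoint can then serve. The source connection details, credentials, and table names are placeholders, not anything from this thread:

```python
# Read from one of the source systems over JDBC (placeholder connection)
df = (spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://source-host:5432/sales")
      .option("dbtable", "public.orders")
      .option("user", "etl_user")
      .option("password", "...")
      .load())

# Append into a Delta table; Databricks SQL endpoints can query it directly
df.write.format("delta").mode("append").saveAsTable("target_db.orders")
```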
I've created a simple query reading all columns from a table. I've published the results on a dashboard, but I receive the following error. I cannot seem to find any info online on how to resolve this issue. Any ideas?
I recently posted this on Stack Overflow. I'm using R in Databricks. RStudio runs fine and executes from the Databricks cluster. I would like to transition from RStudio to notebooks. When I start the cluster, R seems to run fine from notebooks. ...
@Paul Evangelista - Thank you for letting us know. You did great! Would you be happy to mark your answer as best so that others can find your solution more easily?
Hello, when I run this code: CREATE DATABASE BackOffice, I see the database like this: backoffice. Why is everything in lower case? Is it possible to configure Databricks so that it keeps the original name? Thanks.
It is managed by the Hive metastore, and since objects can live in different databases it is safer this way, because some databases are case sensitive and some are not (you can easily test this with standard WHERE syntax). You could probably change it with some Hive settings, but i...
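A quick way to see this behaviour from a notebook (the database name is just an example):

```python
# The metastore stores the identifier in lower case
spark.sql("CREATE DATABASE IF NOT EXISTS BackOffice")
spark.sql("SHOW DATABASES").show()   # lists "backoffice"

# Lookups stay case-insensitive, so both of these resolve to the same database
spark.sql("DESCRIBE DATABASE backoffice").show(truncate=False)
spark.sql("DESCRIBE DATABASE BackOffice").show(truncate=False)
```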
Hi @Borislav Blagoev, thanks very much for taking the time to collect these logs. The problem here (as indicated by the `IpAclValidation` message) is that IP allow listing (enabled for your workspace) will not allow arbitrary connections from Spark c...
I have a pipeline with 20+ streams running based on Auto Loader. The pipeline crashed, and after the crash I'm unable to start the streams; they fail with one of the following messages: 1) The metadata file in the streaming source checkpoint direct...
I'm having difficulty with a job (parent) that triggers multiple parallel runs of another job (child) in batches (e.g. 10 parallel runs per batch). Occasionally some of the parallel "child" jobs will crash a few minutes in, either during or immediate...
It is a MariaDB JDBC error, so the database you are trying to connect to probably cannot handle this number of concurrent connections. (Alternatively, if you are not connecting to a MariaDB database yourself: MariaDB is also used for the Hive metastore, so in your case maria...
Hi @Bhagwan Chaubey, there is a Spark developer certification from Databricks - https://databricks.com/learn/training/home (and some higher levels as well). On Azure, Databricks is included in the DP-100 and DP-203 certifications (together with around 10 diff...
Hi all, I am working on a requirement where I need to calculate the cost of each Spark job individually on a shared Azure/AWS Databricks cluster. There can be multiple jobs running on the cluster in parallel. Cost needs to be calculated after job comple...
Hi, I'm working for Couchbase on the Couchbase Spark Connector and noticed something weird which I haven't been able to get to the bottom of so far. For query DataFrames we use the Datasource v2 API and we delegate the JSON parsing to the org.apache.sp...
Since there hasn't been any progress on this for over a month, I applied a workaround and copied the classes into the connector source code so we don't have to rely on the Databricks classloader. It seems to work in my testing and will be released wi...
I'm using Azure Databricks Python notebooks. We are preparing a front end to display the Databricks tables, with an API to query them. Is there a solution from Databricks to host callable APIs for querying its tables and sending the result as a response to the fro...
@Prabakar Ammeappin Thanks for the link. I was also wondering, for a web page front end, whether it would be more effective to query from a SQL database or from Azure Databricks tables. If from an Azure SQL database, is there any efficient way to sync the tables from Az...
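In case it helps, here is a minimal sketch of pushing a Databricks table into Azure SQL over JDBC. The server, database, table, and secret names are placeholders I made up for the example:

```python
jdbc_url = (
    "jdbc:sqlserver://myserver.database.windows.net:1433;"
    "database=mydb;encrypt=true;loginTimeout=30"
)

(spark.table("analytics.daily_summary")          # hypothetical Databricks table
      .write
      .format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "dbo.daily_summary")
      .option("user", "sql_user")
      .option("password", dbutils.secrets.get("scope", "sql-password"))
      .mode("overwrite")                          # full refresh of the SQL copy
      .save())
```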
Hello: I am new to Databricks and need a little help with Delta table creation. I am having great difficulty understanding how to create a Delta table: Do I need to create an S3 bucket for a Delta table? If YES, then do I have to mount it on the mountpoint...
Hi Jay, I would suggest starting by creating a managed Delta table. Please run a simple command: `CREATE TABLE events (id LONG) USING DELTA`. This will create a managed Delta table called "events". Then run `%sql DESCRIBE EXTENDED events`. The above command ...
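As a small follow-up sketch in Python (the table name matches the example above, and the path shown in the comment is only illustrative), you can confirm that a managed Delta table needs no bucket or mount of your own, because it lands under the workspace's default warehouse location:

```python
spark.sql("CREATE TABLE IF NOT EXISTS events (id LONG) USING DELTA")

# DESCRIBE DETAIL shows where the managed table's files actually live
detail = spark.sql("DESCRIBE DETAIL events").select("location", "format")
detail.show(truncate=False)   # e.g. dbfs:/user/hive/warehouse/events
```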
Hi, I'm interested to know whether multiple executors can append to the same Hive table using saveAsTable or insertInto in Spark SQL. Will that cause any data corruption? What configuration do I need to enable concurrent writes to the same Hive table? What about the s...
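For illustration only, this is what the append pattern in the question looks like against a Delta table; Delta's optimistic concurrency control is designed to handle concurrent appends, whereas, as far as I know, plain Hive/Parquet tables do not give the same guarantee. The table name and data below are placeholders:

```python
from pyspark.sql import Row

batch = spark.createDataFrame([Row(id=1, value="a"), Row(id=2, value="b")])

# Each writer (job or thread) can run this independently against the same table;
# Delta resolves concurrent appends via optimistic concurrency.
batch.write.format("delta").mode("append").saveAsTable("shared_db.events_log")
```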