- 1011 Views
- 3 replies
- 0 kudos
Hi @Kaniz, We have a Kafka source appending data into a bronze table and a subsequent DLT APPLY CHANGES INTO to handle the SCD logic. Finally, we have materialized views to create dims/facts. We are facing issues when we perform deduplication inside ...
Latest Reply
Hi @Palash01, thanks for the response. Below is what I am trying to do; however, it is throwing an error.
APPLY CHANGES INTO LIVE.targettable
FROM ( SELECT DISTINCT *
       FROM STREAM(sourcetable_1) tbl1
       INNER JOIN STREAM(sourcetable_2) tbl2 ON tbl1.id = ...
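For anyone hitting the same error: a common workaround is to move the DISTINCT/join into an intermediate view and apply changes from that view. Below is a minimal Python DLT sketch of that shape; the table names (sourcetable_1, sourcetable_2, targettable) and the key/sequence columns (id, operation_ts) are placeholders, not the actual schema from this thread.

```python
import dlt
from pyspark.sql import functions as F

# Placeholder names: sourcetable_1 / sourcetable_2 / targettable and the columns
# id / operation_ts stand in for the real tables and keys in this pipeline.

@dlt.view(name="deduped_source")
def deduped_source():
    t1 = dlt.read_stream("sourcetable_1")
    t2 = dlt.read_stream("sourcetable_2")
    # Join the two bronze streams and drop duplicates before the SCD merge.
    return t1.join(t2, on="id", how="inner").dropDuplicates(["id", "operation_ts"])

dlt.create_streaming_table("targettable")

dlt.apply_changes(
    target="targettable",
    source="deduped_source",
    keys=["id"],
    sequence_by=F.col("operation_ts"),
    stored_as_scd_type=2,
)
```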
2 More Replies
- 829 Views
- 1 replies
- 0 kudos
I have created a job in Databricks, configured it to use a cluster with single user access mode enabled, and set GitHub as the source. When I try to run the job, I get the following error: run failed with error message Unable to access the notebook "d...
Latest Reply
ezhil
New Contributor III
I think you need to link the Git account with Databricks by passing the access token generated in GitHub. Follow this document for reference: https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html Note: While creating the...
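If you prefer to wire the token up programmatically rather than through the UI, a hedged sketch using the Git Credentials REST API (POST /api/2.0/git-credentials) could look like this; the workspace URL, Databricks token, and GitHub username/token are placeholders you must supply.

```python
import requests

# Assumed placeholders: workspace URL, Databricks PAT, and the GitHub username/token.
DATABRICKS_HOST = "https://<your-workspace>.cloud.databricks.com"
DATABRICKS_TOKEN = "<databricks-personal-access-token>"

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/git-credentials",
    headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
    json={
        "git_provider": "gitHub",
        "git_username": "<github-username>",
        "personal_access_token": "<github-personal-access-token>",
    },
)
resp.raise_for_status()
print(resp.json())  # returns the credential_id of the stored Git credential
```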
- 687 Views
- 5 replies
- 2 kudos
Hi. Originally, I only had 1 pipeline looking at a directory. Now, as a test, I cloned the existing pipeline and edited the settings to point to a different catalog. Both pipelines are now basically reading the same directory path and running in continuous mode. Que...
Latest Reply
Hi @Gilg, When multiple pipelines are simultaneously accessing the same directory path and utilizing Autoloader in continuous mode, it is crucial to consider the management of file locks and data consistency carefully.
Let's delve into the specifi...
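DLT manages its own checkpoints per pipeline, so two cloned pipelines will each track the directory independently. For context, the same idea in plain Structured Streaming is that every stream reading the directory needs its own checkpoint and schema location; a minimal sketch with placeholder paths and table names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source_path = "abfss://landing@<storage-account>.dfs.core.windows.net/events/"

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "abfss://meta@<storage-account>.dfs.core.windows.net/schemas/events")
    .load(source_path)
)

(
    df.writeStream
    # A unique checkpoint per pipeline/stream is what keeps each reader's progress independent.
    .option("checkpointLocation", "abfss://meta@<storage-account>.dfs.core.windows.net/checkpoints/events")
    .trigger(availableNow=True)
    .toTable("dev_catalog.bronze.events")
)
```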
4 More Replies
by
cltj
• New Contributor III
- 539 Views
- 1 replies
- 0 kudos
Hi all. I want to get this right and therefore I am reaching out to the community. We are using Azure, and currently use 1 Azure Data Lake Storage account for development and 1 for production. These are connected to dev and prod Databricks workspaces....
Latest Reply
I recommend you read this article (Managed vs External tables) and answer the following questions: Do I require direct access to the data outside of Azure Databricks clusters or Databricks SQL warehouses? If yes, then External is your only option. In rel...
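To make the distinction concrete, here is an illustrative sketch (placeholder catalog, schema, and ADLS path) of the same table created as managed versus external in Unity Catalog:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already defined in a Databricks notebook

# Managed: Unity Catalog owns and manages the underlying files.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dev_catalog.sales.orders_managed (
        order_id BIGINT,
        amount   DECIMAL(10, 2)
    )
""")

# External: the data stays at your own ADLS path and remains directly accessible
# outside Databricks (placeholder storage account and container).
spark.sql("""
    CREATE TABLE IF NOT EXISTS dev_catalog.sales.orders_external (
        order_id BIGINT,
        amount   DECIMAL(10, 2)
    )
    LOCATION 'abfss://data@devstorageaccount.dfs.core.windows.net/sales/orders'
""")
```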
- 502 Views
- 2 replies
- 0 kudos
Hello, I have some trouble with Auto Loader. Currently we use many different source locations on ADLS to read Parquet files and write them to a Delta table using Auto Loader. Files in these locations have the same schema. Everything works fine until we have to ad...
Latest Reply
Thanks for the reply @Kaniz. I have some questions related to your answer. Checkpoint Location: Does deleting the checkpoint folder (or only the files?) mean that the next run of Auto Loader will load all files from the provided source locations? So it will duplicate ...
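One pattern worth sketching here, under the assumption that each source location gets its own stream and checkpoint writing into the same Delta table, is that adding a new location then never requires deleting an existing checkpoint (paths and table names are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

locations = {
    "source_a": "abfss://raw@<storage-account>.dfs.core.windows.net/source_a/",
    "source_b": "abfss://raw@<storage-account>.dfs.core.windows.net/source_b/",  # newly added location
}

for name, path in locations.items():
    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "parquet")
        .option("cloudFiles.schemaLocation", f"abfss://meta@<storage-account>.dfs.core.windows.net/schemas/{name}")
        .load(path)
    )
    (
        df.writeStream
        # One checkpoint per source location: existing locations keep their ingestion history.
        .option("checkpointLocation", f"abfss://meta@<storage-account>.dfs.core.windows.net/checkpoints/{name}")
        .trigger(availableNow=True)
        .toTable("bronze.events")
    )
```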
1 More Replies
- 816 Views
- 2 replies
- 0 kudos
I created a cluster on our dev environment using shared access mode, for our devs to use (instead of separate single user clusters). What I notice is that the performance of this cluster is terrible. And I mean really terrible: notebook cells wit...
Latest Reply
Thanks for the answer! It seems that using shared access mode adds overhead. The nodes/driver are not stressed at all (CPU/RAM/network). We use UC only. The cluster seems configured correctly (using the same cluster in single user mode changes perform...
1 More Replies
by
essura
• New Contributor II
- 610 Views
- 2 replies
- 1 kudos
Hi there, We are trying to set up a Docker image for our dbt execution, primarily to improve execution speed, but also to simplify deployment (we are using private repos for both the dbt project and some of the dbt packages). It seems to work curre...
Latest Reply
Hi @essura, Setting up a Docker image for your dbt execution is a great approach.
Let’s dive into the details.
Prebuilt Docker Images:
dbt Core and all adapter plugins maintained by dbt Labs are available as Docker images. These images are distr...
1 More Replies
by
Innov
• New Contributor
- 482 Views
- 1 replies
- 0 kudos
Looking for some help. Has anyone worked with nested JSON files in a Databricks notebook? I am trying to parse a nested JSON file to get coordinates and use them to create a polygon for a footprint. Do I need to read it as txt? How can I use the Databricks...
Latest Reply
Hi @Innov, Working with nested JSON files in Databricks Notebooks is a common task, and I can guide you through the process.
Let’s break it down step by step:
Reading the Nested JSON File:
You don’t need to read the JSON file as plain text (.txt...
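As a rough illustration of that first step, here is a small PySpark sketch that reads a nested GeoJSON-style file and pulls out the polygon coordinates; the file path and the features/geometry/coordinates field names are assumptions about the file layout, not a confirmed schema:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# multiLine handles a single pretty-printed JSON document instead of JSON-lines.
df = spark.read.option("multiLine", "true").json("/Volumes/main/raw/footprints/buildings.json")

coords = (
    df.select(F.explode("features").alias("feature"))
      .select(
          F.col("feature.properties.name").alias("name"),
          F.col("feature.geometry.type").alias("geometry_type"),
          F.col("feature.geometry.coordinates").alias("coordinates"),
      )
)
coords.show(truncate=False)
```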
- 483 Views
- 1 replies
- 1 kudos
I am trying to create 2 streaming tables in one DLT pipeline; both read JSON data from different locations and both have different schemas. The pipeline executes, but no data is inserted into either table, whereas when I try to run each table indiv...
Latest Reply
Hi @zero234, It seems you’re encountering an issue with your Delta Live Tables (DLT) pipeline where you’re trying to create two streaming tables from different sources with distinct schemas.
Let’s dive into this!
DLT is a powerful feature in Data...
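For comparison, a minimal sketch of two independent streaming tables defined in one DLT pipeline, each with its own source directory and an explicitly declared schema (paths and schemas are placeholders; spark is the session DLT provides to the pipeline notebook):

```python
import dlt

@dlt.table(name="orders_stream")
def orders_stream():
    return (
        spark.readStream.format("cloudFiles")          # spark is provided by the DLT runtime
        .option("cloudFiles.format", "json")
        .schema("order_id BIGINT, customer STRING, amount DOUBLE")
        .load("/Volumes/main/raw/orders/")
    )

@dlt.table(name="shipments_stream")
def shipments_stream():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .schema("shipment_id BIGINT, order_id BIGINT, shipped_at TIMESTAMP")
        .load("/Volumes/main/raw/shipments/")
    )
```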
- 538 Views
- 1 replies
- 0 kudos
I am creating a cluster using a REST API call, but every time it creates an all-purpose cluster. Is there a way to create a job cluster and run a notebook using Python code?
Latest Reply
job_cluster_key (string, 1 to 100 characters, matching ^[\w\-\_]+$): if job_cluster_key is set, this task is executed reusing the cluster specified in job.settings.job_clusters. See: Create a new job | Jobs API | REST API reference | Databricks on AWS
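If the goal is simply to run a notebook on an ephemeral job cluster from Python, a hedged sketch using the Jobs 2.1 runs/submit endpoint with a new_cluster block (rather than an existing all-purpose cluster) might look like this; host, token, notebook path, and cluster sizing are placeholders:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<databricks-personal-access-token>"

payload = {
    "run_name": "notebook-on-job-cluster",
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Workspace/Users/me@example.com/my_notebook"},
            "new_cluster": {                      # new_cluster is what makes this a job cluster
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
        }
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # {"run_id": ...}
```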
- 399 Views
- 1 replies
- 1 kudos
Hi guys, do you have any idea how I can do a groupBy without aggregation (PySpark API)? Like: df.groupBy('field1', 'field2', 'field3'). My target is to make a group, but in this case it is not necessary to count records or aggregate. Thank you
Latest Reply
Do you mean getting distinct rows for the selected columns? If so: df.select("field1", "field2", "field3").distinct()
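A tiny runnable example of both equivalent options, distinct() on the selected columns and dropDuplicates() on the same columns:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("a", 1, "x"), ("a", 1, "x"), ("b", 2, "y")],
    ["field1", "field2", "field3"],
)

df.select("field1", "field2", "field3").distinct().show()
# Same result here; dropDuplicates also keeps any other columns if the DataFrame has them.
df.dropDuplicates(["field1", "field2", "field3"]).show()
```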
- 424 Views
- 1 replies
- 0 kudos
So I have nested data with more than 200 columns and I have extracted this data into a JSON file. When I use the below code to read the JSON files, if there are a few columns in the data that have no value at all, it doesn't include those columns in the schema...
Latest Reply
Replying to my question above: we cannot use inferSchema on a streaming table; we need to specify the schema externally. Can anyone please suggest a way to write data in nested form to a streaming table, and whether this is possible?
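A sketch of that approach, assuming the fix is to declare the nested schema explicitly (including columns that may be entirely null in the files) and pass it to the streaming read; the field names stand in for the real 200+ column schema:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, DoubleType

spark = SparkSession.builder.getOrCreate()

# Declare the nested schema up front, including columns that may be entirely null
# in the source files, so they always appear in the streaming table.
nested_schema = StructType([
    StructField("id", StringType()),
    StructField("attributes", StructType([
        StructField("color", StringType()),          # kept even if always null
        StructField("dimensions", StructType([
            StructField("height", DoubleType()),
            StructField("width", DoubleType()),
        ])),
    ])),
    StructField("tags", ArrayType(StringType())),
])

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .schema(nested_schema)                            # explicit schema instead of inference
    .load("/Volumes/main/raw/nested_events/")
)
```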
- 1337 Views
- 3 replies
- 1 kudos
I am trying to connect an AS cube with a Databricks notebook but unfortunately haven't found any solution yet. Is there any possible way to connect an AS cube with a Databricks notebook? If yes, can someone please guide me?
Latest Reply
I am able to connect to Azure Analysis Services using the Azure Analysis Services REST API. Is yours on-prem?
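For the Azure-hosted case, a hedged sketch of calling the Azure Analysis Services asynchronous refresh REST endpoint from a notebook; the region, server name, model name, and the Azure AD token acquisition are all placeholders you must supply:

```python
import requests

# Placeholders: you need an Azure AD access token scoped to https://*.asazure.windows.net,
# plus your server region, server name, and model name.
aad_token = "<azure-ad-access-token>"
region = "<region>"          # e.g. westus
server = "<server-name>"
model = "<model-name>"

resp = requests.post(
    f"https://{region}.asazure.windows.net/servers/{server}/models/{model}/refreshes",
    headers={"Authorization": f"Bearer {aad_token}", "Content-Type": "application/json"},
    json={"Type": "Full", "CommitMode": "transactional", "MaxParallelism": 2},
)
resp.raise_for_status()
print(resp.headers.get("Location"))  # URL to poll for the refresh status
```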
2 More Replies
- 2425 Views
- 4 replies
- 5 kudos
Hi, everyone. I just recently started using Databricks on Azure, so my question is probably very basic, but I am really stuck right now. I need to capture some streaming metrics (number of input rows and their time), so I tried using the Spark REST API ...
Latest Reply
Hi @Roberto Baldrez, if you think that @Gaurav Rupnar solved your question, then please select it as the best response so it can be moved to the top of the topic and help more users in the future. Thank you.
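As an alternative to the Spark REST API, the per-batch metrics (numInputRows and timestamps) are also available directly from the StreamingQuery progress objects; a small self-contained sketch using a toy rate source and memory sink:

```python
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

query = (
    spark.readStream.format("rate").option("rowsPerSecond", 10).load()
    .writeStream.format("memory").queryName("metrics_demo").start()
)

time.sleep(15)
for progress in query.recentProgress:          # one entry per completed micro-batch
    print(progress["timestamp"], progress["numInputRows"])

query.stop()
```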
3 More Replies
- 749 Views
- 2 replies
- 2 kudos
I have created a DLT pipeline which reads data from JSON files stored in a Databricks volume and puts the data into a streaming table. This was working fine. When I tried to read the data that is inserted into the table and compare the values with t...
Latest Reply
Keep your DLT code separate from your comparison code, and run your comparison code once your DLT data has been ingested.
1 More Replies