Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

sumitdesai
by New Contributor II
  • 1680 Views
  • 1 reply
  • 0 kudos

Job not able to access notebook from GitHub

I have created a job in Databricks and configured it to use a cluster with single-user access enabled, using GitHub as the source. When I try to run the job, I get the following error: run failed with error message Unable to access the notebook "d...

Latest Reply
ezhil
New Contributor III
  • 0 kudos

I think you need to link the Git account with Databricks by passing the access token generated in GitHub. Follow this document for reference: https://docs.databricks.com/en/repos/get-access-tokens-from-git-provider.html. Note: while creating the...

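The linking step described in the reply can also be scripted. Below is a minimal sketch against the Databricks Git Credentials API, assuming a GitHub personal access token has already been generated; the workspace URL, tokens, and username are placeholders:

```python
# A minimal sketch: register a GitHub PAT with the Databricks Git Credentials
# API so jobs can fetch notebooks from the linked repo.
# <workspace-url>, <databricks-pat>, and <github-pat> are placeholders.
import requests

host = "https://<workspace-url>"
headers = {"Authorization": "Bearer <databricks-pat>"}

resp = requests.post(
    f"{host}/api/2.0/git-credentials",
    headers=headers,
    json={
        "git_provider": "gitHub",
        "git_username": "my-github-user",        # hypothetical username
        "personal_access_token": "<github-pat>",
    },
)
resp.raise_for_status()
print(resp.json())
```
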
Gilg
by Contributor II
  • 1271 Views
  • 5 replies
  • 2 kudos

Multiple Auto Loader streams reading the same directory path

Hi. Originally, I had only one pipeline reading from a directory. As a test, I cloned the existing pipeline and edited its settings to use a different catalog. Now both pipelines are reading the same directory path and running in continuous mode. Que...

Latest Reply
Kaniz_Fatma
Community Manager
  • 2 kudos

Hi @Gilg, When multiple pipelines are simultaneously accessing the same directory path and utilizing Autoloader in continuous mode, it is crucial to consider the management of file locks and data consistency carefully. Let's delve into the specifi...

4 More Replies
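On the core question in this thread: two Auto Loader streams can read the same directory independently as long as each has its own checkpoint, because file-discovery state lives in the checkpoint rather than in locks on the files. A minimal sketch, with hypothetical paths and table names, assuming the Databricks-provided spark session:

```python
# A minimal sketch: two independent Auto Loader streams over one directory.
# Each stream tracks discovered files in its own checkpoint, so the cloned
# pipeline ingests every file once without interfering with the original.
source_path = "abfss://landing@mystorage.dfs.core.windows.net/events/"  # hypothetical

def start_stream(checkpoint_path, target_table):
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", checkpoint_path)
        .load(source_path)
        .writeStream
        .option("checkpointLocation", checkpoint_path)  # must be unique per stream
        .toTable(target_table)
    )

start_stream("/Volumes/dev/etl/chk/pipeline_a", "dev.bronze.events")
start_stream("/Volumes/test/etl/chk/pipeline_b", "test.bronze.events")
```
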
cltj
by New Contributor III
  • 1072 Views
  • 1 reply
  • 0 kudos

Managed tables and ADLS - infrastructure

Hi all. I want to get this right and therefore I am reaching out to the community. We are using Azure, currently with one Azure Data Lake Storage account for development and one for production. These are connected to the dev and prod Databricks workspaces....

Latest Reply
ossinova
Contributor II
  • 0 kudos

I recommend you read this article (Managed vs External tables) and answer the following question: do I require direct access to the data outside of Azure Databricks clusters or Databricks SQL warehouses? If yes, then External is your only option. In rel...

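To make the managed-versus-external distinction concrete, a minimal sketch (catalog, schema, and the ADLS URL are hypothetical):

```python
# Managed table: Unity Catalog owns the storage; DROP TABLE removes the data.
spark.sql("""
    CREATE TABLE dev.sales.orders_managed (id INT, amount DOUBLE)
""")

# External table: you pick the ADLS path and the files survive DROP TABLE;
# the option to choose when tools outside Databricks must read the data directly.
spark.sql("""
    CREATE TABLE dev.sales.orders_external (id INT, amount DOUBLE)
    LOCATION 'abfss://data@devlake.dfs.core.windows.net/sales/orders'
""")
```
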
Marcin_U
by New Contributor II
  • 919 Views
  • 2 replies
  • 0 kudos

AutoLoader - problem with adding new source location

Hello, I have some trouble with Auto Loader. Currently we use many different source locations on ADLS to read Parquet files and write them to a Delta table using Auto Loader. The files in all locations have the same schema. Everything works fine until we have to ad...

Latest Reply
Marcin_U
New Contributor II
  • 0 kudos

Thanks for the reply, @Kaniz_Fatma. I have some questions related to your answer. Checkpoint location: does deleting the checkpoint folder (or only the files?) mean that the next run of Auto Loader will load all files from the provided source locations? So it will dupl...

1 More Replies
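Two points of hedged guidance consistent with this thread: deleting an Auto Loader checkpoint resets its file-discovery state, so the next run re-ingests everything it can see; a common pattern is therefore one stream (and one checkpoint) per source location, all appending to the same target table. A minimal sketch for onboarding a new location, with hypothetical paths:

```python
# A minimal sketch: the new source location gets its own Auto Loader stream
# and a fresh checkpoint, so the existing locations are not reprocessed.
new_location = "abfss://raw@mylake.dfs.core.windows.net/source_42/"
checkpoint = "/Volumes/main/etl/checkpoints/source_42"

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", checkpoint)
    .load(new_location)
    .writeStream
    .option("checkpointLocation", checkpoint)
    .toTable("main.bronze.events"))
```
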
-werners-
by Esteemed Contributor III
  • 1561 Views
  • 2 replies
  • 0 kudos

Performance issues using shared compute access mode in Scala

I created a cluster on our dev environment using shared access mode, for our devs to use (instead of separate single-user clusters). What I notice is that the performance of this cluster is terrible. And I mean really terrible: notebook cells wit...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Thanks for the answer! It seems that using shared access mode adds overhead. The nodes/driver are not stressed at all (CPU/RAM/network). We use UC only. The cluster seems configured correctly (using the same cluster in single-user mode changes perform...

1 More Replies
essura
by New Contributor II
  • 1154 Views
  • 2 replies
  • 1 kudos

Create a docker image for dbt task

Hi there, we are trying to set up a Docker image for our dbt execution, primarily to improve execution speed, but also to simplify deployment (we are using private repos for both the dbt project and some of the dbt packages). It seems to work curre...

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @essura, Setting up a Docker image for your dbt execution is a great approach. Let's dive into the details. Prebuilt Docker images: dbt Core and all adapter plugins maintained by dbt Labs are available as Docker images. These images are distr...

1 More Replies
Innov
by New Contributor
  • 730 Views
  • 1 reply
  • 0 kudos

Parse nested json for building footprints

Looking for some help: has anyone worked with nested JSON files in a Databricks notebook? I am trying to parse a nested JSON file to get coordinates and use them to create a polygon for a building footprint. Do I need to read it as txt? How can I use the Databricks...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Innov, Working with nested JSON files in Databricks notebooks is a common task, and I can guide you through the process. Let's break it down step by step. Reading the nested JSON file: you don't need to read the JSON file as plain text (.txt...

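A minimal sketch of the reading step the reply describes, assuming a GeoJSON-style layout (a features array whose entries carry geometry.coordinates); the path and field names are hypothetical:

```python
# A minimal sketch: read the nested JSON directly (no need for plain text)
# and pull the polygon coordinates out of the nested structure.
from pyspark.sql.functions import col, explode

df = spark.read.option("multiLine", "true").json("/Volumes/geo/raw/footprints.json")

footprints = (
    df.select(explode(col("features")).alias("f"))
      .select(
          col("f.properties.id").alias("building_id"),          # hypothetical field
          col("f.geometry.coordinates").alias("coordinates"),   # nested polygon rings
      )
)
footprints.show(truncate=False)
```
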
zero234
by New Contributor III
  • 901 Views
  • 1 reply
  • 1 kudos

Data is not loaded when creating two different streaming tables from one Delta Live Tables pipeline

I am trying to create two streaming tables in one DLT pipeline; both read JSON data from different locations and both have different schemas. The pipeline executes, but no data is inserted into either table, whereas when I try to run each table indiv...

Data Engineering
dlt
spark
STREAMINGTABLE
Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @zero234, It seems you're encountering an issue with your Delta Live Tables (DLT) pipeline where you're trying to create two streaming tables from different sources with distinct schemas. Let's dive into this! DLT is a powerful feature in Data...

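For reference, a minimal sketch of one DLT pipeline defining two independent streaming tables, each with its own source path (the volume paths are hypothetical); declared this way, the two flows share no schema or checkpoint state:

```python
# A minimal DLT sketch: two streaming tables, two Auto Loader sources with
# different schemas, in a single pipeline.
import dlt

@dlt.table(name="orders_raw")
def orders_raw():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/landing/orders/"))

@dlt.table(name="customers_raw")
def customers_raw():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/landing/customers/"))
```
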
vijaykumar99535
by New Contributor III
  • 865 Views
  • 1 reply
  • 0 kudos

How to create job cluster using rest api

I am creating a cluster using a REST API call, but every time it creates an all-purpose cluster. Is there a way to create a job cluster and run a notebook using Python code?

Latest Reply
feiyun0112
Contributor III
  • 0 kudos

job_cluster_key: string, 1..100 characters, pattern ^[\w\-\_]+$. If job_cluster_key is set, this task is executed reusing the cluster specified in job.settings.job_clusters. See: Create a new job | Jobs API | REST API reference | Databricks on AWS.

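The point in the reply is that job clusters are declared inside the job specification, whereas the Clusters API (/api/2.0/clusters/create) always produces all-purpose clusters. A minimal sketch against the Jobs 2.1 API, with placeholder host, token, notebook path, and node type:

```python
# A minimal sketch: create a job whose task runs on a job cluster
# (new_cluster), then trigger it. Placeholders need real values.
import requests

host = "https://<workspace-url>"
headers = {"Authorization": "Bearer <token>"}

job_spec = {
    "name": "notebook-on-job-cluster",
    "tasks": [{
        "task_key": "main",
        "notebook_task": {"notebook_path": "/Users/me/my_notebook"},
        "new_cluster": {                      # job cluster, not all-purpose
            "spark_version": "14.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
    }],
}

job_id = requests.post(f"{host}/api/2.1/jobs/create",
                       headers=headers, json=job_spec).json()["job_id"]
requests.post(f"{host}/api/2.1/jobs/run-now",
              headers=headers, json={"job_id": job_id})
```
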
William_Scardua
by Valued Contributor
  • 870 Views
  • 1 reply
  • 1 kudos

groupBy without aggregation (Pyspark API)

Hi guys, do you have any idea how I can do a groupBy without aggregation (PySpark API)? Like: df.groupBy('field1', 'field2', 'field3'). My goal is to make a group, but in this case counting records or aggregating is not necessary. Thank you.

Latest Reply
feiyun0112
Contributor III
  • 1 kudos

Do you mean getting the distinct rows for the selected columns? If so: df.select("field1", "field2", "field3").distinct()

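A runnable illustration of that suggestion, with toy data:

```python
# groupBy without aggregation is effectively "distinct over these columns".
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("a", 1, "x"), ("a", 1, "x"), ("b", 2, "y")],
    ["field1", "field2", "field3"],
)

df.select("field1", "field2", "field3").distinct().show()

# Alternatively, keep whole rows, one per unique combination of the columns:
df.dropDuplicates(["field1", "field2", "field3"]).show()
```
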
zero234
by New Contributor III
  • 602 Views
  • 1 reply
  • 0 kudos

I am trying to read nested data from a JSON file into a streaming table using DLT

I have nested data with more than 200 columns, and I have extracted this data into a JSON file. When I use the code below to read the JSON files, if there are a few columns in the data that have no value at all, it doesn't include those columns in the schema...

Latest Reply
zero234
New Contributor III
  • 0 kudos

Replying to my question above: we cannot use inferSchema on a streaming table; we need to specify the schema externally. Can anyone please suggest a way to write data in nested form to a streaming table, if this is possible?

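One way to handle this, consistent with the poster's own conclusion, is to declare the nested schema explicitly and hand it to the streaming reader, so columns stay present even when every value is null. A minimal sketch with hypothetical field names:

```python
# A minimal sketch: an explicit nested schema for a streaming read, so the
# column set is fixed regardless of which fields appear in the files.
from pyspark.sql.types import (ArrayType, DoubleType, StringType,
                               StructField, StructType)

schema = StructType([
    StructField("id", StringType()),
    StructField("attributes", StructType([          # nested struct is preserved
        StructField("name", StringType()),
        StructField("scores", ArrayType(DoubleType())),
    ])),
])

stream = (spark.readStream
          .schema(schema)                  # no inference on streaming reads
          .json("/Volumes/main/landing/nested/"))
```
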
asad77007
by New Contributor II
  • 1847 Views
  • 3 replies
  • 1 kudos

How to connect an Analysis Services cube with a Databricks notebook

I am trying to connect an AS cube with a Databricks notebook but unfortunately haven't found any solution yet. Is there any possible way to connect an AS cube with a Databricks notebook? If yes, can someone please guide me?

Latest Reply
omfspartan
New Contributor III
  • 1 kudos

I am able to connect to Azure Analysis Services using the Azure Analysis Services REST API. Is yours on-prem?

2 More Replies
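For the Azure route the reply mentions, the Azure Analysis Services asynchronous refresh REST API can be called from a notebook. A minimal sketch, assuming an AAD bearer token for the AAS resource has been obtained separately; region, server, and model names are hypothetical:

```python
# A minimal sketch: trigger a model refresh on Azure Analysis Services via
# its async refresh REST API. Token acquisition is out of scope here.
import requests

region, server, model = "westeurope", "myasserver", "SalesModel"  # hypothetical
url = f"https://{region}.asazure.windows.net/servers/{server}/models/{model}/refreshes"
headers = {
    "Authorization": "Bearer <aad-token>",
    "Content-Type": "application/json",
}

resp = requests.post(url, headers=headers, json={"Type": "Full"})
print(resp.status_code, resp.headers.get("Location"))  # Location tracks the refresh
```
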
Baldrez
by New Contributor II
  • 3002 Views
  • 4 replies
  • 5 kudos

Resolved! REST API for Stream Monitoring

Hi, everyone. I just recently started using Databricks on Azure, so my question is probably very basic, but I am really stuck right now. I need to capture some streaming metrics (number of input rows and their time), so I tried using the Spark REST API...

Latest Reply
jose_gonzalez
Moderator
  • 5 kudos

Hi @Roberto Baldrez, if you think that @Gaurav Rupnar solved your question, please select it as the best response so it can be moved to the top of the topic and help more users in the future. Thank you.

3 More Replies
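For anyone landing here: the metrics the poster needs (input rows per micro-batch and their timestamps) are also exposed inside PySpark itself, without the Spark REST API. A minimal sketch using a StreamingQueryListener (available in PySpark 3.4+):

```python
# A minimal sketch: log numInputRows and the batch timestamp for every
# micro-batch of every streaming query in the session.
from pyspark.sql.streaming import StreamingQueryListener

class MetricsListener(StreamingQueryListener):
    def onQueryStarted(self, event):
        print(f"query started: {event.id}")

    def onQueryProgress(self, event):
        p = event.progress
        print(f"{p.timestamp}: {p.numInputRows} input rows")

    def onQueryIdle(self, event):
        pass

    def onQueryTerminated(self, event):
        print(f"query terminated: {event.id}")

spark.streams.addListener(MetricsListener())
```
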
zero234
by New Contributor III
  • 1449 Views
  • 2 replies
  • 2 kudos

I have created a DLT pipeline which reads data from JSON files stored in a Databricks volume

I have created a DLT pipeline which reads data from JSON files stored in a Databricks volume and puts the data into a streaming table. This was working fine. When I tried to read the data inserted into the table and compare the values with t...

Latest Reply
AmanSehgal
Honored Contributor III
  • 2 kudos

Keep your DLT code separate from your comparison code, and run your comparison code once your DLT data has been ingested.

1 More Replies
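A minimal sketch of that separation, with hypothetical table and path names:

```python
# Run as a separate batch job after the DLT pipeline has finished ingesting,
# never inside the pipeline itself.
ingested = spark.read.table("main.bronze.events")            # DLT streaming table
expected = spark.read.json("/Volumes/main/landing/events/")  # original source files

# Source rows that never made it into the table (schemas must line up):
missing = expected.exceptAll(ingested.select(*expected.columns))
print(f"missing rows: {missing.count()}")
```
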
Avinash_Narala
by Contributor
  • 846 Views
  • 1 reply
  • 1 kudos

Unity Catalog Migration

Hello, we are in the process of migrating to Unity Catalog. Can I know how to automate the process of refactoring our notebooks to Unity Catalog?

Data Engineering
automation
migration
unitycatalog
Latest Reply
MinThuraZaw
New Contributor III
  • 1 kudos

Hi @Avinash_Narala, There is no one-click solution to refactor all table names in notebooks to UC's three-level namespaces. At a minimum, manually updating table names is required during the migration process. One option is to use the search feature. Search ...

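To illustrate the kind of scripted help that is possible (this is not an official tool), a rough sketch that rewrites two-part spark.table references in exported notebook sources to three-part UC names; the regex, catalog name, and directory are hypothetical, and the output would need manual review:

```python
# A rough, illustrative sketch only: prefix "schema.table" references in
# spark.table(...) calls with a target catalog. Review every change by hand.
import re
from pathlib import Path

CATALOG = "main"                                   # hypothetical target catalog
pattern = re.compile(r"(spark\.table\(['\"])(\w+\.\w+)(['\"]\))")

for nb in Path("exported_notebooks").rglob("*.py"):
    src = nb.read_text()
    fixed = pattern.sub(rf"\g<1>{CATALOG}.\g<2>\g<3>", src)
    if fixed != src:
        nb.write_text(fixed)
        print(f"updated {nb}")
```
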
