Data Engineering

Forum Posts

Chaitanya_Raju
by Honored Contributor
  • 1780 Views
  • 4 replies
  • 0 kudos

Creating new group

Can someone help me by providing the steps for creating a new group, as I could not find it anywhere? Actually, I wanted to create a new group for Hyderabad, India, which I could not find in the Groups section. @Kaniz Fatma @Sujitha Ramam...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi, to request that a group be created, please fill out the linked form, and the Community Team will be in touch within 3 business days.

3 More Replies
Siravich
by New Contributor
  • 341 Views
  • 0 replies
  • 0 kudos

Permission on Unity catalog

I am facing an issue when assigning permissions on a view created in Unity Catalog. The problem is that I created a user-defined function (UDF) in order to encrypt a sensitive column, then created a view which calls the function and the source table within the catalo...

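A minimal sketch of the kind of setup described in the question above, run with spark.sql from a Databricks notebook. All catalog, schema, table, function, and group names are hypothetical, and the UDF uses simple masking as placeholder logic rather than real encryption; the exact grants needed for view consumers depend on the workspace's Unity Catalog configuration.

```python
# Hypothetical names (catalog `main`, schema `demo`, group `analysts`);
# assumes a Databricks notebook where `spark` is already defined.

# A SQL UDF that masks a sensitive column (placeholder logic, not real encryption).
spark.sql("""
CREATE OR REPLACE FUNCTION main.demo.mask_ssn(ssn STRING)
RETURNS STRING
RETURN CASE WHEN is_account_group_member('pii_readers') THEN ssn ELSE '***-**-****' END
""")

# A view that calls the UDF over the source table.
spark.sql("""
CREATE OR REPLACE VIEW main.demo.customers_masked AS
SELECT id, main.demo.mask_ssn(ssn) AS ssn
FROM main.demo.customers
""")

# Consumers typically need USE on the parent catalog/schema plus SELECT on the
# view; whether they also need EXECUTE on the function (or only the view owner
# does) is part of what the question is asking about.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.demo TO `analysts`")
spark.sql("GRANT SELECT ON TABLE main.demo.customers_masked TO `analysts`")
spark.sql("GRANT EXECUTE ON FUNCTION main.demo.mask_ssn TO `analysts`")
```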
glebex
by New Contributor II
  • 5349 Views
  • 8 replies
  • 8 kudos

Resolved! Accessing workspace files within cluster init script

Greetings all! I am currently facing an issue while accessing workspace files from the init script. As explained in the documentation, it is possible to place an init script inside workspace files (link). This works perfectly fine and the init script i...

Latest Reply
jacob_hill_prof
New Contributor II
  • 8 kudos

@Gleb Smolnik You might also want to try cloning a GitHub repo in your init script and then storing dependencies like requirements.txt files and other init scripts there. By doing this you can pull a whole slew of init scripts to be utilized in your...

7 More Replies
Raviiit
by New Contributor II
  • 2002 Views
  • 4 replies
  • 5 kudos

Resolved! spark managed tables

Hi, I recently started learning about Spark. I was studying Spark managed tables, and as per the docs, "Spark manages both the data and metadata". Assume that I have a CSV file in S3 and I read it into a data frame like below: df = spark.read.for...

Latest Reply
Tharun-Kumar
Honored Contributor II
  • 5 kudos

Yes, @Raviiit DBFS (Databricks File System) is a distributed file system used by Databricks clusters. DBFS is an abstraction layer over cloud storage (e.g. S3 or Azure Blob Store), allowing external storage buckets to be mounted as paths in the DBFS ...

3 More Replies
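A minimal sketch of the managed-table behaviour discussed in the thread above, assuming a hypothetical S3 path and table name. With saveAsTable (and no explicit LOCATION), Spark and the metastore manage both the metadata and the table's data files, so dropping the table also removes its data, while the original CSV in S3 is only the read source.

```python
# Hypothetical S3 path, schema, and table name; assumes a Databricks notebook
# where `spark` is already defined.
df = (
    spark.read
    .format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("s3://my-bucket/raw/customers.csv")   # source file stays where it is
)

# Writing with saveAsTable and no LOCATION creates a *managed* table: the
# metastore owns both the table metadata and the copied data files.
df.write.mode("overwrite").saveAsTable("demo.customers_managed")

# Dropping a managed table removes its data files as well as its metadata;
# the original CSV in S3 is untouched because it was only the read source.
spark.sql("DROP TABLE demo.customers_managed")
```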
databicky
by Contributor II
  • 3163 Views
  • 5 replies
  • 0 kudos

File copy in adls

I am using dbutils.fs.copy("abfss://container/provsn/filen[ame.txt", "abfss://container/data/sasam.txt"). While trying this copy method to copy the files, it shows a URISyntaxException near the square bracket. How can I read and copy it?

Latest Reply
dplante
Contributor II
  • 0 kudos

From looking at the stack trace, it looks like a URISyntaxException. The easiest solution would be renaming the file so that there are no square brackets in the filename. If this is not an option, it might help to URL-encode the path - https://stackoverflow.com/que...

4 More Replies
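A minimal sketch of the two workarounds mentioned in the reply above, assuming hypothetical container and path names and a Databricks notebook where dbutils is available. Whether the percent-encoded form is decoded by the storage driver is not guaranteed, so renaming the file to drop the brackets remains the more reliable fix.

```python
# Hypothetical paths; assumes a Databricks notebook where `dbutils` is defined.
src = "abfss://container@account.dfs.core.windows.net/provsn/filen[ame.txt"
dst = "abfss://container@account.dfs.core.windows.net/data/sasam.txt"

# Option 1: percent-encode the characters that break URI parsing (the
# URL-encoding suggestion from the reply; not every storage driver will
# decode this back to the literal bracket).
encoded_src = src.replace("[", "%5B").replace("]", "%5D")
dbutils.fs.cp(encoded_src, dst)

# Option 2 (more reliable): rename the source file so its path contains no
# square brackets, then copy it normally. The rename has to happen outside
# dbutils (e.g. in the producing system or via the ADLS SDK), since dbutils
# would hit the same URI error on the bracketed path.
```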
brickster
by New Contributor II
  • 2315 Views
  • 3 replies
  • 2 kudos

Passing values between notebook tasks in Workflow Jobs

I have created a Databricks workflow job with notebooks as individual tasks, sequentially linked. I assign a value to a variable in one notebook task (e.g. batchid = int(time.time())). Now, I want to pass this batchid variable to the next notebook task. What...

Latest Reply
fijoy
Contributor
  • 2 kudos

@brickster You would use dbutils.jobs.taskValues.set() and dbutils.jobs.taskValues.get(). See the docs for more details: https://docs.databricks.com/workflows/jobs/share-task-context.html

2 More Replies
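A minimal sketch of the taskValues approach from the reply above, assuming a two-task job where the first notebook task has the hypothetical task key "set_batch".

```python
# --- Notebook for the first task (task key assumed to be "set_batch") ---
import time

batchid = int(time.time())
# Publish the value so downstream tasks in the same job run can read it.
dbutils.jobs.taskValues.set(key="batchid", value=batchid)

# --- Notebook for a downstream task ---
# Read the value set by the "set_batch" task; `debugValue` is returned when the
# notebook is run interactively outside a job.
batchid = dbutils.jobs.taskValues.get(
    taskKey="set_batch",
    key="batchid",
    default=None,
    debugValue=0,
)
print(f"Processing batch {batchid}")
```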
Enzo_Bahrami
by New Contributor III
  • 3071 Views
  • 6 replies
  • 1 kudos

Resolved! On-Premise SQL Server Ingestion to Databricks Bronze Layer

Hello everyone! I want to ingest tables with schemas from an on-premises SQL Server into the Databricks Bronze layer with Delta Live Tables, using Azure Data Factory, and I want the load to be a snapshot batch load, not an incremental lo...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Parsa Bahraminejad, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best an...

5 More Replies
jhgorse
by New Contributor III
  • 744 Views
  • 0 replies
  • 0 kudos

mqtt to Delta Live Table

Greetings, I see that Delta Live Tables has various real-time connectors such as Kafka, Kinesis, Google's Pub/Sub, and so on. I also see that Apache had maintained an MQTT connector to Spark through the 2.x series called Bahir, but dropped it in versi...

chorongs
by New Contributor III
  • 3264 Views
  • 4 replies
  • 3 kudos

Resolved! I have a question about the VACUUM feature!

History is piled up as above. For testing, I want to erase the history of the table with the VACUUM command. After the option "spark.databricks.delta.retentionDurationCheck.enabled = False" was set, the command "VACUUM del_park retain 0 hours;" w...

[screenshot: chorongs_0-1688456804185.png]
Latest Reply
Vinay_M_R
Valued Contributor II
  • 3 kudos

Executing VACUUM performs garbage cleanup on the table directory. By default, a retention threshold of 7 days will be enforced. Please follow the steps below to perform VACUUM: 1.) SET spark.databricks.delta.retentionDurationCheck.enabled = false; This...

3 More Replies
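A minimal sketch of the steps from the reply above, run through spark.sql from a notebook and using the table name del_park taken from the question. Note that VACUUM with a zero-hour retention removes the data files needed for time travel, which is why the retention check has to be disabled explicitly, and that VACUUM cleans up data files rather than the _delta_log history itself.

```python
# Assumes a Databricks notebook where `spark` is defined and a Delta table
# named `del_park` exists (name taken from the question).

# 1. Disable the safety check that blocks retention windows shorter than the
#    default 7 days (session-level setting).
spark.sql("SET spark.databricks.delta.retentionDurationCheck.enabled = false")

# 2. Vacuum with a zero-hour retention: removes all data files no longer
#    referenced by the current table version, which breaks time travel to
#    older versions.
spark.sql("VACUUM del_park RETAIN 0 HOURS")

# 3. Re-enable the check afterwards so other sessions stay protected.
spark.sql("SET spark.databricks.delta.retentionDurationCheck.enabled = true")

# VACUUM does not delete the transaction log, so past operations still show up:
spark.sql("DESCRIBE HISTORY del_park").show(truncate=False)
```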
kll
by New Contributor III
  • 284 Views
  • 0 replies
  • 0 kudos

Mosaic's grid_boundary method returns inconsistent geometries

I am applying Mosaic's `grid_boundary` method on a Spark DataFrame containing a set of `h3_hex_ids`. The geometries returned are not consistent, i.e. they could be either `lat, long` or `long, lat`. Here's a sample of the data: ```import pyspark.sql.functions a...

Labels: Data Engineering, geospatial, mosaic
442027
by New Contributor II
  • 2561 Views
  • 2 replies
  • 3 kudos

Resolved! Delta Log checkpoints not being created?

It is mentioned in the delta protocol that checkpoints for delta tables are created every 10 commits - however when I modify a table after >10 separate operations (producing >10 separate json files in the _delta_log directory), no checkpoint files ar...

Latest Reply
Vinay_M_R
Valued Contributor II
  • 3 kudos

As of the latest update, checkpoints for Delta tables are now created every 100 commits. This was done as an improvement. If you want a checkpoint file for a Delta table every 10 commits, or after any desired number of commits, you can cust...

1 More Replies
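A minimal sketch of customizing the checkpoint cadence mentioned in the reply above, using a hypothetical table name. delta.checkpointInterval is a per-table property, so after the configured number of further commits a checkpoint parquet file should appear alongside the JSON commit files in _delta_log.

```python
# Hypothetical schema/table name; assumes a Databricks notebook where `spark`
# is already defined.

# Ask the writer to create a checkpoint every 10 commits instead of the default.
spark.sql("""
ALTER TABLE my_schema.my_delta_table
SET TBLPROPERTIES ('delta.checkpointInterval' = '10')
""")

# Inspect the table properties to confirm the setting took effect.
spark.sql("SHOW TBLPROPERTIES my_schema.my_delta_table").show(truncate=False)

# After 10 further commits, a <version>.checkpoint.parquet file should appear
# next to the JSON commit files in the table's _delta_log directory.
```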
Vsleg
by Contributor
  • 2309 Views
  • 5 replies
  • 3 kudos

Resolved! Issue with Apache Spark™ Programming with Databricks course

Hello, I found an issue with the Apache Spark™ Programming with Databricks courses on Databricks Academy when trying to do the labs. The mount that the courses use for training data is failing with what looks to me like an authentication issue (see sc...

Latest Reply
Vsleg
Contributor
  • 3 kudos

I found the course Git repo at https://github.com/databricks-academy/apache-spark-programming-with-databricks-english; this works, so I am using that instead of the 'apache-spark-programming-with-databricks.dbc' file available in the learning portal. #DA...

4 More Replies
ah0896
by New Contributor III
  • 6623 Views
  • 17 replies
  • 10 kudos

Using init scripts on UC enabled shared access mode clusters

I know that UC-enabled shared access mode clusters do not allow init script usage, and I have tried multiple workarounds to use the required init script in the cluster (pyodbc-install.sh, in my case), including installing the pyodbc package as a workspa...

Latest Reply
karthik_p
Esteemed Contributor
  • 10 kudos

@Anonymous @Kaniz Can anyone from Databricks confirm the above issue, please? There seems to be a bit of a conflict regarding support for custom init scripts on shared access mode clusters with Unity Catalog enabled.

16 More Replies