Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by aaronpetry (New Contributor III)
  • 2557 Views
  • 2 replies
  • 3 kudos

%run not printing notebook output when using 'Run All' command

I have been using the %run command to run auxiliary notebooks from an "orchestration" notebook. I like using %run over dbutils.notebook.run because of the variable inheritance, troubleshooting ease, and the printing of the output from the auxiliary n...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Aaron Petry, great to meet you, and thanks for your question! Let's see if your peers in the community have an answer first; otherwise, bricksters will get back to you soon. Thanks!

1 More Replies
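For context, a minimal sketch of the two approaches the poster contrasts (the notebook path is hypothetical). %run shares the child notebook's variables with the caller and normally prints its output inline, while dbutils.notebook.run executes the child in a separate context and returns only a string:

# Cell 1: magic command; variables defined in aux_notebook become available here,
# and its printed output normally appears inline.
# %run ./aux_notebook

# Cell 2: runs the child as an isolated job; nothing is printed inline,
# only the value passed to dbutils.notebook.exit() comes back.
result = dbutils.notebook.run("./aux_notebook", 600)
print(result)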
by Nayan7276 (Valued Contributor II)
  • 1995 Views
  • 5 replies
  • 29 kudos

Resolved! Databricks Community points discrepancy

I have 461 points in the Databricks Community, but the rewards store only reflects 23 points. Can anyone look into this issue?

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 29 kudos

Hi, the rewards account needs to be created with the same email ID, and points may take a week to reflect in your rewards account.

4 More Replies
by isaac_gritz (Valued Contributor II)
  • 1715 Views
  • 4 replies
  • 8 kudos

Databricks Runtime Support

How long are Databricks runtimes supported for? How often are they updated? You can learn more about the Databricks runtime support lifecycle here (AWS | Azure | GCP). Long Term Support (LTS) runtimes are released every 6 months and supported for 2 yea...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 8 kudos

Thanks for the update.

3 More Replies
by Saikrishna2 (New Contributor III)
  • 4411 Views
  • 7 replies
  • 11 kudos

Databricks SQL is allowing only 10 queries?

• Power BI is a publisher that uses AD group authentication to publish result sets. Since the publisher's credentials are maintained, the same user can access the Databricks database.
• A number of users are retrieving the data from Power BI or i...

Latest Reply
VaibB
Contributor
  • 11 kudos

I believe 10 is the limit as of now. See if you can increase the concurrency limit from the source.

6 More Replies
by User16835756816 (Valued Contributor)
  • 3014 Views
  • 4 replies
  • 11 kudos

How can I extract data from different sources and transform it into a fresh, reliable data pipeline?

Tip: These steps are built out for AWS accounts and workspaces that are using Delta Lake. If you would like to learn more, watch this video and reach out to your Databricks sales representative for more information. Step 1: Create your own notebook or ...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 11 kudos

Thanks @Nithya Thangaraj!

3 More Replies
by him (New Contributor III)
  • 11356 Views
  • 8 replies
  • 5 kudos

I am getting the below error while making a GET request to a job in Databricks after successfully running it

"error_code": "INVALID_PARAMETER_VALUE",  "message": "Retrieving the output of runs with multiple tasks is not supported. Please retrieve the output of each individual task run instead."}

Latest Reply
SANKET
New Contributor II
  • 5 kudos

Use https://<databricks-instance>/api/2.1/jobs/runs/get?run_id=xxxx. "get-output" returns the details of a single run ID, which is associated with a task, not with the job (see the request sketch below).

7 More Replies
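A hedged sketch of that flow (host, token, and run_id are placeholders): fetch the parent run from /api/2.1/jobs/runs/get, then call /api/2.1/jobs/runs/get-output once per task run_id, since a multi-task run's output cannot be retrieved directly:

import requests

HOST = "https://<databricks-instance>"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# The parent (multi-task) run lists its tasks, each with its own run_id.
run = requests.get(f"{HOST}/api/2.1/jobs/runs/get",
                   headers=HEADERS, params={"run_id": "<parent-run-id>"}).json()

# get-output only works on individual task runs, not on the parent run.
for task in run.get("tasks", []):
    out = requests.get(f"{HOST}/api/2.1/jobs/runs/get-output",
                       headers=HEADERS,
                       params={"run_id": task["run_id"]}).json()
    print(task["task_key"], out.get("notebook_output"))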
by chhavibansal (New Contributor III)
  • 2329 Views
  • 4 replies
  • 1 kudos

ANALYZE TABLE showing NULLs for all statistics in Spark

var df2 = spark.read
  .format("csv")
  .option("sep", ",")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("src/main/resources/datasets/titanic.csv")

df2.createOrReplaceTempView("titanic")
spark.table("titanic").cach...

Latest Reply
chhavibansal
New Contributor III
  • 1 kudos

Can you share what *newtitanic* is? I think you would have done something similar to spark.sql("create table newtitanic as select * from titanic"). Something like this works for me, but the issue is I first make a temp view and then again create a tab...

3 More Replies
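A minimal sketch of that workaround (paths taken from the post): persist the data as a managed table so ANALYZE TABLE has a catalog entry to store statistics in; analyzing only a temp view is where the NULLs tend to come from:

df2 = (spark.read.format("csv")
       .option("sep", ",")
       .option("header", "true")
       .option("inferSchema", "true")
       .load("src/main/resources/datasets/titanic.csv"))
df2.createOrReplaceTempView("titanic")

# Materialize as a table, then compute and inspect column statistics.
spark.sql("CREATE TABLE newtitanic AS SELECT * FROM titanic")
spark.sql("ANALYZE TABLE newtitanic COMPUTE STATISTICS FOR ALL COLUMNS")
spark.sql("DESCRIBE EXTENDED newtitanic").show(truncate=False)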
by Jain (New Contributor III)
  • 1678 Views
  • 1 reply
  • 0 kudos

How to install GDAL on a Databricks cluster?

I am currently using Runtime 10.4 LTS. The options available on Maven Central do not work, and neither do those on PyPI. I am running
try:
    from osgeo import gdal
except ImportError:
    import gdal
to validate, but it throws ModuleNotFoundError: No module n...

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 0 kudos

@Abhishek Jain, I can understand your issue; it has happened to me multiple times as well. To solve it, I used to install an init script on my cluster. The major reason is that your 10.x runtime does not support your current library, so you have to find the rig...

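One common shape for such an init script, written to DBFS from a notebook (a sketch only: the apt package names and the pip pin are assumptions to verify against your runtime):

# Bash contents are embedded as a string; the GDAL Python package is pinned to
# the native gdal-config version so the two stay compatible (an assumption).
script = """#!/bin/bash
apt-get update -y
apt-get install -y gdal-bin libgdal-dev
/databricks/python/bin/pip install GDAL==$(gdal-config --version)
"""
dbutils.fs.put("dbfs:/init-scripts/install-gdal.sh", script, True)

The script path then goes into the cluster's init scripts configuration so it runs on every node at startup.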
by Slalom_Tobias (New Contributor III)
  • 7230 Views
  • 1 reply
  • 1 kudos

AttributeError: 'SparkSession' object has no attribute '_wrapped' when attempting CoNLL.readDataset()

I'm getting the error...
AttributeError: 'SparkSession' object has no attribute '_wrapped'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<command-2311820097584616> in <cell li...

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 1 kudos

This can happen on the 10.x version; try 7.3 LTS and share your observation. If it is not working there either, try to create an init script and load it onto your Databricks cluster, so whenever your machine comes up you get the advantage of that library, because some...

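For reference, the usual spark-nlp pattern the traceback points at (the path is hypothetical). The '_wrapped' AttributeError typically signals a spark-nlp release that does not match the cluster's Spark version, so aligning library and runtime versions, as suggested above, matters more than the code itself:

from sparknlp.training import CoNLL

# Succeeds once the spark-nlp version matches the runtime's Spark version.
training_data = CoNLL().readDataset(spark, "dbfs:/path/to/eng.train")
training_data.select("text").show(3, truncate=False)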
by rammy (Contributor III)
  • 1258 Views
  • 1 reply
  • 5 kudos

Not able to parse .doc extension files using Scala in a Databricks notebook?

I was able to parse .doc extension files in Java with the help of the POI libraries, but when converting the Java code into Scala I expected it to work with the same Java libraries; instead it shows the below erro...

[Attachments: error screenshot, jar dependencies]
Latest Reply
UmaMahesh1
Honored Contributor III
  • 5 kudos

Hi @Ramesh Bathini, in Python there is a docx module; I found it to work perfectly fine. Can you try using that? Documentation and examples can be found online. Cheers...

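A minimal sketch of that suggestion (the file path is hypothetical; note that python-docx reads .docx files, so a legacy .doc file would need converting first):

from docx import Document  # pip install python-docx

# Read each paragraph of the document as plain text.
doc = Document("/dbfs/FileStore/sample.docx")
for para in doc.paragraphs:
    print(para.text)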
by Snuki (New Contributor II)
  • 1361 Views
  • 4 replies
  • 3 kudos
Latest Reply
Harun
Honored Contributor
  • 3 kudos

I used to get this kind of error from the Databricks partner page; try manually searching for the course you are looking for. For example, when I used the link to navigate to the Data Lakehouse foundational course page, it showed the same error to me. I manu...

3 More Replies
by db-avengers2rul (Contributor II)
  • 6059 Views
  • 2 replies
  • 0 kudos

Resolved! delete files from the directory

Is there a way to delete files recursively using a command in notebooks? In the below directory I have many combinations of files like .txt, .png, .jpg, but I only want to delete files with .csv, for example dbfs:/FileStore/.csv*

Latest Reply
UmaMahesh1
Honored Contributor III
  • 0 kudos

Hi @Rakesh Reddy Gopidi, you can use the os module to iterate over a directory. By looping over the directory, you can check whether each file name ends with ".csv" using .endswith(".csv"). After fetching all the files, you can remove them (a sketch follows below). Hope this helps. Cheers.

1 More Replies
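A minimal sketch of that loop (the directory is taken from the post; the os.walk variant over the /dbfs fuse mount is an assumption for nested folders):

# Top level only: list the directory and remove files ending in .csv.
for f in dbutils.fs.ls("dbfs:/FileStore/"):
    if f.path.endswith(".csv"):
        dbutils.fs.rm(f.path)

# Recursive alternative via the /dbfs fuse mount.
import os
for root, _, files in os.walk("/dbfs/FileStore/"):
    for name in files:
        if name.endswith(".csv"):
            os.remove(os.path.join(root, name))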
by UmaMahesh1 (Honored Contributor III)
  • 3874 Views
  • 2 replies
  • 15 kudos

Resolved! Pyspark dataframe column comparison

I have a string column which is a concatenation of elements with a hyphen, as follows. Three values from that column look like this:
Row 1 - A-B-C-D-E-F
Row 2 - A-B-G-C-D-E-F
Row 3 - A-B-G-D-E-F
I want to compare 2 consecutive rows and create a column ...

Latest Reply
NhatHoang
Valued Contributor II
  • 15 kudos

Hi, I think you can follow these steps:
1. Use a window function to create a new column by shifting; then your df will look like this:
id  value          lag
1   A-B-C-D-E-F    null
2   A-B-G-C-D-E-F  A-B-C-D-E-F
3   A-B-G-D-E-F    ...

1 More Replies
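A sketch of that shift-and-compare approach in PySpark (the id ordering column is an assumption; lag needs a deterministic row order):

from pyspark.sql import Window, functions as F

w = Window.orderBy("id")  # assumes an id column fixes the row order
df = (df.withColumn("lag", F.lag("value").over(w))
        .withColumn("same_as_previous", F.col("value") == F.col("lag")))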
by cozos (New Contributor III)
  • 3530 Views
  • 6 replies
  • 5 kudos

What does "ScalaDriverLocal: User Code Compile error" mean?

22/11/30 01:45:31 WARN ScalaDriverLocal: loadLibraries: Libraries failed to be installed: Set()
22/11/30 01:50:14 INFO Utils: resolved command to be run: WrappedArray(getconf, PAGESIZE)
22/11/30 01:50:15 WARN ScalaDriverLocal: User Code Compile err...

Latest Reply
cozos
New Contributor III
  • 5 kudos

Hi @Werner Stinckens, thanks for the help. Unfortunately I don't think it's so simple: I do have a JAR that I submitted as a Databricks JAR task, and the JAR does have the org.apache.beam class. I guess what I'm trying to understand is what does Scal...

5 More Replies