Data Engineering
Forum Posts

sharonbjehome
by New Contributor
  • 742 Views
  • 1 replies
  • 1 kudos

Structured Streaming from MongoDB Atlas not parsing JSON correctly

Hi all, I have a table in MongoDB Atlas that I am trying to read continuously into memory, and then will eventually write that file out. However, when I look at the in-memory table it doesn't have the correct schema. Code here: from pyspark.sql.types impo...

Latest Reply
Debayan
Esteemed Contributor III
  • 1 kudos

Hi @sharonbjehome​, this has to be checked thoroughly via a support ticket. Did you follow https://docs.databricks.com/external-data/mongodb.html? Also, could you please check with MongoDB support? Was this working before?

dara
by New Contributor
  • 507 Views
  • 1 replies
  • 1 kudos

How to count DelayCategories?

I would like to know the count of each category in each year. When I run count, it doesn't work.

Latest Reply
Debayan
Esteemed Contributor III
  • 1 kudos

Hi @Dara Tourt​, when you say it does not work, what is the error? You can use the count aggregate function: https://docs.databricks.com/sql/language-manual/functions/count.html. Please let us know if this helps.

547284
by New Contributor II
  • 344 Views
  • 1 replies
  • 1 kudos

How to read CSVs from an S3 directory with different columns

I can read all CSVs under an S3 URI by doing: files = dbutils.fs.ls('s3://example-path') df = spark.read.options(header='true', encoding='iso-8859-1', dateFormat='yyyyMMdd', ignoreLeadingWhiteSpace='true', i...

Latest Reply
Debayan
Esteemed Contributor III
  • 1 kudos

Hi @Anthony Wang​, as of now I think that's the only way. Please refer to https://docs.databricks.com/external-data/csv.html#pitfalls-of-reading-a-subset-of-columns. Please let us know if this helps.

sage5616
by Valued Contributor
  • 4363 Views
  • 4 replies
  • 6 kudos

Saving PySpark standard out and standard error logs to cloud object storage

I am running my PySpark data pipeline code on a standard Databricks cluster. I need to save all Python/PySpark standard output and standard error messages into a file in an Azure Blob account. When I run my Python code locally I can see all messages i...

Latest Reply
sage5616
Valued Contributor
  • 6 kudos

This is the approach I am currently taking. It is documented here: https://stackoverflow.com/questions/62774448/how-to-capture-cells-output-in-databricks-notebook. from IPython.utils.capture import CapturedIO; capture = CapturedIO(sys.stdout, sys.st...

3 More Replies
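As a stdlib-only alternative to the `CapturedIO` approach above (a sketch, not the poster's exact setup), `contextlib.redirect_stdout` can collect printed output into a buffer whose contents can then be written to blob storage (e.g. via `dbutils.fs.put` on a mounted path):

```python
import contextlib
import io

# Capture everything printed inside the block into an in-memory buffer.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("pipeline step 1 done")
    print("pipeline step 2 done")

# The captured text could now be uploaded to the Azure Blob mount.
log_text = buf.getvalue()
print(log_text)
```

`contextlib.redirect_stderr` works the same way for standard error.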
flora2408
by New Contributor II
  • 566 Views
  • 2 replies
  • 2 kudos

I have passed the Fundamentals Accreditation but I haven't received my badge and certificate.

I have just passed the Fundamentals Accreditation but I don't have the badge.

Latest Reply
LandanG
Honored Contributor
  • 2 kudos

Hi @FRANCISCO LORA​, @Kaniz Fatma​ knows more than me, but you could probably submit a ticket to Databricks' Training Team here: https://help.databricks.com/s/contact-us?ReqType=training. They will get back to you shortly.

1 More Replies
Rahul_Tiwary
by New Contributor II
  • 3587 Views
  • 2 replies
  • 4 kudos

Getting error "java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException" while writing streaming data to Event Hubs. It works fine if I write to another Databricks table.

import org.apache.spark.sql._
import scala.collection.JavaConverters._
import com.microsoft.azure.eventhubs._
import java.util.concurrent._
import scala.collection.immutable._
import org.apache.spark.eventhubs._
import scala.concurrent.Future
import scala.c...

Latest Reply
Gepap
New Contributor II
  • 4 kudos

The dataframe to write needs to have the following schema:
Column | Type
---------------------------|------------------
body (required) | string or binary
partitionId (*optional) | string
partitionKey...

1 More Replies
196083
by New Contributor II
  • 805 Views
  • 2 replies
  • 2 kudos

iPython shell `set_next_input` not working

I'm running on 11.3 LTS. The expected behavior is shown in the Jupyter example; on Databricks it does nothing. You can also do `shell.set_next_input("test", replace=True)` to replace the current cell content, which also doesn't work on Databricks. `set_next_input` stores...

Latest Reply
Kaniz
Community Manager
  • 2 kudos

Hi @Ryan Eakman​, can you try DBR version 11.2?

1 More Replies
horatiug
by New Contributor III
  • 2169 Views
  • 8 replies
  • 3 kudos

Create workspace in Databricks deployed in Google Cloud using terraform

In the documentation (https://registry.terraform.io/providers/databricks/databricks/latest/docs, https://docs.gcp.databricks.com/dev-tools/terraform/index.html) I could not find how to provision Databricks workspaces in GCP. Only cre...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @horatiu guja​, does @Debayan Mukherjee​'s response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? Else, we can help you with more details.

7 More Replies
Arumugam
by New Contributor II
  • 1892 Views
  • 5 replies
  • 1 kudos

DLT pipeline failed to start due to "The execution contained at least one disallowed language"

Hi, I'm trying to set up a DLT pipeline. It's a basic pipeline for testing purposes, and I'm facing this issue while starting the pipeline; any help is appreciated. Code: @dlt.table(name="dlt_bronze_cisco_hardware") def dlt_cisco_networking_bronze_hardware(): ret...

Latest Reply
Vivian_Wilfred
Honored Contributor
  • 1 kudos

Hi @Arumugam Ramachandran​, it seems you have a Spark config set on your DLT job cluster that allows only Python and SQL code. Check the Spark config (cluster policy). In any case, the Python code should work. Verify the notebook's default language, ...

4 More Replies
sreedata
by New Contributor III
  • 2504 Views
  • 5 replies
  • 12 kudos

Resolved! Date field getting changed when reading from excel file to dataframe

The date field is getting changed while reading data from the source .xls file into the dataframe. In the source file all columns are strings, but I am not sure why the date column alone behaves differently. In the source file the date is 1/24/2022. In the dataframe it is ...

Latest Reply
Pradeep_Namani
New Contributor III
  • 12 kudos

Hi Team, @Merca Ovnerud​, I am also facing the same issue. Below is the code snippet I am using: df=spark.read.format("com.crealytics.spark.excel").option("header","true").load("/mnt/dataplatform/Tenant_PK/Results.xlsx"). I have a couple of date colum...

4 More Replies
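If the mismatch is because the date column arrives as an Excel serial number rather than a string (an assumption; the original output above is truncated), the serial can be converted back to a date by counting days from Excel's epoch. A pure-Python sketch:

```python
from datetime import date, timedelta

# Excel stores dates as serial day counts from 1899-12-30
# (the offset accounts for Excel's 1900 leap-year quirk).
EXCEL_EPOCH = date(1899, 12, 30)

def excel_serial_to_date(serial: int) -> date:
    return EXCEL_EPOCH + timedelta(days=serial)

print(excel_serial_to_date(44585))  # → 2022-01-24, i.e. 1/24/2022
```

The same arithmetic can be applied to a dataframe column with a UDF or with `date_add` against the `1899-12-30` base date.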
Anonymous
by Not applicable
  • 1020 Views
  • 2 replies
  • 0 kudos

Cluster Modes

Given that there are three different kinds of cluster modes, when is it appropriate to use each one?

Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

Standard clusters: A Standard cluster is recommended for a single user. Standard clusters can run workloads developed in any language: Python, SQL, R, and Scala.
High Concurrency clusters: A High Concurrency cluster is a managed cloud resource. The key be...

1 More Replies
am777
by New Contributor
  • 2565 Views
  • 1 replies
  • 1 kudos

I am new to Databricks and SQL. My CASE statement is not working and I cannot figure out why. Below is my code and the error message I'm receiving. Grateful for any and all suggestions. I'm trying to put yrs_to_mat into buckets.

SELECT *, yrs_to_mat, CASE WHEN < 3 THEN "under3" WHEN => 3 AND < 5 THEN "3to5" WHEN => 5 AND < 10 THEN "5to10" WHEN => 10 AND < 15 THEN "10to15" WHEN => 15 THEN "over15" ELSE null END AS maturity_bucket FROM mat...

Latest Reply
Pat
Honored Contributor III
  • 1 kudos

Hi @Anne-Marie Wood​, I think it's a more general SQL issue: you are not comparing any value to `< 3`. Each WHEN needs the column on the left of the comparison, and the operator is >=, not =>. It should be something like: WHEN yrs_to_mat < 3 THEN "under3". SELECT *, yrs_to_mat, CASE WHEN yrs_to_mat < 3 THEN "under3" WHEN yrs_to_mat >= 3 AND ...
