Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

sharonbjehome
by New Contributor
  • 2749 Views
  • 1 reply
  • 1 kudos

Structured Streaming from MongoDB Atlas not parsing JSON correctly

Hi all, I have a table in MongoDB Atlas that I am trying to read continuously into memory, and I will then write that file out eventually. However, when I look at the in-memory table it doesn't have the correct schema. Code here: from pyspark.sql.types impo...

Latest Reply
Debayan
Databricks Employee
  • 1 kudos

Hi @sharonbjehome​, this has to be checked thoroughly via a support ticket. Did you follow https://docs.databricks.com/external-data/mongodb.html? Also, could you please check with MongoDB support? Was this working before?

dara
by New Contributor
  • 1568 Views
  • 1 reply
  • 1 kudos

How to count DelayCategories?

I would like to know the count of each category in each year. When I run count, it doesn't work.

Latest Reply
Debayan
Databricks Employee
  • 1 kudos

Hi @Dara Tourt​, when you say it does not work, what is the error? You can run the count aggregate function: https://docs.databricks.com/sql/language-manual/functions/count.html Please let us know if this helps.

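The aggregation the reply points at is a grouped count: in PySpark, something like `df.groupBy("Year", "DelayCategory").count()` (the column names here are guesses, since the post's screenshot is not available). The same logic can be sketched in plain Python:

```python
from collections import Counter

# Made-up (year, delay_category) rows standing in for the dataframe.
rows = [
    (2021, "Short"), (2021, "Short"), (2021, "Long"),
    (2022, "Short"), (2022, "Long"),  (2022, "Long"),
]

# Equivalent of df.groupBy("Year", "DelayCategory").count() in PySpark, or
# SELECT Year, DelayCategory, count(*) FROM t GROUP BY Year, DelayCategory
counts = Counter(rows)
print(counts[(2021, "Short")])  # → 2
print(counts[(2022, "Long")])   # → 2
```

If `count` "doesn't work", the error message usually shows whether the problem is a missing `GROUP BY`, a misspelled column, or calling `count` on the wrong object.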
547284
by New Contributor II
  • 1440 Views
  • 1 reply
  • 1 kudos

How to read CSVs from an S3 directory with different columns

I can read all CSVs under an S3 URI by doing:
files = dbutils.fs.ls('s3://example-path')
df = spark.read.options(header='true', encoding='iso-8859-1', dateFormat='yyyyMMdd', ignoreLeadingWhiteSpace='true', i...

Latest Reply
Debayan
Databricks Employee
  • 1 kudos

Hi @Anthony Wang​, as of now I think that's the only way. Please refer to https://docs.databricks.com/external-data/csv.html#pitfalls-of-reading-a-subset-of-columns. Please let us know if this helps.

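Because Spark reads each CSV with its own header, one common pattern for mismatched columns is to read the files one at a time and union them by column name (in PySpark, `df1.unionByName(df2, allowMissingColumns=True)` on Spark 3.1+). A plain-Python sketch of that merge, with made-up data:

```python
import csv
import io

def read_union(csv_texts):
    """Read several CSVs whose headers differ and union them by column
    name, filling missing columns with None (mirrors PySpark's
    unionByName(..., allowMissingColumns=True))."""
    rows, columns = [], []
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            for col in row:
                if col not in columns:
                    columns.append(col)
            rows.append(row)
    # Normalize every row to the full, union-ed column set.
    return columns, [{c: r.get(c) for c in columns} for r in rows]

cols, data = read_union(["a,b\n1,2\n", "a,c\n3,4\n"])
print(cols)          # → ['a', 'b', 'c']
print(data[1]["c"])  # → 4 (as the string '4'; csv keeps values as text)
```

The per-file loop is slower than a single globbed `spark.read`, but it is the only way to keep columns that appear in some files and not others.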
sage5616
by Valued Contributor
  • 9237 Views
  • 3 replies
  • 6 kudos

Saving PySpark standard out and standard error logs to cloud object storage

I am running my PySpark data pipeline code on a standard Databricks cluster. I need to save all Python/PySpark standard output and standard error messages into a file in an Azure Blob account. When I run my Python code locally I can see all messages i...

Latest Reply
sage5616
Valued Contributor
  • 6 kudos

This is the approach I am currently taking. It is documented here: https://stackoverflow.com/questions/62774448/how-to-capture-cells-output-in-databricks-notebook
from IPython.utils.capture import CapturedIO
capture = CapturedIO(sys.stdout, sys.st...

2 More Replies
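The reply captures notebook output with IPython's `CapturedIO`; the same idea can be sketched with only the standard library, after which the captured text can be written wherever `dbutils` or a storage SDK can put it. A minimal sketch (the mount path in the comment is hypothetical):

```python
import contextlib
import io

# Capture everything the pipeline prints into an in-memory buffer —
# the stdlib analogue of the CapturedIO approach in the reply.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    print("starting pipeline")
    print("rows processed: 42")

log_text = buffer.getvalue()
print(log_text.splitlines()[0])  # → starting pipeline

# In a notebook the text could then be persisted, e.g. (hypothetical path):
# dbutils.fs.put("/mnt/logs/run.log", log_text, overwrite=True)
```

Note that `redirect_stdout` only sees Python-side prints; JVM-side Spark logs go to the cluster's driver logs, which Databricks can ship to storage via cluster log delivery instead.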
ajithkaythottil
by New Contributor
  • 989 Views
  • 0 replies
  • 0 kudos

usedlaptopcalicut.in

We are among the most reliable used laptop sellers in Calicut. A wide variety of laptops from different brands, suiting different budgets, is available with us. The used laptops are in good condition and cost a fraction of what a brand-new laptop would...

flora2408
by New Contributor II
  • 1735 Views
  • 1 reply
  • 2 kudos

I have passed the Fundamentals Accreditation but I haven't received my badge and certificate.

I have just passed the Fundamentals Accreditation and I don't have the badge.

Latest Reply
LandanG
Databricks Employee
  • 2 kudos

Hi @FRANCISCO LORA​, @Kaniz Fatma​ knows more than me, but you could probably submit a ticket to Databricks' Training Team here: https://help.databricks.com/s/contact-us?ReqType=training. They will get back to you shortly.

Rahul_Tiwary
by New Contributor II
  • 8899 Views
  • 1 reply
  • 4 kudos

Getting error "java.lang.NoSuchMethodError: org.apache.spark.sql.AnalysisException" while writing data to Event Hub for streaming. It works fine if I write to another Databricks table.

import org.apache.spark.sql._
import scala.collection.JavaConverters._
import com.microsoft.azure.eventhubs._
import java.util.concurrent._
import scala.collection.immutable._
import org.apache.spark.eventhubs._
import scala.concurrent.Future
import scala.c...

Latest Reply
Gepap
New Contributor II
  • 4 kudos

The dataframe to write needs to have the following schema:
Column | Type
----------------------------------------------
body (required) | string or binary
partitionId (*optional) | string
partitionKey...

196083
by New Contributor II
  • 2535 Views
  • 1 reply
  • 2 kudos

iPython shell `set_next_input` not working

I'm running on 11.3 LTS. Expected behavior vs. Databricks notebook behavior (it does nothing): you can also do `shell.set_next_input("test", replace=True)` to replace the current cell content, which also doesn't work on Databricks. `set_next_input` stores...

horatiug
by New Contributor III
  • 6660 Views
  • 8 replies
  • 3 kudos

Create workspace in Databricks deployed in Google Cloud using terraform

In the documentation (https://registry.terraform.io/providers/databricks/databricks/latest/docs and https://docs.gcp.databricks.com/dev-tools/terraform/index.html) I could not find how to provision Databricks workspaces in GCP. Only cre...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @horatiu guja​, does @Debayan Mukherjee​'s response answer your question? If yes, would you be happy to mark it as best so that other members can find the solution more quickly? Otherwise, we can help you with more details.

7 More Replies
Arumugam
by Databricks Partner
  • 6458 Views
  • 5 replies
  • 1 kudos

DLT pipeline failed to start due to "The execution contained at least one disallowed language"

Hi, I'm trying to set up a DLT pipeline. It's a basic pipeline for testing purposes, and I'm facing the issue while starting the pipeline. Any help is appreciated. Code:
@dlt.table(name="dlt_bronze_cisco_hardware")
def dlt_cisco_networking_bronze_hardware(): ret...

Latest Reply
Vivian_Wilfred
Databricks Employee
  • 1 kudos

Hi @Arumugam Ramachandran​, it seems like you have a Spark config set on your DLT job cluster that allows only Python and SQL code. Check the Spark config (cluster policy). In any case, the Python code should work. Verify the notebook's default language, ...

4 More Replies
sreedata
by New Contributor III
  • 6816 Views
  • 4 replies
  • 10 kudos

Resolved! Date field getting changed when reading from excel file to dataframe

The date field is getting changed while reading data from the source .xls file into the dataframe. In the source Excel file all columns are strings, but I am not sure why the date column alone behaves differently. In the source file the date is 1/24/2022; in the dataframe it is ...

Latest Reply
Pradeep_Namani
New Contributor III
  • 10 kudos

Hi Team, @Merca Ovnerud​, I am also facing the same issue. Below is the code snippet I am using:
df = spark.read.format("com.crealytics.spark.excel").option("header","true").load("/mnt/dataplatform/Tenant_PK/Results.xlsx")
I have a couple of date colum...

3 More Replies
Anonymous
by Not applicable
  • 5928 Views
  • 2 replies
  • 0 kudos

Cluster Modes

Given that there are three different kinds of cluster modes, when is it appropriate to use each one?

Latest Reply
User16826994223
Databricks Employee
  • 0 kudos

Standard clusters: A Standard cluster is recommended for a single user. Standard clusters can run workloads developed in any language: Python, SQL, R, and Scala.
High Concurrency clusters: A High Concurrency cluster is a managed cloud resource. The key be...

1 More Replies
am777
by New Contributor
  • 9325 Views
  • 1 reply
  • 1 kudos

I am new to Databricks and SQL. My CASE statement is not working and I cannot figure out why. Below is my code and the error message I'm receiving. Grateful for any and all suggestions. I'm trying to put yrs_to_mat into buckets.

SELECT *, yrs_to_mat, CASE WHEN < 3 THEN "under3" WHEN => 3 AND < 5 THEN "3to5" WHEN => 5 AND < 10 THEN "5to10" WHEN => 10 AND < 15 THEN "10to15" WHEN => 15 THEN "over15" ELSE null END AS maturity_bucket FROM mat...

Latest Reply
Pat
Esteemed Contributor
  • 1 kudos

Hi @Anne-Marie Wood​, I think it's a more general SQL issue: you are not comparing any value in `< 3`. It should be something like: WHEN X < 3 THEN "under3"
SELECT *, yrs_to_mat, CASE WHEN X < 3 THEN "under3" WHEN X => 3 AND <...

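The reply's point is that each WHEN needs its own complete comparison (`yrs_to_mat < 3`, not a bare `< 3`), and that standard SQL writes `>=` rather than `=>`. A plain-Python sketch of the corrected bucketing, with the corrected CASE expression in the docstring (the sample values are made up):

```python
def maturity_bucket(yrs):
    """Mirrors the corrected CASE expression:
    CASE WHEN yrs_to_mat < 3 THEN 'under3'
         WHEN yrs_to_mat >= 3 AND yrs_to_mat < 5 THEN '3to5'
         WHEN yrs_to_mat >= 5 AND yrs_to_mat < 10 THEN '5to10'
         WHEN yrs_to_mat >= 10 AND yrs_to_mat < 15 THEN '10to15'
         WHEN yrs_to_mat >= 15 THEN 'over15'
         ELSE NULL END AS maturity_bucket"""
    if yrs is None:
        return None          # ELSE NULL branch
    if yrs < 3:
        return "under3"
    if yrs < 5:
        return "3to5"
    if yrs < 10:
        return "5to10"
    if yrs < 15:
        return "10to15"
    return "over15"

print(maturity_bucket(4))    # → 3to5
print(maturity_bucket(20))   # → over15
```

Because CASE evaluates WHEN clauses in order, each later branch only needs the upper bound, exactly as the chained `if` statements above do.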
LukaszJ
by Contributor III
  • 5941 Views
  • 5 replies
  • 4 kudos

Resolved! Mount Azure Blob Storage with Cluster access control

Hello. I want to mount a container from Azure Blob Storage (it could be simple Blob Storage or Azure Data Lake Storage Gen2) and share it with one group. But I am not able to do it because I am using a cluster with Table Access Control. This is my cod...

Latest Reply
LukaszJ
Contributor III
  • 4 kudos

I have a good solution to the problem: I am using the Python library. There is some documentation. Topic to be closed. Best regards, Łukasz

4 More Replies
HashStudioz
by New Contributor
  • 798 Views
  • 0 replies
  • 0 kudos

Rs 485 IoT Gateway

An RS-485 IoT gateway is used for transmitting data from one device to another, usually far away, using a wired LAN or Wi-Fi. HashStudioz Technologies Inc. provides smart IoT gateway solutions for businesses like pharma industries. Our IoT gatewa...
