Data Engineering

Forum Posts

User16826992666
by Valued Contributor
  • 1237 Views
  • 3 replies
  • 0 kudos

If our company has an Enterprise Git server deployed on a private network, can we use Repos?

Our team would like to use the Repos functionality, but our security policy prevents outside traffic over public networks. Is there any way we can still use Repos?

Latest Reply
User16781336501
New Contributor III

Please contact your account team for some options that are in preview right now.

2 More Replies
Siddhesh2525
by New Contributor III
  • 4481 Views
  • 2 replies
  • 6 kudos

How to pass a dynamic value in Databricks

I have a separate column value defined in 13 different notebooks. I want to merge them into one Databricks notebook and pass the value as a dynamic parameter, so everything can run in a single Databricks notebook.

Latest Reply
Prabakar
Esteemed Contributor III

Hi @siddhesh Bhavar​, you can use widgets with the %run command to achieve this: https://docs.databricks.com/notebooks/widgets.html#use-widgets-with-run

%run /path/to/notebook $X="10" $Y="1"
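A minimal sketch of that pattern, assuming a child notebook at the hypothetical path /Shared/child that declares widgets X and Y:

    # --- Child notebook (/Shared/child, hypothetical path) ---
    dbutils.widgets.text("X", "0")   # declare text widgets with defaults
    dbutils.widgets.text("Y", "0")

    x = int(dbutils.widgets.get("X"))  # read the values passed by the caller
    y = int(dbutils.widgets.get("Y"))
    print(x + y)

    # --- Caller notebook: a %run cell fills the widgets ---
    # %run /Shared/child $X="10" $Y="1"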

1 More Replies
William_Scardua
by Valued Contributor
  • 4055 Views
  • 5 replies
  • 12 kudos

The database and tables disappear when I delete the cluster

Hi guys, I have a trial Databricks account. I realized that when I shut down the cluster, my databases and tables disappear. Is that expected, or is it because my account is a trial?

Latest Reply
Prabakar
Esteemed Contributor III

@William Scardua​ if it's an external Hive metastore or Glue catalog, you might be missing the configuration on the cluster: https://docs.databricks.com/data/metastores/index.html Also, as mentioned by @Hubert Dudek​, if it's a community edition then t...
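For reference, the linked page configures an external Hive metastore through cluster Spark config; a hedged sketch with placeholder connection values (the property keys come from that doc, the values are assumptions):

    spark.sql.hive.metastore.version 2.3.7
    spark.sql.hive.metastore.jars builtin
    spark.hadoop.javax.jdo.option.ConnectionURL jdbc:mysql://<metastore-host>:3306/metastore
    spark.hadoop.javax.jdo.option.ConnectionDriverName org.mariadb.jdbc.Driver
    spark.hadoop.javax.jdo.option.ConnectionUserName <user>
    spark.hadoop.javax.jdo.option.ConnectionPassword <password>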

4 More Replies
William_Scardua
by Valued Contributor
  • 5301 Views
  • 6 replies
  • 3 kudos

Resolved! How do you create a sandbox in your data environment?

Hi guys, how do you create a sandbox in your data environment? Any ideas? Azure/AWS + Data Lake + Databricks.

Latest Reply
missyT
New Contributor III

In a sandbox environment, you will find the Designer enabled. You can activate Designer by selecting the Designer icon on a page, or by choosing the Design menu item in the Settings menu.

5 More Replies
Chris_Shehu
by Valued Contributor III
  • 4388 Views
  • 2 replies
  • 10 kudos

Resolved! Receiving java.lang.ClassNotFoundException when trying to use the pyodbc connector to write files to SQL Server. Any alternatives or ways to fix this?

jdbcUsername = ********
jdbcPassword = ***************
server_name = "jdbc:sqlserver://***********:******"
database_name = "********"
url = server_name + ";" + "databaseName=" + database_name + ";"
table_name = "PatientTEST"
try:
    df.write \ ...

Latest Reply
Hubert-Dudek
Esteemed Contributor III

Please check the following code:

df.write.jdbc( url="jdbc:sqlserver://<host>:1433;database=<db>;user=<user>;password=<password>;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;driver=com.microsof...
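A fuller sketch of the same approach, using the df from the question and placeholder connection details (assumptions, not values from this thread); Spark's built-in JDBC writer goes through a JVM driver rather than pyodbc, sidestepping the ClassNotFoundException:

    # Placeholder connection details -- replace with your own.
    url = (
        "jdbc:sqlserver://<host>:1433;"
        "database=<db>;user=<user>;password=<password>;"
        "encrypt=true;trustServerCertificate=false;"
        "hostNameInCertificate=*.database.windows.net;loginTimeout=30"
    )

    (df.write
       .format("jdbc")
       .option("url", url)
       .option("dbtable", "PatientTEST")
       .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
       .mode("append")
       .save())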

1 More Replies
Ericsson
by New Contributor II
  • 1560 Views
  • 2 replies
  • 1 kudos

SQL week format issue: it's not showing the result as 01 (ww)

Hi folks, I have a requirement to show the week number in 'ww' format. Please see the code below: select weekofyear(date_add(to_date(current_date, 'yyyyMMdd'), +35)). Also, please refer to the screenshot for the result.

Latest Reply
Lauri
New Contributor III

You can use lpad() to achieve the 'ww' format.
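A minimal sketch of that suggestion, zero-padding the week number to two digits:

    # lpad(..., 2, '0') turns week 1 into '01'; week 10 stays '10'.
    display(spark.sql("""
        SELECT lpad(CAST(weekofyear(date_add(current_date, 35)) AS STRING), 2, '0') AS week_ww
    """))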

1 More Replies
Braxx
by Contributor II
  • 6424 Views
  • 12 replies
  • 2 kudos

Resolved! Validate the schema of JSON in a column

I have a dataframe like below with col2 as key-value pairs. I would like to filter col2 to only the rows with a valid schema. There could be many pairs, sometimes fewer, sometimes more, and this is fine as long as the structure is fine. Nulls in col...
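One common approach (an assumption about the shape of the accepted solution, which is truncated here): parse col2 with from_json against the expected schema and keep only the rows that parse.

    from pyspark.sql import functions as F
    from pyspark.sql.types import MapType, StringType

    # Hypothetical schema: col2 holds a JSON object of string key-value pairs.
    kv_schema = MapType(StringType(), StringType())

    # from_json yields NULL when a string does not match the schema,
    # so invalid rows can simply be filtered out.
    valid_rows = df.filter(F.from_json(F.col("col2"), kv_schema).isNotNull())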

Latest Reply
Anonymous
Not applicable

@Bartosz Wachocki​ - Thank you for sharing your solution and marking it as best.

11 More Replies
pjp94
by Contributor
  • 3251 Views
  • 13 replies
  • 5 kudos

Pyspark vs Pandas

I would like to better understand the advantage of writing a Python notebook in PySpark vs pandas. Does the entire notebook need to be written in PySpark to realize the performance benefits? I currently have a script using pandas for all my transformat...

Latest Reply
cconnell
Contributor II

You can use the free Community Edition of Databricks, which includes the 10.0 runtime.
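Runtime 10.0 ships Spark 3.2, which includes the pandas API on Spark, so a pandas script can often be ported with little more than an import change; a sketch with a hypothetical file path:

    import pyspark.pandas as ps  # pandas API on Spark (Spark 3.2+)

    pdf = ps.read_csv("/mnt/data/events.csv")          # hypothetical path
    summary = pdf.groupby("category")["amount"].sum()  # pandas-style API, Spark execution
    print(summary.head())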

12 More Replies
mangeldfz
by New Contributor III
  • 5183 Views
  • 8 replies
  • 8 kudos

Resolved! mlflow RESOURCE_ALREADY_EXISTS

I tried to log some runs in my Databricks workspace and I'm facing the error RESOURCE_ALREADY_EXISTS when I try to log any run. I could replicate the error with the following code:

import mlflow
import mlflow.sklearn
from mlflow.tracking impo...

Latest Reply
Prabakar
Esteemed Contributor III

Hi @Miguel Ángel Fernández​, it's not recommended to "link" the Databricks and AML workspaces, as we are seeing more problems. You can refer to the instructions below for using MLflow with AML: https://docs.microsoft.com/en-us/azure/machine-l...
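As an aside, RESOURCE_ALREADY_EXISTS is typically raised when code tries to create an experiment whose name is already taken; mlflow.set_experiment reuses an existing experiment instead of recreating it. A sketch with a hypothetical experiment path:

    import mlflow

    # set_experiment() reuses the experiment if it exists and creates it
    # only when missing, avoiding RESOURCE_ALREADY_EXISTS.
    mlflow.set_experiment("/Users/someone@example.com/demo-experiment")  # hypothetical

    with mlflow.start_run():
        mlflow.log_param("alpha", 0.5)
        mlflow.log_metric("rmse", 0.27)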

7 More Replies
sarvesh
by Contributor III
  • 2735 Views
  • 4 replies
  • 3 kudos

Read percentage values in Spark (no casting)

I have an xlsx file which has a single column, percentage, with values:

30%, 40%, 50%, -10%, 0.00%, 0%, 0.10%, 110%, 99.99%, 99.98%, -99.99%, -99.98%

When I read this using Apache Spark, the output I get is:

|percentage|
+----------+
|       0.3|
|       0.4|
|       0.5|
|      -0.1|
|       0.0|
...

Latest Reply
-werners-
Esteemed Contributor III

Affirmative. This is how Excel stores percentages; what you see is just cell formatting. Databricks notebooks do not (yet?) have the possibility to format the output. But it is easy to use a BI tool on top of Databricks, where you can change the for...
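If the rendering does have to happen in the notebook, one option is to format the fraction back into a percentage string yourself; a sketch, assuming a double column named percentage:

    from pyspark.sql import functions as F

    # Turn the stored fraction (0.3) back into display text ('30.00%').
    formatted = df.withColumn(
        "percentage_display",
        F.format_string("%.2f%%", F.col("percentage") * 100)
    )
    formatted.show()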

3 More Replies
sarvesh
by Contributor III
  • 19802 Views
  • 18 replies
  • 6 kudos

Resolved! java.lang.OutOfMemoryError: GC overhead limit exceeded. [ solved ]

Solution: I didn't need to add any executor or driver memory; all I had to do in my case was add .option("maxRowsInMemory", 1000). Before, I couldn't even read a 9 MB file; now I can read a 50 MB file without any error.

{ val df = spark.read .f...

Latest Reply
Hubert-Dudek
Esteemed Contributor III

Can you try without .set("spark.driver.memory","4g") and .set("spark.executor.memory", "6g")? It clearly shows that there is not 4 GB free on the driver and 6 GB free on the executor (you can also share the cluster hardware details). You also cannot allocate 100% for ...
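For context, maxRowsInMemory is an option of the spark-excel reader (an assumption based on the option name; the com.crealytics:spark-excel library must be attached to the cluster), which streams the sheet instead of buffering it whole:

    # Requires the com.crealytics:spark-excel library on the cluster.
    df = (spark.read
          .format("com.crealytics.spark.excel")
          .option("header", "true")
          .option("maxRowsInMemory", 1000)  # stream rows instead of loading the whole sheet
          .load("/mnt/data/report.xlsx"))   # hypothetical path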

17 More Replies
SailajaB
by Valued Contributor III
  • 9953 Views
  • 9 replies
  • 6 kudos

How to send a list as a parameter in a Databricks notebook task

Hi, how can we pass a list as a parameter to a Databricks notebook, so the notebook can run in parallel for a list of values? Thank you.

Latest Reply
Hubert-Dudek
Esteemed Contributor III

Another way (in Databricks you can achieve everything in many ways) is to encode the list using the json library:

import json
print(type(json.dumps([1, 2, 3])))  # <class 'str'>
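Building on that, a sketch of the full round trip: JSON-encode the list, hand it to each child run, and fan out with a thread pool (the notebook path and parameter name are hypothetical):

    import json
    from concurrent.futures import ThreadPoolExecutor

    values = [1, 2, 3]

    def run_child(v):
        # dbutils.notebook.run(path, timeout_seconds, arguments)
        return dbutils.notebook.run("/Shared/child", 600, {"payload": json.dumps([v])})

    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(run_child, values))

    # Inside /Shared/child: payload = json.loads(dbutils.widgets.get("payload"))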

8 More Replies
WillJMSFT
by New Contributor III
  • 1920 Views
  • 6 replies
  • 7 kudos

Resolved! How to import SqlDWRelation from com.databricks.spark.sqldw

Hello, All - I'm working on a project using the SQL DataWarehouse connector built into Databricks (https://docs.databricks.com/data/data-sources/azure/synapse-analytics.html). From there, I'm trying to extract information from the logical plan / logi...

Latest Reply
WillJMSFT
New Contributor III

@Werner Stinckens​  Thanks for the reply! The SQL DW Connector itself is working just fine and I can retrieve the results from the SQL DW. I'm trying to extract the metadata (i.e. the Server, Database, and Table name) from the logical plan (or throu...
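One way to peek at the analyzed plan from PySpark is through the DataFrame's private _jdf handle; this is internal, unsupported API, so treat it strictly as a sketch:

    # _jdf exposes the underlying JVM Dataset; queryExecution() is internal API.
    plan = df._jdf.queryExecution().analyzed().toString()
    print(plan)  # relation nodes usually name the underlying source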

5 More Replies
Dileep_Vidyadar
by New Contributor III
  • 1946 Views
  • 7 replies
  • 5 kudos

Not able to create a cluster on Community Edition for 3-4 days

I have been learning PySpark on Community Edition for about a month. It's been great, until the last 3-4 days, when I started facing issues creating a cluster. Sometimes it takes 30 to 60 minutes to create a cluster, and sometimes it doesn't even create a cl...

Latest Reply
Anonymous
Not applicable

@Dileep Vidyadara​  - If your question was fully answered by @Hubert Dudek​, would you be happy to mark his answer as best?

6 More Replies
All_Users
by New Contributor II
  • 865 Views
  • 0 replies
  • 1 kudos

How do you upload a folder of csv files from your local machine into the Databricks platform?

I am working with time-series data, where each day is a separate csv file. I have tried to load a zip file to FileStore but then cannot use the magic command to unzip, most likely because it is in the tmp folder. Is there a workaround for this proble...
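One workaround, sketched under the assumption that the zip was uploaded to /FileStore: copy the archive to the driver's local disk, unzip it with Python's zipfile module, and copy the CSVs back into DBFS for Spark to read:

    import zipfile

    # Hypothetical locations -- adjust to where your upload landed.
    dbutils.fs.cp("dbfs:/FileStore/daily_csvs.zip", "file:/tmp/daily_csvs.zip")

    with zipfile.ZipFile("/tmp/daily_csvs.zip") as zf:
        zf.extractall("/tmp/daily_csvs")

    # Copy the extracted folder back into DBFS so Spark can read it.
    dbutils.fs.cp("file:/tmp/daily_csvs", "dbfs:/FileStore/daily_csvs", recurse=True)

    df = spark.read.option("header", "true").csv("dbfs:/FileStore/daily_csvs")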
