Data Engineering

Forum Posts

shaunangcx
by New Contributor II
  • 1790 Views
  • 3 replies
  • 0 kudos

Resolved! Command output disappearing (Not sure what's the root cause)

I have a workflow that runs every month and creates a new notebook containing the outputs from the main notebook. However, after some time, the outputs in the created notebook disappear. Is there any way I can retain the outputs?

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Shaun Ang: There are a few possible reasons why the outputs from the created notebook might be disappearing. Notebook permissions: it's possible that the user or service account running the workflow does not have permission to write to the destinati...

2 More Replies
Shubham039
by New Contributor III
  • 5663 Views
  • 8 replies
  • 6 kudos

Databricks notebook ipywidgets not working as expected (button click issue)

I am working on Azure Databricks (IDE). I wanted to create a button that takes a text value as input; on the button click, a function should run that prints the entered value. For that I created this code: from IPython.display import disp...
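
A hedged reconstruction of the pattern described above (widget names are illustrative; on Databricks, prints from a callback often only render when captured in an ipywidgets Output widget, which is one known cause of this issue):

import ipywidgets as widgets
from IPython.display import display

text = widgets.Text(description="Value:")
button = widgets.Button(description="Print value")
out = widgets.Output()  # callback output is captured here so it actually renders

def on_click(b):
    with out:
        print(text.value)

button.on_click(on_click)
display(text, button, out)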

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Shubham Ringne, hope everything is going great. Just wanted to check in to see if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us s...

7 More Replies
lugger1
by New Contributor III
  • 1598 Views
  • 1 reply
  • 1 kudos

Resolved! What is the best way to use credentials for API calls from databricks notebook?

Hello, I have a Databricks account on Azure, and the goal is to compare different image-tagging services from Azure, GCP, and AWS via the corresponding API calls from a Python notebook. I have problems with the GCP Vision API calls, specifically with credentials...

Latest Reply
lugger1
New Contributor III
  • 1 kudos

OK, here is a trick: in my case, the file with the GCP credentials is stored in the notebook workspace storage, which is not visible to the os.environ() command. So the solution is to read the content of this file and save it to the cluster storage attached to the no...
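
A minimal sketch of that trick (file paths are hypothetical): read the credentials file from the workspace folder, copy it to cluster-local disk, and point GOOGLE_APPLICATION_CREDENTIALS at the copy.

import os

with open("gcp_credentials.json") as src:            # file in the notebook's workspace folder
    creds = src.read()
with open("/tmp/gcp_credentials.json", "w") as dst:  # cluster-local storage
    dst.write(creds)

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/tmp/gcp_credentials.json"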

Diego_MSFT
by New Contributor II
  • 2521 Views
  • 1 reply
  • 4 kudos

Automating the re-run of a job (with several tasks) // automating the notification for specific failed tasks after retrying // error handling in an Azure Data Factory pipeline with a Databricks notebook

Hi Databricks experts: I'm using Databricks on Azure. I'd like to understand the following: 1) whether there is a way of automating the re-run of specific failed tasks in a job (with several tasks); for example, if I have 4 tasks, and tasks 1 and 2 h...

Latest Reply
Lindberg
New Contributor II
  • 4 kudos

You can use "retries". In Workflows, select your job, then the task, and configure retries in the options below. You can also see more options at: https://learn.microsoft.com/pt-br/azure/databricks/dev-tools/api/2.0/jobs?source=recommendations
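
For reference, a hedged sketch of the task-level retry fields exposed by the Jobs API (values are illustrative, not a recommendation):

task_settings = {
    "task_key": "task_1",
    "max_retries": 3,                    # re-run a failed task up to 3 times
    "min_retry_interval_millis": 60000,  # wait one minute between attempts
    "retry_on_timeout": True,            # also retry when the task times out
}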

Data_Engineer3
by Contributor II
  • 3641 Views
  • 4 replies
  • 5 kudos

How can I use the same Spark session from one notebook in another notebook in Databricks

I want to use the same Spark session that was created in one notebook in another notebook within the same environment. For example, if some (variable) object is initialized in the first notebook, I need to use the same object in t...

Latest Reply
Manoj12421
Valued Contributor II
  • 5 kudos

You can use %run followed by the location of the notebook: %run "/folder/notebookname"
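
A minimal sketch of how that plays out, assuming the called notebook defines an object such as shared_df (hypothetical name); %run executes the target in the same session, so its variables become available in the caller:

# Cell 1 of the caller notebook (%run should sit in its own cell)
%run "/folder/notebookname"

# Cell 2: objects defined in the called notebook are now in scope
print(shared_df.count())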

3 More Replies
Ligaya
by New Contributor II
  • 18522 Views
  • 3 replies
  • 2 kudos

ValueError: not enough values to unpack (expected 2, got 1)

Code: Writer.jdbc_writer("Economy", economy, conf=CONF.MSSQL.to_dict(), modified_by=JOB_ID['Economy']). The problem arises when I try to run the code in the specified Databricks notebook; an error of "ValueError: not enough values to unpack (expected 2, ...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

@Jillinie Park: The error message you are seeing ("ValueError: not enough values to unpack (expected 2, got 1)") occurs when you try to unpack an iterable object into more variables than it yields. In your case, the error is happening on this line of code: schem...
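
For illustration, a tiny standalone example that reproduces the error and one way to guard against it (the strings are made up):

parts = "host=db".split(";")    # yields only 1 element
# a, b = parts                  # would raise: not enough values to unpack (expected 2, got 1)
a, b = parts if len(parts) == 2 else (parts[0], None)  # guard with a length check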

2 More Replies
Anonymous
by Not applicable
  • 1588 Views
  • 1 reply
  • 1 kudos

Testing framework using Databricks Notebook and Pytest.

Hi friends, I am designing a testing framework using Databricks and pytest. I am currently stuck on report generation: the report is generated blank, with only the default parameters. For example: <testsuites><testsuite name="pytest" errors="0" failures="0" skippe...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

@Vijaya Palreddy: There are several testing frameworks available for data testing that you can consider using with Databricks and pytest. Great Expectations: Great Expectations is an open-source framework that provides a simple way to create and main...
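
As a hedged sketch of the report-generation side (paths are hypothetical): pytest can be driven from a notebook cell and asked for a JUnit-style XML report.

import pytest

retcode = pytest.main([
    "/dbfs/tests",                           # directory containing the test files
    "--junitxml=/dbfs/reports/report.xml",   # write a JUnit-style XML report here
])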

Michael_Marquis
by New Contributor II
  • 1774 Views
  • 1 reply
  • 3 kudos

How can I change the font size on ipywidgets in a Databricks notebook?

I'm trying to create a simple UI for a notebook using the recently implemented support for ipywidgets, but I'm having a hard time figuring out how to change certain style attributes like font size and color in widgets that should accept those style p...

Latest Reply
Miguel_Suarez
New Contributor III
  • 3 kudos

Hey Michael, the example you're trying to run is for ipywidgets 8; we currently have ipywidgets 7, which has fewer button customizations. I believe the only font customization available in 7 is "font_weight". I hope this helps. Best, Miguel
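
A minimal sketch under that constraint (ipywidgets 7, as described in the reply above): font_weight is settable on the button style, and button_color covers color.

import ipywidgets as widgets
from IPython.display import display

button = widgets.Button(description="Run")
button.style.font_weight = "bold"        # the one font attribute exposed in v7
button.style.button_color = "lightblue"  # background color is also settable
display(button)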

Ajay-Pandey
by Esteemed Contributor III
  • 1874 Views
  • 3 replies
  • 5 kudos

Support of running multiple cells at a time in Databricks notebook

Hi all, Databricks notebooks now support parallel runs of commands in a single notebook, which will help run ad hoc queries simultaneously without creating a separate notebook. Once you run...

Latest Reply
Anonymous
Not applicable
  • 5 kudos

Hi @Ajay Pandey, hope everything is going great. Just wanted to check in to see if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so w...

2 More Replies
ahmedE_
by New Contributor II
  • 2463 Views
  • 6 replies
  • 0 kudos

How to install the AI library aif360 in a Databricks notebook

Hello, I'm trying to install a library called aif360 in the Databricks notebook. However, I get an error that tkinter is not installed. I tried installing tk and tk-tools, but the issue still remains. Any idea what solution we can use? I also tried ins...

(attached image: "no way to install tkinter")
Latest Reply
Vartika
Moderator
  • 0 kudos

Hi @Ahmed Elghareeb, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking "Select As Best" if it does. This will ...

5 More Replies
kll
by New Contributor III
  • 3042 Views
  • 1 reply
  • 1 kudos

Resolved! OSError: Invalid argument when attempting to save a pandas dataframe to csv

I am attempting to save a pandas DataFrame as CSV to a directory I created in the Databricks workspace, or in the `cwd`:
import pandas as pd
import os
df.to_csv("data.csv", index=False)
df.to_csv(str(os.getcwd()) + "/data.csv", index=False)
...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 1 kudos

Hi @Keval Shah, you can save your dataframe to CSV in DBFS storage. Please refer to the code below; it might help you:
df = pd.read_csv(StringIO(data), sep=',')
# print(df)
df.to_csv('/dbfs/FileStore/ajay/file1.txt')
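
A self-contained version of that suggestion, assuming a cluster where DBFS is mounted at /dbfs (the sample data is made up):

import pandas as pd
from io import StringIO

data = "a,b\n1,2\n3,4"                     # hypothetical sample data
df = pd.read_csv(StringIO(data), sep=",")
# Writing under /dbfs/... targets DBFS rather than the workspace tree,
# which avoids the OSError raised for workspace paths.
df.to_csv("/dbfs/FileStore/ajay/file1.csv", index=False)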

irfanaziz
by Contributor II
  • 2357 Views
  • 1 replies
  • 3 kudos

TimestampFormat issue

The Databricks notebook failed yesterday due to a timestamp format issue. Error: "SparkUpgradeException: You may get a different result due to the upgrading of Spark 3.0: Fail to parse '2022-08-10 00:00:14.2760000' in the new parser. You can set spark.s...

Latest Reply
searchs
New Contributor II
  • 3 kudos

You must have solved this issue by now, but for the sake of those who encounter it again, here's the solution that worked for me:
spark.sql("set spark.sql.legacy.timeParserPolicy=LEGACY")
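
If I'm not mistaken, the same setting can also be applied through the Spark conf API:

spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")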

juned
by New Contributor III
  • 6660 Views
  • 4 replies
  • 9 kudos

Resolved! Databricks CLI configure (using AAD-TOKEN) in the Databricks notebook `%sh` mode

Hello everyone, I am trying to set up the Databricks CLI by referring to the Databricks CLI documentation. When I set it up using a Personal Access Token, it works fine and I am able to access the workspace and fetch the results from the same workspace in D...
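
One hedged workaround sketch from a notebook: the legacy CLI reads ~/.databrickscfg, so the file can be written directly, with an AAD access token placed in the token field (the host and token below are placeholders):

config = """[DEFAULT]
host = https://adb-1234567890123456.7.azuredatabricks.net
token = <aad-access-token>
"""
with open("/root/.databrickscfg", "w") as f:  # home directory of the notebook user
    f.write(config)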

Latest Reply
Anonymous
Not applicable
  • 9 kudos

Hi @Juned Mala, hope all is well! Just wanted to check in to see if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

3 More Replies
AmineHY
by Contributor
  • 14362 Views
  • 4 replies
  • 1 kudos

Resolved! How to get rid of "Command result size exceeds limit"

I am working in a Databricks notebook and trying to display a map using Folium, and I keep getting this error: "Command result size exceeds limit: Exceeded 20971520 bytes (current = 20973510)". How can I increase the memory limit? I already reduced the...

Latest Reply
labromb
Contributor
  • 1 kudos

Hi, I have the same problem with keplergl, and the save-to-disk option, whilst helpful, isn't super practical... So how does one plot large datasets in Kepler? Any thoughts welcome.
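
One hedged workaround for the result-size cap: render the map to a file instead of inline (the DBFS path is illustrative), then download or serve the HTML rather than displaying it in the cell.

import folium

m = folium.Map(location=[48.85, 2.35], zoom_start=12)
m.save("/dbfs/FileStore/map.html")  # written to DBFS; avoids the inline output limit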

3 More Replies