Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Spyro_3
by New Contributor
  • 594 Views
  • 1 reply
  • 0 kudos

Free trial account

Hi, can I create a Unity Catalog using a free trial account?

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 0 kudos

Hi @Spyro_3, yes, you should be able to create a Unity Catalog. The trial version allows you to create a premium workspace, which is required for Unity Catalog. Note that to set up the metastore you also need the Global Administrator permission (if we are ...

yeungcase
by New Contributor III
  • 3824 Views
  • 7 replies
  • 3 kudos

Writing a single huge dataframe into Azure SQL Database using JDBC

Hi all, I am currently trying to read data from a materialized view as a single dataframe, which contains around 10M rows, and then write it into an Azure SQL database. However, I don't see the Spark job making any progress even after an hour has passed. I have al...
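This thread's accepted answer isn't shown here, but a minimal sketch of a batched, partitioned JDBC write to Azure SQL is below. The server, database, table, secret scope and keys are placeholders, `df` stands for the dataframe read from the materialized view, and the repartition count and batchsize usually need tuning.

```python
# Hedged sketch only: batched, parallel JDBC write to Azure SQL.
# Server, database, table name, and secret scope/keys are placeholders.
jdbc_url = (
    "jdbc:sqlserver://myserver.database.windows.net:1433;"
    "database=mydb;encrypt=true;loginTimeout=30;"
)

(
    df.repartition(16)                               # caps the number of parallel connections
      .write
      .format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", "dbo.target_table")
      .option("user", dbutils.secrets.get("my-scope", "sql-user"))
      .option("password", dbutils.secrets.get("my-scope", "sql-password"))
      .option("batchsize", 10000)                    # rows per JDBC batch insert
      .mode("append")
      .save()
)
```

A dataframe with a single huge partition writes over one connection, so repartitioning before the write is often the first thing worth checking.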

Latest Reply
Rishabh_Tiwari
Databricks Employee
  • 3 kudos

Hi @yeungcase, thank you for reaching out to our community! We're here to help you. To ensure we provide you with the best support, could you please take a moment to review the responses and choose the one that best answers your question? Your feedba...

6 More Replies
himanmon
by New Contributor III
  • 1839 Views
  • 3 replies
  • 2 kudos

Resolved! When does the cost of JOB COMPUTE start to be calculated?

I'm trying to run a workflow with job compute. Job compute needs to be pending for about 5 to 7 minutes before executing the workflow. I think it takes time to find a suitable instance in the cloud, configure the environment, install libraries, etc. An...

Latest Reply
jacovangelder
Honored Contributor
  • 2 kudos

There is a big discrepancy to take note of here. It is the difference between Databricks job costs (DBUs) and billing for the cloud provider (the actual VM doing the work). Billing for the Databricks job starts when the Spark context is being ini...

2 More Replies
SHS
by New Contributor
  • 1120 Views
  • 1 reply
  • 1 kudos

Error with: %run ../Includes/Classroom-Setup-SQL

Hi guys, I just started the ASP 2.1L Spark SQL lab and I get this error when I run the first setup command: %run ../Includes/Classroom-Setup-SQL. The execution of this command did not finish successfully. Python interpreter will be restarted. Python inte...

Latest Reply
cibele
New Contributor II
  • 1 kudos

You can create a cluster that is compatible with the notebooks; then it will work.

OnerFusion-AI
by New Contributor
  • 49807 Views
  • 3 replies
  • 3 kudos

Resolved! How to import Excel on Databricks

To import an Excel file into Databricks, you can follow these general steps:
1. **Upload the Excel File**:
- Go to the Databricks workspace or cluster where you want to work.
- Navigate to the location where you want to upload the Excel file.
- Click on ...
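For the single-file case, a minimal sketch (my own illustration, not from the post above) is to read the file with pandas and convert it to a Spark DataFrame. The volume path and sheet name are placeholders, and openpyxl may need to be installed first (e.g. with %pip install openpyxl).

```python
# Hypothetical sketch: read one uploaded .xlsx with pandas, then hand it to Spark.
# The path and sheet name are placeholders; openpyxl must be available on the cluster.
import pandas as pd

pdf = pd.read_excel(
    "/Volumes/my_catalog/my_schema/my_volume/sales.xlsx",  # placeholder upload location
    sheet_name="Sheet1",
)
df = spark.createDataFrame(pdf)
display(df)
```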

Latest Reply
AhmedAlnaqa
Contributor
  • 3 kudos

The question here is how to read multiple Excel files based on a path. The mentioned solution interacts with one file only; do we have the ability to read all the Excel files in a folder?
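For the multi-file question, a rough sketch (an assumption, not a confirmed answer from this thread) is to glob the folder and concatenate with pandas before converting to Spark. The directory path is a placeholder and the files are assumed to share the same columns.

```python
# Hypothetical sketch: read every .xlsx in a folder and combine into one Spark DataFrame.
# Assumes the files share a schema; the directory path is a placeholder.
import glob
import pandas as pd

paths = sorted(glob.glob("/Volumes/my_catalog/my_schema/my_volume/excel_files/*.xlsx"))
pdf = pd.concat((pd.read_excel(p) for p in paths), ignore_index=True)
df = spark.createDataFrame(pdf)
```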

2 More Replies
LLLMMM
by New Contributor III
  • 1307 Views
  • 3 replies
  • 2 kudos

Resolved! Try Databricks sign up failed

Hi, I am trying to use Databricks with the community edition. However, when I tried to create an account, the sign-up failed after I completed the puzzle. 

Latest Reply
lovinchanglei
New Contributor II
  • 2 kudos

Thank you so much for sharing, this is really helpful.

2 More Replies
YS1
by Contributor
  • 5715 Views
  • 6 replies
  • 0 kudos

Streaming xls files Using Auto Loader

Hello, is there a way to read .xls files using Auto Loader, or is there any workaround, since Excel files are not supported by Auto Loader per the following document? https://docs.databricks.com/en/ingestion/auto-loader/options.html Thanks.

Get Started Discussions
auto_loader
streaming
Latest Reply
Sicnarf
New Contributor II
  • 0 kudos

I am facing the same issue: I have a stream that I'd like to use Auto Loader on with an .xlsx file. Is there any update on workarounds for this issue?
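One workaround sometimes used (an assumption on my part, not something confirmed in this thread) is a small conversion step that turns landed .xlsx files into Parquet, which Auto Loader does support. The paths are placeholders, pyarrow/openpyxl are assumed to be installed, and the files are assumed to share one schema.

```python
# Hypothetical workaround sketch: convert landed .xlsx files to Parquet so that a
# downstream Auto Loader stream (cloudFiles format "parquet") can ingest them.
# Paths are placeholders; files are assumed to share one schema.
import glob
import os
import pandas as pd

src_dir = "/Volumes/my_catalog/my_schema/landing_xlsx"
dst_dir = "/Volumes/my_catalog/my_schema/landing_parquet"

for path in glob.glob(os.path.join(src_dir, "*.xlsx")):
    out = os.path.join(dst_dir, os.path.basename(path).replace(".xlsx", ".parquet"))
    if not os.path.exists(out):                      # skip files already converted
        pd.read_excel(path).to_parquet(out, index=False)
```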

5 More Replies
divyasri1504
by New Contributor
  • 1804 Views
  • 0 replies
  • 0 kudos

File Not Found Error while reading pickle file

Hello there, I have a pickle file uploaded to a mounted location in Databricks (/dbfs/mnt/blob/test.pkl). I am trying to read this pickle file using the Python snippet below:
with open(path + "test.pkl", "rb") as f:
    bands = pickle.load(f)
But it t...
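A common cause here (a guess, since the post is truncated and has no replies) is the path itself: Python's built-in open() needs the /dbfs FUSE prefix rather than dbfs:/, and path + "test.pkl" silently builds the wrong path if path has no trailing slash. A minimal sketch:

```python
# Hypothetical sketch: read a pickle from a DBFS mount via the /dbfs FUSE path.
# os.path.join avoids the missing-slash problem; file names follow the post above.
import os
import pickle

path = "/dbfs/mnt/blob"          # FUSE path, not "dbfs:/mnt/blob"
with open(os.path.join(path, "test.pkl"), "rb") as f:
    bands = pickle.load(f)
```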

vinitkhandelwal
by New Contributor III
  • 6482 Views
  • 2 replies
  • 0 kudos

Resolved! Using private package, getting ERROR: No matching distribution found for myprivatepackage

My project's setup.py file:
from setuptools import find_packages, setup
PACKAGE_REQUIREMENTS = ["pyyaml", "confluent-kafka", "fastavro", "python-dotenv", "boto3", "pyxlsb", "aiohttp", "myprivatepackage"]
LOCAL_REQUIREMENTS = ["delta-spark", "scikit-lea...

Get Started Discussions
dbx
package
private
python
Latest Reply
Debayan
Databricks Employee
  • 0 kudos

Hi, does this look like a dependency error? Are all the dependencies packed in the whl? Also, could you please confirm that all the limitations are satisfied? Refer to: https://docs.databricks.com/en/compute/access-mode-limitations.html
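For reference, a minimal sketch of a setup.py along the lines of the original post (the project name and version are placeholders): the private package is just another entry in install_requires, so it has to be resolvable when the wheel is installed, for example from a private index supplied via pip's --extra-index-url, or by uploading the private package's own wheel to the cluster as well.

```python
# Hypothetical sketch of a setup.py; the dependency list follows the original post,
# while the project name and version are placeholders. "myprivatepackage" must be
# resolvable at install time (private index or a separately uploaded wheel).
from setuptools import find_packages, setup

PACKAGE_REQUIREMENTS = [
    "pyyaml", "confluent-kafka", "fastavro", "python-dotenv",
    "boto3", "pyxlsb", "aiohttp", "myprivatepackage",
]

setup(
    name="my_project",
    version="0.1.0",
    packages=find_packages(),
    install_requires=PACKAGE_REQUIREMENTS,
)
```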

1 More Replies
ArvindDige
by New Contributor II
  • 3320 Views
  • 2 replies
  • 0 kudos

Resolved! Is DBFS going to be deprecated?

Is DBFS going to be deprecated? I am using the /dbfs/FileStore/tables/ location, where a jar file is stored, and I am copying this jar file to the /databricks/jars location. My concern is: since DBFS root and mounts are deprecated, does that mean that in coming days...

Latest Reply
ArvindDige
New Contributor II
  • 0 kudos

Hi Raphael, I am trying the init script below to achieve this task (PFA) and getting the error below: Cluster-scoped init script abfss://container@storage.dfs.core.windows.net/init_script.sh failed: Failure to initialize configuration for storage account stor...
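That failure message commonly indicates that the cluster has no credentials for the storage account before it fetches the init script. As a hedged sketch only, these are the ADLS OAuth settings that are typically required; the account name, application id, tenant id, and secret scope/key are placeholders, and on a real cluster they belong in the cluster's Spark config rather than in a notebook.

```python
# Hypothetical sketch only: ABFS OAuth settings a cluster typically needs to read
# abfss:// paths. Account, client id, secret scope/key, and tenant are placeholders;
# these values normally go into the cluster's Spark config, not notebook code.
account = "storage.dfs.core.windows.net"
abfs_oauth_conf = {
    f"fs.azure.account.auth.type.{account}": "OAuth",
    f"fs.azure.account.oauth.provider.type.{account}":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    f"fs.azure.account.oauth2.client.id.{account}": "<application-id>",
    f"fs.azure.account.oauth2.client.secret.{account}": "{{secrets/<scope>/<key>}}",
    f"fs.azure.account.oauth2.client.endpoint.{account}":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}
```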

1 More Replies
rt-slowth
by Contributor
  • 2057 Views
  • 1 reply
  • 2 kudos

How to update the Python runtime of an AWS Lambda function

I heard that version 3.8 of Python on AWS Lambda will be EOL within the year. I would like to update this runtime, but where can I find the CloudFormation stack template?

Latest Reply
sandipkumar
New Contributor II
  • 2 kudos

Thanks. I went to the AWS CloudFormation stack, edited the template from Python 3.8 to 3.12, and updated it. I did this for both the workspace stack and the S3 ingestion stack. Will it break anything? Do I need to make any changes in the Python code in th...
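Updating the template, as described above, is the cleaner route because CloudFormation stays in sync with the deployed resources. For completeness, a hedged sketch of the equivalent one-off change with boto3 is below; the function name is a placeholder, and the existing handler code must itself be compatible with Python 3.12.

```python
# Hypothetical sketch: bump an existing Lambda function's runtime with boto3.
# The function name is a placeholder. Changing a resource outside its
# CloudFormation stack causes drift, so updating the template is preferable.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="my-ingestion-function",
    Runtime="python3.12",
)
```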

kiranpeesa
by New Contributor
  • 961 Views
  • 1 reply
  • 1 kudos

Error in notebook during execution

Error in callback <bound method UserNamespaceCommandHook.post_run_cell of <dbruntime.DatasetInfo.UserNamespaceCommandHook object at 0x7f5790c07070>> (for post_run_cell)

Latest Reply
Witold
Honored Contributor
  • 1 kudos

https://community.databricks.com/t5/data-engineering/error-in-notebook-execution/m-p/76226#M35165

Henrik_
by New Contributor III
  • 7862 Views
  • 2 replies
  • 0 kudos

Callback bound method error

When executing a withColumn (running on DBR 14.3 LTS) I get this error: Error in callback <bound method UserNamespaceCommandHook.post_run_cell of <dbruntime.DatasetInfo.UserNamespaceCommandHook object at 0x7feda2b2efb0>> (for post_run_cell). How shoul...

Latest Reply
TjommeV-Vlaio
New Contributor III
  • 0 kudos

We have the same issue using a shared cluster running DBR 14.3.
Code executed: dfNew = dfTmp.withColumn(HashKeyColumnName, F.sha2(F.concat_ws("||", *ColumnList), 256))
Error received: Error in callback <bound method UserNamespaceCommandHook.post_run_ce...
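For anyone trying to reproduce the behaviour, a self-contained version of the hash-key expression from the reply above, with placeholder column names and sample data:

```python
# Self-contained version of the expression from the reply above; the column list
# and sample rows are placeholders.
from pyspark.sql import functions as F

ColumnList = ["first_name", "last_name"]
HashKeyColumnName = "hash_key"

dfTmp = spark.createDataFrame([("Ada", "Lovelace"), ("Alan", "Turing")], ColumnList)
dfNew = dfTmp.withColumn(HashKeyColumnName, F.sha2(F.concat_ws("||", *ColumnList), 256))
dfNew.show()
```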

1 More Replies
Zavi
by New Contributor
  • 1637 Views
  • 1 reply
  • 0 kudos

When is DLT going to support multiple targets?

Due to the limitation that all output data needs to be stored in one target, we have stopped using DLT until more flexibility is added. If anyone has a workaround, we are open to suggestions.

Latest Reply
Rafael-Ribeiro
New Contributor II
  • 0 kudos

Hi Zavi, one potential workaround is to establish multiple DLT pipelines, with each pipeline configured to point to a unique target. This approach effectively allows a diverse range of output data to be stored across various targets. T...
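A minimal sketch of how that can look in practice (my own illustration, not code from the thread): the same notebook is attached to several pipelines, each pipeline chooses its own target schema in its settings, and each passes a different source path through its pipeline configuration. The configuration key, table name, and source format here are placeholders.

```python
# Hypothetical sketch: one DLT notebook reused by several pipelines. Each pipeline
# sets its own target schema in the pipeline settings and supplies its own
# "source_path" value via the pipeline configuration; names are placeholders.
import dlt

source_path = spark.conf.get("source_path")     # set per pipeline in its configuration

@dlt.table(name="bronze_events", comment="Raw events, landed into this pipeline's target")
def bronze_events():
    return spark.read.format("json").load(source_path)
```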

nikhilprajapati
by New Contributor
  • 1649 Views
  • 2 replies
  • 1 kudos

Data in the dataframe also gets deleted when we try to delete records from the underlying table

Hi, we are trying to load data from a Delta table into a dataframe (a copy of the original table). Initially the Delta table has a count of 911. The dataframe into which the data is loaded also has the same count. Now, we are deleting some records from the delta...

Latest Reply
Hkesharwani
Contributor II
  • 1 kudos

Hi, there is a way to retain a copy of the dataframe even if the data in the underlying table is manipulated, but it's a memory-expensive operation, so be careful while using it:
df1 = spark.createDataFrame(df.rdd.map(lambda x: x), schema=df.schema)
Here we a...
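For completeness, a runnable sketch of that idea (the table name is a placeholder), with an explicit cache and count so the copy is actually materialized before rows are deleted from the Delta table:

```python
# Hypothetical sketch: copy the DataFrame and materialize it before deleting rows
# from the underlying Delta table. The table name is a placeholder.
df = spark.table("my_catalog.my_schema.source_table")

df_copy = spark.createDataFrame(df.rdd.map(lambda x: x), schema=df.schema)
df_copy.cache()
df_copy.count()        # action that forces the copy to be computed and cached

# While df_copy stays cached, later deletes on the Delta table should not change it.
```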

1 More Replies
