Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

margutie
by New Contributor
  • 1308 Views
  • 0 replies
  • 0 kudos

Error from Knime through proxy

I want to connect to Databricks from Knime on a company computer that uses a proxy. The error I'm encountering is as follows: ERROR Create Databricks Environment 3:1 Execute failed: Could not open the client transport with JDBC URI: jdbc:hive2://adb-...

Sreekanth_N
by New Contributor II
  • 4644 Views
  • 2 replies
  • 0 kudos

'NotebookHandler' object has no attribute 'setContext' in pyspark streaming in AWS

I am facing an issue while calling dbutils.notebook.run() inside of pyspark streaming with concurrent.executor. At first the error is "pyspark.sql.utils.IllegalArgumentException: Context not valid. If you are calling this outside the main thread, you mus...

Latest Reply
Kevin3
New Contributor III
  • 0 kudos

The error message you're encountering in PySpark when using dbutils.notebook.run() suggests that the context in which you are attempting to call the run() method is not valid. PySpark notebooks in Databricks have certain requirements when it comes to...

1 More Replies
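A commonly shared community workaround for this error is to capture the notebook context on the driver's main thread and re-attach it inside each worker thread before calling dbutils.notebook.run(). A minimal sketch follows, with the caveat that getContext/setContext are internal, undocumented APIs that may change between DBR versions, and the child notebook paths are hypothetical.

```
import concurrent.futures

# Capture the notebook context once, on the main thread.
ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()

def run_child(path):
    # Re-attach the main thread's context before run(); without this,
    # run() fails with "Context not valid" on worker threads.
    dbutils.notebook.entry_point.getDbutils().notebook().setContext(ctx)
    return dbutils.notebook.run(path, 600)

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_child, ["./child_a", "./child_b"]))
```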
Sujitha
by Databricks Employee
  • 32465 Views
  • 3 replies
  • 7 kudos

Introducing the Data Intelligence Platform

Introducing the Data Intelligence Platform, our latest AI-driven data platform constructed on a lakehouse architecture. It’s not just an incremental improvement over current data platforms, but a fundamental shift in product strategy and roadmap. E...

Latest Reply
Hubert-Dudek
Databricks MVP
  • 7 kudos

Hmm, I preferred the water-related naming, like data lake, Delta Lake, and lakehouse.

2 More Replies
Khushisi
by New Contributor II
  • 907 Views
  • 0 replies
  • 0 kudos

Databricks to make a machine learning model

Hey all, I've been using a voice cloning AI and it's working well. I'm thinking of using Databricks to make a machine learning model for speech tech. I want to start with personal content creation. Any tips or advice would be great!

VJ3
by Contributor
  • 4450 Views
  • 1 reply
  • 0 kudos

Azure Databricks Notebook Sharing, Notebook Exporting and Notebook Clipboard copy download

Hello, I would like to know in which scenario an Azure Databricks user would be able to download notebook command output if Notebook Result Download is disabled. Do we know if a privileged user would be able to share sensitive information with non-privilege...

Latest Reply
VJ3
Contributor
  • 0 kudos

Thank you, Kaniz. Can we disable the exporting of notebooks except as a source file? If yes, how do we achieve this? Also, we do not want to share any notebook that contains notebook results; can we use spark.databricks.query.displayMaxRows and se...

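If the displayMaxRows route from the reply above pans out, usage would presumably look like the sketch below. The conf name comes from the question itself; whether a small cap (or zero) fully suppresses result output in exported notebooks is an assumption to verify on your DBR version.

```
# Hypothetical usage of the conf mentioned above; verify on your workspace.
spark.conf.set("spark.databricks.query.displayMaxRows", "1")  # cap rows rendered by display()
```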
Phani1
by Databricks MVP
  • 1866 Views
  • 0 replies
  • 0 kudos

Billing usage per user

Hi Team, Unity Catalog is not enabled in our workspace. We would like to know the billing usage information per user; could you please help us with how to get these details (using a notebook-level script)? Regards, Phanindra

alexiswl
by Contributor
  • 10224 Views
  • 1 reply
  • 0 kudos

Resolved! 'Unity Catalog Volumes is not enabled on this instance' error

Hi all, tl;dr I ran the following on a docker-backed personal compute instance (running 13.3-LTS):
```
%sql
USE CATALOG hail;
USE SCHEMA volumes_testing;
CREATE VOLUME 1kg COMMENT 'Testing 1000 Genomes volume';
```
But this gives:
```
ParseException: [UC_VOLU...
```

Latest Reply
alexiswl
Contributor
  • 0 kudos

Resolved with the setting "spark.databricks.unityCatalog.volumes.enabled" = "true"

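To make the resolution concrete, a minimal sketch: the flag is a cluster-level Spark conf, so it belongs in the cluster's Spark config (or the docker image's Spark defaults) rather than a runtime spark.conf.set() call. Catalog and schema names are taken from the original post.

```
# In the cluster's Spark config (Advanced options > Spark), add:
#   spark.databricks.unityCatalog.volumes.enabled true
# After the cluster restarts, the original statements parse as expected:
spark.sql("USE CATALOG hail")
spark.sql("USE SCHEMA volumes_testing")
spark.sql("CREATE VOLUME `1kg` COMMENT 'Testing 1000 Genomes volume'")
```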
nyck33
by New Contributor II
  • 3022 Views
  • 1 reply
  • 0 kudos

Databricks learning festival, but my trial is over

I just emailed the onboarding-help email account to ask for an extension for 2 weeks as I want to complete the Data Engineer course to prepare for my new position. I have 2 accounts where the trial expired, one community account which cannot be used ...

Latest Reply
nyck33
New Contributor II
  • 0 kudos

This is what happened when trying to sign up with another email.

hukel
by Contributor
  • 5111 Views
  • 1 reply
  • 0 kudos

Resolved! Convert multiple string fields to int or long during streaming

Source data looks like:
{ "IntegrityLevel": "16384",
  "ParentProcessId": "10972929104936",
  "SourceProcessId": "10972929104936",
  "SHA256Hash": "a26a1ffb81a61281ffa55cb7778cc3fb0ff981704de49f75f51f18b283fba7a2",
  "ImageFileName": "\\Device\\Harddisk...

Latest Reply
hukel
Contributor
  • 0 kudos

Thanks for confirming that the readStream.withColumn() approach is the best available option. Unfortunately, this will force me to maintain a separate notebook for each of the event types, but it does work. I was hoping to create just one paramet...

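For anyone following along, the withColumn() casting can be driven by per-event-type column lists, so a single parameterized notebook covers all event types. The sketch below is illustrative: the widget name, source table, and column lists are hypothetical, based on the fields shown in the post.

```
from pyspark.sql import functions as F

# Hypothetical per-event-type column lists; extend as needed.
CASTS = {
    "process": {"int": ["IntegrityLevel"],
                "long": ["ParentProcessId", "SourceProcessId"]},
}

def cast_columns(df, casts):
    # Cast each listed string column to its numeric type.
    for col in casts.get("int", []):
        df = df.withColumn(col, F.col(col).cast("int"))
    for col in casts.get("long", []):
        df = df.withColumn(col, F.col(col).cast("long"))
    return df

event_type = dbutils.widgets.get("event_type")          # notebook parameter
stream = (spark.readStream.table(f"raw.{event_type}")   # hypothetical source table
          .transform(lambda df: cast_columns(df, CASTS[event_type])))
```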
anandreddy23
by New Contributor III
  • 5565 Views
  • 1 reply
  • 0 kudos

unpersist doesn't clear

from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf
from pyspark.storagelevel import StorageLevel
spark = SparkSession.builder.appName('TEST').config('spark.ui.port','4098').enableHiveSupport().getOrCreate()
df4 = spark.sql('...

Latest Reply
anandreddy23
New Contributor III
  • 0 kudos

Thank you so much for taking the time and explaining the concepts.

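A short sketch of the concept discussed here: unpersist() is asynchronous by default, so cached blocks (and their Storage-tab entries) can linger after the call returns; passing blocking=True waits until the blocks are actually removed. The DataFrame name follows the original snippet.

```
from pyspark.storagelevel import StorageLevel

df4.persist(StorageLevel.MEMORY_AND_DISK)
df4.count()                    # an action actually materializes the cache
df4.unpersist(blocking=True)   # block until the cached data is removed
```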
rpl
by Contributor
  • 4173 Views
  • 2 replies
  • 2 kudos

Bug report: the delimiter option does not work when run on DLT

I have a semicolon-separated file in an ADLS container that's been added to Unity Catalog as an external location. When I run the following code on an all-purpose cluster, it runs OK and displays the schema:
import dlt

@dlt.table
def test_data_csv(): ...

Latest Reply
rpl
Contributor
  • 2 kudos

@Retired_mod can you confirm that .option("delimiter", ";") is ignored when run in a DLT pipeline? (please see the post above) My colleague confirmed the behavior.

1 More Replies
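As a cross-check for the reported behavior, here is a minimal sketch of the pattern in question; the path is a placeholder, and trying "sep" (Spark's documented alias for "delimiter" in the CSV reader) inside the pipeline may help isolate whether the option itself or the DLT execution path is at fault.

```
import dlt

@dlt.table
def test_data_csv():
    return (spark.read.format("csv")
            .option("header", True)
            .option("sep", ";")   # alias for "delimiter" in Spark's CSV reader
            .load("abfss://<container>@<account>.dfs.core.windows.net/<path>"))  # placeholder path
```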
MFrandsen
by New Contributor
  • 1140 Views
  • 0 replies
  • 0 kudos

Question for exam project

For my exam I have to do a small project for the company I'm interning at. I am creating a data warehouse where I will have to transfer data from another database and then transform it into a star schema. Would Databricks be good for this, or is it t...
