Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

shkelzeen
by Databricks Partner
  • 3088 Views
  • 3 replies
  • 1 kudos

Databricks JDBC driver: multiple queries in one request

Can I run multiple queries in one command using the Databricks JDBC driver, and would Databricks execute one query faster than running multiple queries in one script?

Latest Reply
NandiniN
Databricks Employee
  • 1 kudos

Yes, you can run multiple queries in one command using the Databricks JDBC driver. The results will be displayed in separate tables. When you run the multiple queries, they are all still individual queries. Running multiple queries in a script will no...
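A minimal Python sketch of that point, using the `databricks-sql-connector` package rather than raw JDBC (the naive splitter and all connection values are illustrative assumptions, not details from the thread):

```python
def split_statements(script: str) -> list[str]:
    """Naive splitter: one entry per semicolon-terminated statement.

    Illustrative only -- it does not handle semicolons inside string
    literals or comments.
    """
    return [s.strip() for s in script.split(";") if s.strip()]

# Usage sketch (connection values are placeholders, not from the thread):
#
# from databricks import sql  # pip install databricks-sql-connector
# with sql.connect(server_hostname="...", http_path="...",
#                  access_token="...") as conn:
#     with conn.cursor() as cur:
#         for stmt in split_statements("SELECT 1; SELECT 2;"):
#             cur.execute(stmt)      # each statement runs as its own query
#             print(cur.fetchall())  # each produces its own result set
```

Each statement still executes individually, which matches the reply: batching them into one script does not by itself make them faster.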

2 More Replies
Arindam19
by New Contributor II
  • 1237 Views
  • 3 replies
  • 0 kudos

Are row filters and column masks supported on foreign catalogs in Azure Databricks Unity Catalog?

In my solution I am planning to bring an Azure SQL Database into Azure Databricks Unity Catalog as a foreign catalog. Are table row filters and column masks supported in my scenario?

Latest Reply
Alberto_Umana
Databricks Employee
  • 0 kudos

Hi @Arindam19, Yes. Certain operations, including filtering, can be pushed down from Databricks to SQL Server. This is managed by querying the SQL Server directly via a federated connection, allowing SQL Server to handle the filter criteria and retur...

2 More Replies
KaustubhShah
by New Contributor
  • 898 Views
  • 1 reply
  • 0 kudos

GCP Databricks Spark Connector for Cassandra - Error: com.typesafe.config.impl.ConfigImpl.newSimple

Hello, I am using Databricks runtime 12.2 with the Spark connector com.datastax.spark:spark-cassandra-connector_2.12:3.3.0, as runtime 12.2 comes with Spark 3.3.2 and Scala 2.12. I encounter an issue with connecting to the Cassandra DB using the below co...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

Try using the assembly version of the jar with 12.2: https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly. If this doesn't work, please paste the full, original stack trace.

mrstevegross
by Contributor III
  • 3702 Views
  • 6 replies
  • 0 kudos

Resolved! Is it possible to obtain a job's event log via the REST API?

Currently, to investigate job performance, I can look at a job's information (via the UI) to see the "Event Log" (pictured below): I'd like to obtain this information programmatically, so I can analyze it across jobs. However, the docs for the `get` c...

Latest Reply
mrstevegross
Contributor III
  • 0 kudos

I also see there is a "list cluster events" API (https://docs.databricks.com/api/workspace/clusters/events); can I get the event log this way?
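For what it's worth, the endpoint linked in that reply can be called with plain REST. A hedged sketch (the host, token, and cluster id are placeholders; note this endpoint returns cluster-level events such as resizes and terminations, which is not necessarily the same data as the job-run Event Log shown in the UI):

```python
import json
import urllib.request

API_PATH = "/api/2.0/clusters/events"  # Clusters "events" endpoint

def events_payload(cluster_id: str, limit: int = 50, offset: int = 0) -> dict:
    """Build the request body for listing cluster events, newest first."""
    return {"cluster_id": cluster_id, "order": "DESC",
            "limit": limit, "offset": offset}

def fetch_events(host: str, token: str, cluster_id: str) -> dict:
    """POST the payload; the response carries 'events' plus paging info."""
    req = urllib.request.Request(
        f"https://{host}{API_PATH}",
        data=json.dumps(events_payload(cluster_id)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage sketch (all values are placeholders):
# fetch_events("adb-1234567890123456.7.azuredatabricks.net",
#              "<personal-access-token>", "<cluster-id>")
```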

5 More Replies
crowley
by New Contributor III
  • 5133 Views
  • 2 replies
  • 1 kudos

Resolved! How are Struct type columns stored/accessed (interested in efficiency)?

Hello, I've searched around for a while and didn't find a similar question here or elsewhere, so thought I'd ask... I'm assessing the storage/access efficiency of Struct-type columns in Delta tables. I want to know more about how Databricks is storing...

Latest Reply
crowley
New Contributor III
  • 1 kudos

Thank you very much for the thoughtful response. Please excuse my belated feedback, and thanks!

1 More Replies
pardeep7
by Databricks Partner
  • 1426 Views
  • 3 replies
  • 0 kudos

Databricks Clean Rooms with 3 or more collaborators

Let's say I create a clean room with 2 other collaborators, call them collaborator A and collaborator B (so 3 in total, including me) and then shared some tables to the clean room. If collaborator A writes code that does a "SELECT * FROM creator.<tab...

Latest Reply
KaranamS
Contributor III
  • 0 kudos

Hi @pardeep7, As per my understanding, all participants of a clean room can only see metadata. The raw data in your tables is not directly accessed by other collaborators. Any output tables created by collaborators based on the queries/notebooks will b...

2 More Replies
harsh_Dev
by Databricks Partner
  • 1465 Views
  • 2 replies
  • 1 kudos

Resolved! Connect Databricks Community Edition to a data lake (S3/ADLS Gen2)

Does anybody know how I can connect to AWS S3 object storage from Databricks Community Edition? Is this possible with a Community account or not?

Latest Reply
KaranamS
Contributor III
  • 1 kudos

Hi @harsh_Dev, You can read from/write to AWS S3 with Databricks Community Edition. As you will not be able to use instance profiles, you will need to configure the AWS credentials manually and access S3 using an S3 URI. Try the below code: spark._jsc.hadoop...
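For illustration, the general shape of that approach (the helper names and all credential/bucket values are hypothetical; the keys follow the standard Hadoop s3a configuration):

```python
def s3a_credentials(access_key: str, secret_key: str) -> dict:
    """Hadoop configuration keys for direct S3 access via the s3a connector."""
    return {
        "fs.s3a.access.key": access_key,
        "fs.s3a.secret.key": secret_key,
    }

def configure_s3(spark, access_key: str, secret_key: str) -> None:
    """Apply the credentials to an existing SparkSession's Hadoop config."""
    conf = spark._jsc.hadoopConfiguration()
    for key, value in s3a_credentials(access_key, secret_key).items():
        conf.set(key, value)

# Usage sketch (placeholders -- never hard-code real credentials):
# configure_s3(spark, "<AWS_ACCESS_KEY_ID>", "<AWS_SECRET_ACCESS_KEY>")
# df = spark.read.csv("s3a://my-bucket/path/file.csv", header=True)
```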

1 More Replies
AGnewbie
by New Contributor
  • 891 Views
  • 1 reply
  • 1 kudos

Required versus current compute setup

To run demo and lab notebooks, I am required to have the following Databricks runtime(s): 15.4.x-cpu-ml-scala2.12, but the compute in my setup is of the following runtime version; will that be an issue? 11.3 LTS (includes Apache Spark 3.3.0, Scala 2.1...

Latest Reply
Alberto_Umana
Databricks Employee
  • 1 kudos

Hello @AGnewbie, Firstly, regarding the Databricks runtime: your compute setup is currently running version 11.3 LTS, which will indeed be an issue as the specified version is not present in your current runtime. Hence, you need to update your runtim...

Boyeenas
by Databricks Partner
  • 4360 Views
  • 1 reply
  • 0 kudos

Decimal(32,6) datatype in Databricks - precision roundoff

Hello All, I need your assistance. I recently started a migration project from Synapse Analytics to Databricks. While dealing with the datatypes, I came across a situation where in a Dedicated SQL Pool the value is 0.033882, but in Databricks the value ...

Latest Reply
KaranamS
Contributor III
  • 0 kudos

Hi @Boyeenas, I believe your assumption is correct. Databricks is built on Apache Spark, and the system applies rounding automatically based on the value of the subsequent digit. In your case, if the original value had a 7th decimal digit of 5 or high...
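The half-up behavior described in that reply can be reproduced with Python's `decimal` module; the value 0.0338825 below is a hypothetical 7-digit original chosen to illustrate the rounding, not a value from the migration:

```python
from decimal import Decimal, ROUND_HALF_UP

# A 7th decimal digit of 5 rounds the 6th digit up when quantizing
# to Decimal(32,6) precision.
original = Decimal("0.0338825")   # hypothetical source value
rounded = original.quantize(Decimal("0.000001"), rounding=ROUND_HALF_UP)
print(rounded)  # 0.033883

# A 7th digit below 5 leaves the 6th digit unchanged.
truncated = Decimal("0.0338824").quantize(Decimal("0.000001"),
                                          rounding=ROUND_HALF_UP)
print(truncated)  # 0.033882
```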

mishrarit
by New Contributor
  • 964 Views
  • 1 reply
  • 0 kudos

Job "run name" in the system.lakeflow job run timeline table

For a few jobs in Unity Catalog the "run name" comes out as "null", whereas for a few we see the complete name with a system-generated batch id. I am not sure how this field is populated and why for some jobs the "run name" is present whereas for some i...

Latest Reply
Advika_
Databricks Employee
  • 0 kudos

Hello @mishrarit! The run name in Unity Catalog job runs is determined by how the job is triggered. For manual runs, Databricks automatically generates a name; for scheduled or API-triggered runs, the run name remains null unless explicitly defined.

arne_c
by New Contributor II
  • 2185 Views
  • 2 replies
  • 0 kudos

Set up compute policy to allow installing python libraries from a private package index

In our organization, we maintain a bunch of libraries we share code with. They're hosted on a private python package index, which requires a token to allow downloads. My idea was to store the token as a secret which would then be loaded into a cluste...

Latest Reply
arne_c
New Contributor II
  • 0 kudos

I figured it out. It seems secrets can only be loaded into environment variables if the content is the secret and nothing else:
"value": "{{secrets/global/arneCorpPyPI_token}}" # this will work
"value": "foo {{secrets/global/arneCorpPyPI_toke...
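For context, a cluster-policy fragment following that finding might look like the below; the `DATABRICKS_PYPI_TOKEN` variable name is a hypothetical choice, while the secret path is the one from the post:

```json
{
  "spark_env_vars.DATABRICKS_PYPI_TOKEN": {
    "type": "fixed",
    "value": "{{secrets/global/arneCorpPyPI_token}}"
  }
}
```

The value is the secret reference and nothing else, so the secret is resolved into the environment variable at cluster start.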

1 More Replies
GerardAlexander
by New Contributor III
  • 969 Views
  • 1 reply
  • 0 kudos

Creating Unity Catalog in Personal AZURE Portal Account

Seeking advice on the following:
1. Given that I have a Personal - and not an Organization-based - Azure Portal Account,
2. that I can see I am Global Admin and have the Admin Role in Databricks,
3. then why can I not get "Manage Account" for a...

Latest Reply
Takuya-Omi
Valued Contributor III
  • 0 kudos

@GerardAlexander Try signing in to the Account Console (https://accounts.azuredatabricks.net/login) using a user account with the appropriate permissions, rather than accessing it from the workspace.If you are unable to sign in, the following resourc...

laeforceable
by New Contributor II
  • 3822 Views
  • 3 replies
  • 1 kudos

Power BI - Azure Databricks Connector shows Error AAD is not setup for domain

Hi Team,What I would like to do is understand what is required for PowerBI gateway to use single sign-on (AAD) to Databricks. Is that something you could have encountered before and know the fix? I currently get message from Power BI that AAD is not ...

Latest Reply
kkitsara
Databricks Partner
  • 1 kudos

Hello, did you have any solution for this? I am facing the same issue.

2 More Replies
FanMichelle0729
by New Contributor II
  • 1616 Views
  • 5 replies
  • 0 kudos

Does serverless compute need a cloud account (AWS, Google, Azure)?

I am a Databricks beginner, and I would like to ask: if compute is created in the Databricks account, does it also exist in the cloud account (e.g., AWS)? If the AWS account is deactivated, the existing compute will not be usable. This is what I h...

Latest Reply
Takuya-Omi
Valued Contributor III
  • 0 kudos

@FanMichelleTW No, Databricks recommends using serverless compute, and you can use serverless compute as well. To do so, open a notebook and check the top-right corner to see if a serverless compute option is in a Ready state. If it is, simply select ...

4 More Replies