- 7011 Views
- 1 replies
- 0 kudos
How to detect if running in a workflow job?
Hi there, what's the best way to tell which environment my Spark session is running in? Locally I develop with databricks-connect's DatabricksSession, but that doesn't work when running a workflow job, which requires SparkSession.getOrCreate()....
- 0 kudos
Thanks, dbutils.notebook.getContext does indeed contain information about the job run.
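For anyone landing here, a minimal sketch of the environment check, assuming the DATABRICKS_RUNTIME_VERSION environment variable is set on Databricks clusters (including job clusters) and absent on a local dev machine; worth verifying on your runtime:

```python
import os

def get_spark():
    # Assumption: DATABRICKS_RUNTIME_VERSION is present on Databricks
    # clusters and absent locally.
    if "DATABRICKS_RUNTIME_VERSION" in os.environ:
        from pyspark.sql import SparkSession
        return SparkSession.builder.getOrCreate()
    # Local development path via databricks-connect.
    from databricks.connect import DatabricksSession
    return DatabricksSession.builder.getOrCreate()
```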
- 2530 Views
- 2 replies
- 0 kudos
Limit the scope of workspace level access token to access only specific REST APIs of Databricks
Hi Community, is there a way to limit the scope of a workspace-level token so it can hit only certain Databricks REST APIs? In short, once we generate a workspace-level token following this doc: https://docs.databricks.com/en/dev-tools/auth/oauth-m2m....
- 0 kudos
<Replied to previous message as response to @Retired_mod's answer>
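For reference, a sketch of the M2M token request from the linked doc. The documented scope value there is "all-apis"; finer per-API restriction would have to come from the permissions granted to the service principal itself, which is an assumption to verify against the doc. <workspace-url>, <client-id>, and <client-secret> are placeholders:

```python
import requests

# Request an OAuth M2M access token (client credentials flow).
resp = requests.post(
    "https://<workspace-url>/oidc/v1/token",
    auth=("<client-id>", "<client-secret>"),
    data={"grant_type": "client_credentials", "scope": "all-apis"},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```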
- 9525 Views
- 2 replies
- 1 kudos
Resolved! Configuring NCC does not show option to add private endpoints
Hi! I am following this guide: https://learn.microsoft.com/en-us/azure/databricks/security/network/serverless-network-security/serverless-private-link However, in Step 3: Create private endpoint rules, number 6, there is no option for me to Add a private...
- 1 kudos
@saikumar246 you were correct. It was really super easy to set up and works flawlessly! Good job dev-team!
- 3784 Views
- 0 replies
- 0 kudos
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'
import whisper
import ffmpeg

model = whisper.load_model("base")
transcription = model.transcribe("dbfs:/FileStore/Call_Center_Conversation__03.mp3")
print(transcription["text"])

FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'

I have import...
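A likely fix, sketched under two assumptions: whisper shells out to the ffmpeg *binary* (which the pip package named ffmpeg does not install), and dbfs:/ URIs need to be rewritten to the /dbfs FUSE path so local tools can open the file:

```python
# In a notebook cell, install the ffmpeg binary first, e.g.:
#   %sh sudo apt-get update && sudo apt-get install -y ffmpeg
import whisper

model = whisper.load_model("base")
# Use the /dbfs FUSE path instead of a dbfs:/ URI so ffmpeg can read it.
transcription = model.transcribe("/dbfs/FileStore/Call_Center_Conversation__03.mp3")
print(transcription["text"])
```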
- 3298 Views
- 1 replies
- 0 kudos
Make API Call to run job
Hi everyone, I want to trigger a run for a job using an API call. Here's my code:

import shlex
import subprocess

def call_curl(curl):
    args = shlex.split(curl)
    process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout...
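An alternative sketch that skips curl entirely and calls the Jobs API 2.1 run-now endpoint with requests; <workspace-url>, <token>, and the job_id are placeholders:

```python
import requests

resp = requests.post(
    "https://<workspace-url>/api/2.1/jobs/run-now",
    headers={"Authorization": "Bearer <token>"},
    json={"job_id": 123456789},  # hypothetical job id
)
resp.raise_for_status()
print(resp.json())  # includes the run_id of the triggered run
```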
- 2743 Views
- 1 replies
- 0 kudos
Resolved! combining accounts
I have an AWS-based Databricks account with a few workspaces, and an Azure Databricks workspace. How do I combine them into one account? I am particularly interested in setting up a single billing drop with all my Databricks costs.
- 0 kudos
Hi @BillGuyTheScien Greetings! Currently, we do not have a feature to combine multiple cloud usage into a single account. We do have a feature request for this, and it is being considered for the future. Currently, there is no ETA on that. You can bro...
- 3872 Views
- 1 replies
- 0 kudos
Catalog issue
When I was trying to create a catalog, I got an error saying to mention the Azure storage account and storage container in the following query:

CREATE CATALOG IF NOT EXISTS Databricks_Anu_Jal_27022024
MANAGED LOCATION 'abfss://<databricks-workspace-stack-anu...
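For context, a sketch of the shape the statement is expected to take, with placeholder storage names; MANAGED LOCATION wants a full abfss:// URI covered by an existing Unity Catalog external location (an assumption based on the UC docs):

```python
# Runs in a Databricks notebook, where `spark` is predefined.
spark.sql("""
    CREATE CATALOG IF NOT EXISTS my_catalog
    MANAGED LOCATION 'abfss://<container>@<storage-account>.dfs.core.windows.net/<path>'
""")
```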
- 4843 Views
- 0 replies
- 0 kudos
Run spark code in notebook by setting spark conf instead of databricks connect configure in runtime
Hi community, I wanted to understand if there is a way to pass config values to the Spark session at runtime, rather than using databricks-connect configure, to run Spark code. One way I found is given here: https://stackoverflow.com/questions/63088121/config...
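One option, sketched against the Spark Connect-based databricks-connect (v13+); treat the exact builder arguments as an assumption to check against your installed version, and the host/token/cluster values as placeholders:

```python
from databricks.connect import DatabricksSession

# Supply connection settings at runtime instead of relying on a
# `databricks-connect configure` profile.
spark = DatabricksSession.builder.remote(
    host="https://<workspace-url>",
    token="<token>",
    cluster_id="<cluster-id>",
).getOrCreate()
```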
- 3930 Views
- 4 replies
- 1 kudos
sparkR.session
Why might this be erroring out? My understanding is that SparkR is built into Databricks.

Code:
library(SparkR, include.only=c('read.parquet', 'collect'))
sparkR.session()

Error:
Error in sparkR.session(): could not find function "sparkR.session"
- 1 kudos
It happens with any code; even something as simple as... x <- 2 + 2
- 2040 Views
- 0 replies
- 0 kudos
🌟 Welcome Newcomers! 🌟
Hello and welcome to our wonderful Community! Whether you are here by chance or intention, we're thrilled to have you join us. Before you dive into the plethora of discussions and activities happening here, we'd love to get to know you better! ...
- 4685 Views
- 0 replies
- 0 kudos
TIMEZONE
Can I get some help from Databricks to understand how these timestamps are being interpreted? Some are really confusing me. I have timestamps coming into AWS Databricks as String type, and the string timestamps are represented in UTC. I ran the below qu...
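A small illustration of the usual gotcha, assuming default settings: timestamp strings without an explicit offset are parsed in the session time zone (spark.sql.session.timeZone), so pinning it to UTC makes UTC strings round-trip cleanly:

```python
# Pin the session time zone so UTC strings are interpreted as UTC.
spark.conf.set("spark.sql.session.timeZone", "UTC")
df = spark.sql("SELECT CAST('2024-01-01 12:00:00' AS TIMESTAMP) AS ts")
df.show(truncate=False)
```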
- 7390 Views
- 1 replies
- 0 kudos
Syntax of UPDATE Command in Databricks
Hi All, I am testing the SQL generated by our ETL software to see if it can run on Databricks SQL, which I believe is Delta tables underneath. This is the statement we are testing. As far as I can tell from the manual, the FROM clause is not supported ...
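The usual workaround, sketched with hypothetical table and column names: since UPDATE ... FROM is not in the Databricks SQL grammar, rewrite the statement as MERGE INTO:

```python
# Equivalent of "UPDATE target SET amount = s.amount FROM source s WHERE ..."
spark.sql("""
    MERGE INTO target_table AS t
    USING source_table AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount
""")
```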
- 2363 Views
- 0 replies
- 0 kudos
Pandas UDF max batch size not working in notebook
Hello, I am trying to set the max batch size for a pandas UDF in a Databricks notebook, but in my tests it doesn't have any effect on the size.

spark.conf.set("spark.sql.execution.arrow.enabled", "true")
spark.conf.set('spark.sql.execution.arrow.maxRecordsPerBatch...
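For what it's worth, a self-contained sketch (notebook `spark` assumed) that makes the effective Arrow batch size observable; if the distinct sizes ignore the setting, that narrows the problem to where and when the conf is set:

```python
import pandas as pd
from pyspark.sql.functions import pandas_udf

# Cap the Arrow batch size *before* the query runs.
spark.conf.set("spark.sql.execution.arrow.maxRecordsPerBatch", "1000")

@pandas_udf("long")
def batch_len(v: pd.Series) -> pd.Series:
    # Each output row reports the size of the Arrow batch it arrived in.
    return pd.Series([len(v)] * len(v))

# Distinct batch sizes observed; expect values <= 1000.
spark.range(10_000).select(batch_len("id").alias("n")).distinct().show()
```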
- 6663 Views
- 13 replies
- 1 kudos
Databricks Connect Scala -
Hi, I'm using Databricks Connect to run Scala code from IntelliJ on a Databricks single-node cluster. Even with the simplest code, I'm experiencing this error: org.apache.spark.SparkException: grpc_shaded.io.grpc.StatusRuntimeException: INTERNAL: org.ap...
- 1299 Views
- 1 replies
- 0 kudos
bitmap_count() function's output is different in databricks compared to snowflake
I have found that the output of the bitmap_count() function differs significantly between Databricks and Snowflake. E.g., Snowflake returns a value of '1' for this code: select bitmap_count(X'0001056c000000000000'), while Databricks returns a...
- 0 kudos
Hi @vigneshp, good day! In Databricks, the bitmap_count function returns the number of bits set in a BINARY string representing a bitmap. It is typically used to count distinct values in combination with the bitmap_bucket_number() and the bi...
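A quick way to see the semantics difference, assuming a notebook spark session: Databricks simply counts the set bits in the raw bytes of the literal, so a bitmap encoded by Snowflake's own bitmap functions is not directly comparable:

```python
# Counts the set bits in the binary literal itself.
spark.sql("SELECT bitmap_count(X'0001056C000000000000') AS set_bits").show()
```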