by
elgeo
• Valued Contributor II
- 2956 Views
- 3 replies
- 3 kudos
Hello! Is there an equivalent of CREATE TRIGGER on a table in Databricks SQL?
CREATE TRIGGER [schema_name.]trigger_name
ON table_name
AFTER {[INSERT],[UPDATE],[DELETE]}
[NOT FOR REPLICATION]
AS
{sql_statements}
Thank you in advance!
Latest Reply
You can try Auto Loader: Auto Loader supports two modes for detecting new files: directory listing and file notification. Directory listing: Auto Loader identifies new files by listing the input directory. Directory listing mode allows you to quickly ...
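The directory-listing mode described above can be sketched as follows; the `cloudFiles` option names come from the Databricks Auto Loader documentation, and the paths are placeholders of my own:

```python
# Hypothetical Auto Loader configuration (directory listing mode).
# Option names follow Databricks' Auto Loader docs; paths are placeholders.
autoloader_options = {
    "cloudFiles.format": "json",
    # Directory listing is the default; file notification mode would set this to "true".
    "cloudFiles.useNotifications": "false",
    "cloudFiles.schemaLocation": "/mnt/checkpoints/autoloader-schema",
}

# On a Databricks cluster this would be wired up as:
# df = (spark.readStream
#         .format("cloudFiles")
#         .options(**autoloader_options)
#         .load("/mnt/landing/input"))
```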
2 More Replies
- 391 Views
- 1 replies
- 0 kudos
I'm working with the sample notebook named '1_Customer Lifetimes.py' in https://github.com/databricks-industry-solutions/customer-lifetime-value. In the notebook, there is code like this: `%run "./config/Data Extract"`. This loads Excel data; however, it occu...
Latest Reply
@Seungsu Lee It could be a destination host issue, a configuration issue, or a network issue. It's hard to guess; first check whether your cluster has access to the public internet by running this command: %sh ping -c 2 google.com
by
Phani1
• Valued Contributor
- 1982 Views
- 1 replies
- 0 kudos
Problem statement: We have a scenario where we get data from the source in a nested format (in actuality 20 levels, and the number of fields is more than 4, but for ease of understanding let's consider the example below). The actual code involved 20 levels of 4-5 fields ...
Latest Reply
I don't think we have anything similar as a built-in function. You'll need to write some custom code to achieve that.
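Since there is no built-in for this, here is a minimal pure-Python sketch of the recursive idea — flattening arbitrarily nested records into dotted column names. The function name and the dot-separated key format are my own choices, not from the original thread:

```python
def flatten(record, prefix=""):
    """Recursively flatten a nested dict into a single-level dict
    whose keys encode the original hierarchy ("a.b.c")."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Descend one level, extending the dotted prefix.
            flat.update(flatten(value, prefix=f"{name}."))
        else:
            flat[name] = value
    return flat

nested = {"a": {"b": {"c": 1}}, "d": 2}
flat = flatten(nested)
```

The same recursion can be expressed over a Spark struct schema by walking `df.schema` and selecting each leaf with a dotted column path.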
- 2469 Views
- 12 replies
- 13 kudos
I have set up a DLT with "testing" set as the target database. I need to join data that exists in a "keys" table in my "beta" database, but I get an AccessDeniedException, despite having full access to both databases via a normal notebook. A snippet d...
Latest Reply
As an update to this issue: I was running the DLT pipeline on a personal cluster that had an instance profile defined (as per Databricks best practices). As a result, the pipeline did not have permission to access other S3 resources (e.g. other databa...
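Based on the reply above, the fix is to give the DLT pipeline's cluster its own instance profile. A hypothetical pipeline-settings fragment (the field placement follows the Databricks Delta Live Tables cluster-settings format; the ARN is a placeholder):

```json
{
  "clusters": [
    {
      "label": "default",
      "aws_attributes": {
        "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/my-dlt-profile"
      }
    }
  ]
}
```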
11 More Replies
- 2707 Views
- 6 replies
- 3 kudos
Hi fellas, I'm trying to load Parquet data (in a GCS location) into a Postgres DB (Google Cloud). For bulk-uploading data into PG we are using the spark-postgres library: https://framagit.org/interhop/library/spark-etl/-/tree/master/spark-postgres/src/main/sc...
Latest Reply
Hi @Kaniz Fatma, @Daniel Sahal - a few updates from my side. After many hits and trials, psycopg2 worked out in my case. We can process 200+ GB of data with a 10-node cluster (n2-highmem-4, 32 GB memory, 4 cores) and a driver with 32 GB memory, 4 cores, with Run...
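For reference, a common psycopg2 bulk-load pattern is COPY from an in-memory buffer. A minimal sketch — the table name and rows are hypothetical, and this is not necessarily the exact approach the poster used:

```python
import csv
import io

def rows_to_copy_buffer(rows):
    """Serialize rows to an in-memory CSV buffer suitable for
    psycopg2's copy_expert("COPY ... FROM STDIN WITH CSV")."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    buf.seek(0)
    return buf

# With a real psycopg2 connection (psycopg2 assumed installed):
# with conn.cursor() as cur:
#     cur.copy_expert("COPY target_table FROM STDIN WITH CSV",
#                     rows_to_copy_buffer(rows))
```

COPY streams rows in a single round trip, which is why it tends to outperform row-by-row INSERTs for bulk loads.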
5 More Replies
- 427 Views
- 1 replies
- 0 kudos
Hello, apologies for the dumb question, but I'm new to Databricks and need clarification on the following. Are parallel and subsequent jobs able to reuse the same compute resources to keep time and cost overhead as low as possible, or do they spin up a new cl...
Latest Reply
@tanja.savic You can use a shared job cluster: https://docs.databricks.com/workflows/jobs/jobs.html#use-shared-job-clusters. But remember that a shared job cluster is scoped to a single job run, and cannot be used by other jobs or runs of the...
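A hypothetical Jobs API fragment showing two tasks reusing one shared job cluster (the `job_clusters`/`job_cluster_key` fields follow the Jobs API 2.1 format; names, node type, and paths are placeholders):

```json
{
  "job_clusters": [
    {
      "job_cluster_key": "shared_cluster",
      "new_cluster": {
        "spark_version": "11.3.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    }
  ],
  "tasks": [
    {
      "task_key": "task_a",
      "job_cluster_key": "shared_cluster",
      "notebook_task": { "notebook_path": "/Repos/demo/a" }
    },
    {
      "task_key": "task_b",
      "job_cluster_key": "shared_cluster",
      "depends_on": [ { "task_key": "task_a" } ],
      "notebook_task": { "notebook_path": "/Repos/demo/b" }
    }
  ]
}
```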
by
Phani1
• Valued Contributor
- 513 Views
- 1 replies
- 1 kudos
Hi team, can we call a dashboard from another dashboard? An example screenshot is attached. The main dashboard has 3 buttons that point to 3 different dashboards, and if we click any of the buttons it has to redirect to the respective dashboard.
Latest Reply
@Janga Reddy I don't think that this is possible at this moment. You can raise a feature request here: https://docs.databricks.com/resources/ideas.html
by
Ancil
• Contributor II
- 1334 Views
- 3 replies
- 1 kudos
I have a pandas_udf; it's working for 1 row, but when I tried with more than one row I got the error below. PythonException: 'RuntimeError: The length of output in Scalar iterator pandas UDF should be the same with the input's; however, the length of output w...
Latest Reply
I was testing, and your function is correct. So the error must be in the inputData type (it is all strings) or with result_json. Please also check the runtime version; I was using 11 LTS.
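The error in the question means each yielded batch must have the same length as its input batch. A plain-pandas sketch of that contract — on Databricks you would wrap such a function with `pandas_udf` over an iterator of Series; the function below is illustrative only, not the poster's UDF:

```python
import pandas as pd

def per_batch_upper(batches):
    """Mimics the Scalar Iterator pandas UDF contract: yield exactly one
    output Series per input Series, each the same length as its input."""
    for batch in batches:
        yield batch.str.upper()  # same length as the input batch

# Spark feeds the UDF an iterator of batches; simulate two batches here.
batches = [pd.Series(["a", "b"]), pd.Series(["c"])]
out = list(per_batch_upper(batches))
```

If the function aggregated or filtered within a batch (changing its length), Spark would raise exactly the RuntimeError shown above.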
2 More Replies
by
Brave
• New Contributor II
- 2117 Views
- 6 replies
- 4 kudos
Hi all. I am trying to export an R data frame variable as a CSV file. I am using this code:
df <- data.frame(VALIDADOR_FIM)
df.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").save("dbfs:/FileStore/df/df.csv")
But it isn't working. ...
Latest Reply
Hi @FELIPE VALENTE, we haven't heard from you since the last response from @sherbin w, and I was checking back to see if his suggestions helped you. Otherwise, if you have any solution, please share it with the community, as...
5 More Replies
by
Prem1
• New Contributor III
- 8090 Views
- 21 replies
- 11 kudos
I am using Databricks Auto Loader to load JSON files from ADLS Gen2 incrementally in directory listing mode. All source filenames have a timestamp on them. The Auto Loader works perfectly for a couple of days with the below configuration and breaks the next day ...
Latest Reply
Hi everyone, I'm seeing this issue as well - same configuration as the previous posts, using Auto Loader with incremental file listing turned on. The strange part is that it mostly works despite almost all of the files we're loading having colons incl...
20 More Replies
- 2369 Views
- 4 replies
- 2 kudos
The Azure Event Hub "my_event_hub" has a total of 5 partitions ("0", "1", "2", "3", "4"). The readStream should only read events from partitions "0" and "4". Event hub configuration as streaming source:
val name = "my_event_hub"
val connectionString = "m...
Latest Reply
I tried using the below snippet to receive messages only from partition id=0:
ehName = "<<EVENT-HUB-NAME>>"
# Create event position for partition 0
positionKey1 = {
    "ehName": ehName,
    "partitionId": 0
}
eventPosition1 = {
    "offset": "@latest",
...
3 More Replies
- 2447 Views
- 3 replies
- 0 kudos
I just want to add color to specific cells in an Excel sheet with Python, and I've done that, but I need to exclude the header row. When I tried the same method on another sheet, it didn't work. The background color addition is reflected in one sheet but...
Latest Reply
Convert your dataframe to pandas on Spark.
Color cells using the style property: https://spark.apache.org/docs/latest/api/python/reference/pyspark.pandas/api/pyspark.pandas.DataFrame.style.html
Export to Excel using pandas to_excel: https://spark.apache.org/d...
2 More Replies
- 467 Views
- 1 replies
- 0 kudos
Hi, I would like to ask for recommendations regarding the size of the driver and the number of executors managed by that driver. I am aware of the best practices regarding executor size/number, but I have doubts about the number of executors a single dr...
Latest Reply
It depends on your use case. The best is to connect Datadog and see driver and worker utilization: https://docs.datadoghq.com/integrations/databricks/?tab=driveronly. Just from my experience: usually, for big datasets, when you need to autoscale workers between ...
- 990 Views
- 1 replies
- 1 kudos
Hello all, I would like to know why task times (among other times in the Spark UI) display values like 1h or 2h when the task only really takes some seconds or minutes. What is the meaning of these high time values I see all around the Spark UI? Thanks in adv...
Latest Reply
That is accumulated time across all tasks: https://stackoverflow.com/questions/73302982/task-time-and-gc-time-calculation-in-spark-ui-in-executor-section
- 1619 Views
- 3 replies
- 0 kudos
I am getting the following error when accessing the file in Azure Blob Storage:
java.io.FileNotFoundException: File /10433893690638/mnt/22200/22200Ver1.sps does not exist.
Code:
ves_blob = dbutils.widgets.get("ves_blob")
try:
    dbutils.fs.ls(ves_blob)
e...
Latest Reply
That is certainly an invalid path, as the error shows. With %fs ls /mnt you can show the directory structure of the /mnt directory, assuming the blob storage is mounted. If not, you need to define the access (URL etc.).
2 More Replies