- 6418 Views
- 5 replies
- 17 kudos
Optimize -> Vacuum or Vacuum -> Optimize
Latest Reply
I optimize first, as Delta Lake knows which files are relevant for the optimize. That way my optimized data is available faster. Then a vacuum. It seemed logical to me, but I might be wrong; I never actually thought about it.
4 More Replies
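The ordering the reply suggests can be sketched as the maintenance statements you would issue, built here as plain strings (the table name is hypothetical; on Databricks each string would be passed to spark.sql):

```python
def maintenance_sql(table):
    # OPTIMIZE first: compacts small files, so readers benefit sooner.
    # VACUUM second: removes files no longer referenced by the table,
    # including the small files that OPTIMIZE just replaced.
    return [f"OPTIMIZE {table}", f"VACUUM {table}"]

stmts = maintenance_sql("sales")
# Each statement would be run as spark.sql(stmt) on Databricks.
```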
- 1067 Views
- 2 replies
- 4 kudos
I have a Spark SQL notebook on DB where I have a SQL query like
SELECT *
FROM table_name
WHERE
  condition_1 = 'fname' OR condition_1 = 'lname' OR condition_1 = 'mname'
  AND condition_2 = 'apple'
  AND condition_3 = 'orange'
There are a lot ...
Latest Reply
Hi @John Constantine​, I think you can also use arrays_overlap() for your OR statements; docs here
1 More Replies
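The precedence issue behind the question: AND binds tighter than OR in SQL (and in Python), so without parentheses the query means something different from what is likely intended. A minimal Python sketch of the same pitfall (helper names are made up):

```python
def matches_as_written(c1, c2, c3):
    # Mirrors the SQL as written: the ANDs group with the *last* OR branch only.
    return (c1 == 'fname' or c1 == 'lname'
            or c1 == 'mname' and c2 == 'apple' and c3 == 'orange')

def matches_intended(c1, c2, c3):
    # Likely intent: parenthesize the OR group (or use IN).
    return c1 in ('fname', 'lname', 'mname') and c2 == 'apple' and c3 == 'orange'

matches_as_written('fname', 'pear', 'grape')  # True: c2/c3 are never checked
matches_intended('fname', 'pear', 'grape')    # False
```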
by
Braxx
• Contributor II
- 1146 Views
- 5 replies
- 5 kudos
I would like to implement a simple logic: if Df1 is empty, return Df2; else newDf = Df1.union(Df2). It may happen that Df1 is empty, and then the output is simply: []. In that case I do not need the union. I have it like this, but I am getting an error when creating the datafra...
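The control flow being asked for, sketched with plain Python lists standing in for DataFrames (in PySpark you would test emptiness with len(df1.head(1)) == 0 and combine with df1.union(df2)):

```python
def union_unless_empty(df1, df2):
    # If df1 is empty (or missing), there is nothing to union: return df2.
    if df1 is None or len(df1) == 0:
        return df2
    return df1 + df2  # list stand-in for df1.union(df2)

union_unless_empty([], [1, 2])    # [1, 2]
union_unless_empty([0], [1, 2])   # [0, 1, 2]
```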
by
Braxx
• Contributor II
- 4266 Views
- 7 replies
- 5 kudos
I am doing a conversion of a data frame to a nested dict/JSON. One of the columns, called "Problematic__c", is boolean type. For some reason json does not accept this data type, returning the error: "Object of type bool_ is not JSON serializable". I need this as...
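The error comes from numpy's bool_ scalar type, which the stdlib json module does not know how to serialize. One common fix is a default hook that converts numpy scalars to native Python values via their .item() method; a sketch below, exercised with a stand-in class since numpy may not be installed:

```python
import json

def to_native(obj):
    # numpy scalars (bool_, int64, float64, ...) all expose .item(),
    # which returns the equivalent built-in Python value.
    if hasattr(obj, "item"):
        return obj.item()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

# With numpy: json.dumps({"Problematic__c": np.bool_(True)}, default=to_native)

class FakeBool:  # stand-in for numpy.bool_ when numpy is unavailable
    def item(self):
        return True

json.dumps({"Problematic__c": FakeBool()}, default=to_native)  # '{"Problematic__c": true}'
```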
by
Manoj
• Contributor II
- 6611 Views
- 4 replies
- 8 kudos
Is there a way to submit multiple queries to a Databricks SQL endpoint using the REST API?
Latest Reply
@Manoj Kumar Rayalla​, DBSQL currently limits execution to 10 concurrent queries per cluster, so there could be some queuing with 30 concurrent queries. You may want to turn on multi-cluster load balancing to horizontally scale with 1 more cluster for...
3 More Replies
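Given the 10-concurrent-query limit per cluster mentioned above, one way to submit 30 queries from a client is to cap the client-side concurrency so the overflow queues locally rather than on the endpoint. A sketch; the run_query callable, which would wrap the actual REST call, is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_all(queries, run_query, limit=10):
    # At most `limit` queries in flight at once; the rest wait locally.
    with ThreadPoolExecutor(max_workers=limit) as pool:
        return list(pool.map(run_query, queries))

results = run_all(range(30), lambda q: q * 2)  # stand-in for a real REST call
```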
- 1050 Views
- 3 replies
- 3 kudos
Is there an alerting API so that alerts can be source controlled and automated, please? https://docs.databricks.com/sql/user/alerts/index.html
Latest Reply
Dan_Z
Honored Contributor
Hello @Nick Hughes​ , as of today we do not expose or document the API for these features. I think it will be a useful feature so I created an internal feature request for it (DB-I-4289). If you (or any future readers) want more information on this f...
2 More Replies
- 1521 Views
- 7 replies
- 2 kudos
Hi guys,​ consider this case: Company ACME (hypothetical company).​ This company does not use Delta, but uses open source Spark to process raw data into .parquet files. We have a 'sales' process which consists of receiving every hour a new dataset (.csv) within th...
Latest Reply
Hi @Jose Gonzalez​, I agree the best option is to use Auto Loader, but in some cases you don't have the Databricks platform and don't use Delta; in those cases you need to build a way to process the new raw files.
6 More Replies
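One way to "build a way to process the new raw files" without Auto Loader is to keep a small state file of what has already been seen, so each hourly run only picks up new arrivals. A minimal sketch; the directory layout, state-file name, and .csv suffix are assumptions:

```python
import json
import os

def new_files(input_dir, state_path):
    # Load the set of file names processed by previous runs.
    seen = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            seen = set(json.load(f))
    current = {name for name in os.listdir(input_dir) if name.endswith(".csv")}
    fresh = sorted(current - seen)
    # Persist the full current listing for the next run.
    with open(state_path, "w") as f:
        json.dump(sorted(current), f)
    return fresh
```

Each hourly run would then read only the files returned by new_files before writing them out as .parquet.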
by
kaslan
• New Contributor II
- 4283 Views
- 6 replies
- 0 kudos
I want to set up an S3 stream using Databricks Auto Loader. I have managed to set up the stream, but my S3 bucket contains different types of JSON files. I want to filter them out, preferably in the stream itself rather than using a filter operation. A...
Latest Reply
According to the docs you linked, the glob filter on the input path only works on directories, not on the files themselves. So if you want to filter on certain files in the dirs concerned, you can include an additional filter through the pathGlobFilter o...
5 More Replies
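pathGlobFilter matches against file names with ordinary glob syntax; Python's fnmatch uses similar rules, so the filtering behavior can be sketched locally (the file names and pattern here are made up):

```python
from fnmatch import fnmatch

# e.g. spark.readStream.format("cloudFiles")
#          .option("pathGlobFilter", "orders_*.json") ...
pattern = "orders_*.json"

fnmatch("orders_2021-10.json", pattern)     # True: picked up by the stream
fnmatch("customers_2021-10.json", pattern)  # False: filtered out
```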
- 8665 Views
- 7 replies
- 3 kudos
I have a function making API calls. I want to run this function in parallel so I can use the workers in Databricks clusters to run it in parallel. I have tried
with ThreadPoolExecutor() as executor:
    results = executor.map(getspeeddata, alist)
to run m...
Latest Reply
You guys are not getting the point: I am making API calls in a function and want to store the results in a dataframe. I want multiple processes to run this task in parallel. How do I create a UDF and use it in a dataframe when the task is calling an ...
6 More Replies
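For driver-side API calls, a UDF is not required: threads can make the calls in parallel and the collected results can then be turned into a DataFrame with spark.createDataFrame(rows). A sketch, where getspeeddata stands for the poster's function (the lambda here just echoes so the example is self-contained):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_rows(items, getspeeddata, workers=8):
    # Call the API function in parallel threads on the driver, then
    # shape the results as rows for spark.createDataFrame(rows).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(getspeeddata, items))
    return [{"input": item, "speed": result} for item, result in zip(items, results)]

rows = fetch_rows(["a", "b"], lambda x: x.upper())
# rows -> [{'input': 'a', 'speed': 'A'}, {'input': 'b', 'speed': 'B'}]
```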
- 253 Views
- 0 replies
- 0 kudos
Hi Juliet Wu, I completed my Databricks Apache Spark Associate Developer exam on 7/10/2021. After completion of the exam I got my badge in my WebAssessor mail one day later, on 8/10/2021​. But I didn't receive my...
by
Mihai1
• New Contributor III
- 1267 Views
- 1 replies
- 2 kudos
Is it possible to source control the dashboard along with the notebook code? When source controlling a Python notebook, it gets converted to *.py. It looks like the resulting *.py file loses the information about the dashboard cells. Thus, if this *.py ...
Latest Reply
Dan_Z
Honored Contributor
No, you will need to save as another source, like DBC Archive, to replicate the Notebook features.
- 3318 Views
- 11 replies
- 7 kudos
I am moving an existing, working pandas program into Databricks. I want to use the new pyspark.pandas library and change my code as little as possible. It appears that I should do the following: 1) add from pyspark import pandas as ps at the top, 2) ch...
Latest Reply
Make sure to use the 10.0 Runtime, which includes Spark 3.2.
10 More Replies
- 5312 Views
- 7 replies
- 2 kudos
Hi all, So far I have been successfully using the CLI interface to upload files from my local machine to DBFS/FileStore/tables. Specifically, I have been using my terminal and the following command: databricks fs cp -r <MyLocalDataset> dbfs:/FileStor...
Latest Reply
Hi @Ignacio Castineiras​, if Arjun.kr's answer fully resolved your question, would you be happy to mark it as best so that others can quickly find the solution? Please let us know if you are still having this issue.
6 More Replies
- 759 Views
- 2 replies
- 1 kudos
Hi! I'm working on a project at my company on Databricks using Scala and Spark. I'm new to Spark and Databricks, so I would like to know how to create a table at a specific location (on my company's Delta Lake). In SQL + some Delta features, I ...
Latest Reply
Hi @Adrien MERAT​, I would like to share the following documentation that provides examples of how to create Delta tables: Create Delta table link, Delta data types link
1 More Replies
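For reference, the SQL shape used to create a Delta table at an explicit path is CREATE TABLE ... USING DELTA LOCATION '...'. A small Python sketch that builds such a statement (the schema, table name, and path are made-up examples; on Databricks you would pass the string to spark.sql):

```python
def create_delta_table_ddl(table, location):
    # Columns here are illustrative only.
    return (
        f"CREATE TABLE IF NOT EXISTS {table} (id BIGINT, name STRING) "
        f"USING DELTA LOCATION '{location}'"
    )

ddl = create_delta_table_ddl("sales", "/mnt/delta/sales")
# spark.sql(ddl) would register the table over that path.
```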