Warehousing & Analytics

Forum Posts

User16826994223
by Honored Contributor III
  • 424 Views
  • 1 replies
  • 0 kudos

Resolved! Does Koalas support Structured Streaming

Does Koalas support Structured Streaming

Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

No, Koalas does not officially support Structured Streaming. As a workaround, you can use Koalas APIs with foreachBatch in Structured Streaming, which lets you apply batch APIs to each micro-batch, as in the sketch below.
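A minimal sketch of that pattern, assuming Spark 3.x with the koalas package installed (the rate source and the print are placeholders for illustration):

```python
import databricks.koalas as ks

def func(batch_df, batch_id):
    # Each micro-batch arrives as a regular Spark DataFrame, so it can be
    # wrapped in a Koalas DataFrame and processed with batch (pandas-style) APIs.
    koalas_df = ks.DataFrame(batch_df)
    print(koalas_df.head())

(spark.readStream
    .format("rate")        # placeholder source; substitute your actual stream
    .load()
    .writeStream
    .foreachBatch(func)    # runs func on every micro-batch
    .start())
```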

  • 0 kudos
christys
by Community Manager
  • 385 Views
  • 1 replies
  • 2 kudos
Latest Reply
Taha
New Contributor III
  • 2 kudos

So if you've got an S3 bucket with your data in it, the first thing you'll need to do is connect it to a Databricks workspace to grant access. Then you can start querying the contents of the bucket from notebooks (or running jobs) by using clusters (...
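A minimal sketch of that first step, assuming the workspace already has credentials for the bucket (the bucket name, mount point, and path below are placeholders):

```python
# Mount the bucket once; afterwards any cluster in the workspace can read it via DBFS.
dbutils.fs.mount("s3a://my-example-bucket", "/mnt/my-example-bucket")

# Then query its contents from a notebook attached to a cluster.
df = spark.read.csv("/mnt/my-example-bucket/data/", header=True)
display(df)
```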

  • 2 kudos
Anonymous
by Not applicable
  • 358 Views
  • 1 replies
  • 0 kudos
Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

The frequency of logs in stdout/stderr is a function of the code you run on the Databricks clusters. The default log level for log4j is INFO; you can change it by following the instructions here.
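For example, a minimal sketch of changing the driver's log4j level from a notebook (assuming the log4j 1.x that ships with the runtime):

```python
# Raise the root logger from the default INFO to WARN to cut stdout/stderr noise.
log4j = sc._jvm.org.apache.log4j
log4j.LogManager.getRootLogger().setLevel(log4j.Level.WARN)
```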

  • 0 kudos
Digan_Parikh
by Valued Contributor
  • 717 Views
  • 1 replies
  • 0 kudos

Resolved! DBSQL connection to other BI tools

How do i connect DBSQL to other BI tools?

Latest Reply
Digan_Parikh
Valued Contributor
  • 0 kudos

Generally, you can connect to a SQL endpoint using an ODBC or JDBC driver. More information can be found here: https://docs.databricks.com/integrations/bi/index-sqla.html
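Besides the ODBC/JDBC drivers that the link documents, a minimal Python sketch using the databricks-sql-connector package shows the same endpoint connectivity (the hostname, HTTP path, and token are placeholders for your endpoint's connection details):

```python
from databricks import sql

conn = sql.connect(
    server_hostname="<workspace-host>.cloud.databricks.com",
    http_path="/sql/1.0/endpoints/<endpoint-id>",
    access_token="<personal-access-token>",
)
cursor = conn.cursor()
cursor.execute("SELECT 1")  # trivial query to verify connectivity
print(cursor.fetchall())
cursor.close()
conn.close()
```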

  • 0 kudos
User16826992666
by Valued Contributor
  • 917 Views
  • 2 replies
  • 0 kudos
Latest Reply
Digan_Parikh
Valued Contributor
  • 0 kudos

When sizing, this is the recommendation:

Data set → Cluster size
  • 1TB / ... rows → X-Large+
  • 500GB / 1B rows → X-Large
  • 100GB / 100M+ rows → Large
  • 50GB / ... rows → Medium
  • 10GB / ...M rows → Small

This table maps SQL endpoint cluster sizes to Databricks cluster driver sizes and wo...

  • 0 kudos
User16826987838
by Contributor
  • 464 Views
  • 1 replies
  • 0 kudos
Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

Regarding the second question: yes. Just don't grant CAN_RUN to a user/group. See https://docs.databricks.com/sql/user/security/access-control/dashboard-acl.html#dashboard-permissions

  • 0 kudos
User16826992666
by Valued Contributor
  • 2881 Views
  • 1 replies
  • 0 kudos

Resolved! Can I implement Row Level Security for users when using SQL Endpoints?

I'd like to be able to limit the rows users see when querying tables in Databricks SQL based on what access level each user is supposed to be granted. Is this possible in the SQL environment?

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

Using dynamic views you can specify permissions down to the row or field level, e.g.:

CREATE VIEW sales_redacted AS
SELECT user_id, country, product, total
FROM sales_raw
WHERE CASE
    WHEN is_member('managers') THEN TRUE
    ELSE total <= 1000000
  END;

  • 0 kudos
User16826992666
by Valued Contributor
  • 731 Views
  • 1 replies
  • 0 kudos

Resolved! Should I enable Photon on my SQL Endpoint?

I see the option to enable Photon when creating a new SQL Endpoint. The description says that enabling it helps speed up queries, which sounds good, but are there any downsides I need to be aware of?

Latest Reply
Ryan_Chynoweth
Honored Contributor III
  • 0 kudos

Generally, yes, you should enable Photon. The majority of functionality is available and will perform extremely well. There are some limitations, which can be found here. Limitations: works on Delta and Parquet tables only, for both read and writ...

  • 0 kudos
User16826992666
by Valued Contributor
  • 967 Views
  • 1 replies
  • 0 kudos

Resolved! How can I see the performance of individual queries in Databricks SQL?

If I want to get more information about how an individual query is performing in the Databricks SQL environment, is there anywhere I can see that?

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

You can see details on the queries that ran against an endpoint under the Query History section.

  • 0 kudos
User16826992666
by Valued Contributor
  • 4259 Views
  • 1 replies
  • 0 kudos

Resolved! Can you run Structured Streaming on a job cluster?

Need to know if I can use job clusters to start and run streaming jobs or if it has to be interactive

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

Yes. Here is a doc containing some info on running Structured Streaming in production using Databricks jobs.

  • 0 kudos
User16788316451
by New Contributor II
  • 447 Views
  • 1 replies
  • 0 kudos

How to troubleshoot SSL certificate errors while connecting Business Intelligence (BI) tools to Databricks in a Private Cloud (PVC) environment?

Latest Reply
User16788316451
New Contributor II
  • 0 kudos

See attached for steps to inspect the certificate chain using openssl.
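A common starting point for inspecting the chain is openssl's s_client (the hostname is a placeholder for your workspace URL):

```
openssl s_client -connect <workspace-host>.cloud.databricks.com:443 -showcerts
```

This prints every certificate the server presents, which makes a missing intermediate or a proxy-substituted certificate easy to spot.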

  • 0 kudos
Srikanth_Gupta_
by Valued Contributor
  • 1500 Views
  • 1 replies
  • 1 kudos

Reading bulk CSV files from Spark

While trying to read a 100GB csv.gz file with Spark, which is taking forever, what are the best options to read this file faster?

Latest Reply
sean_owen
Honored Contributor II
  • 1 kudos

Part of the problem here is that .gz files are not splittable. If you have one huge 100GB .gz file, it can only be processed by one task. Can you change your input to use a splittable compression like .bz2? It'll work much better.
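If changing the compression isn't possible, one common mitigation is to repartition right after the single-task read so that later stages run in parallel (a sketch; the path and partition count are placeholders):

```python
df = (spark.read
    .option("header", "true")
    .csv("s3a://my-bucket/huge-file.csv.gz")  # still read by a single task
    .repartition(256))                        # parallelize all downstream work
```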

  • 1 kudos
User16826992666
by Valued Contributor
  • 792 Views
  • 1 replies
  • 0 kudos
Latest Reply
sean_owen
Honored Contributor II
  • 0 kudos

No. If you use %pip or %conda to attach a library, then it will only affect the execution of the notebook. A separate virtualenv is created for each notebook and its dependencies, even on a shared cluster. If you create a Library in the workspace and ...
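For example (a sketch; the package name and version are placeholders):

```python
# Installs into this notebook's virtualenv only; other notebooks on the
# same cluster are unaffected.
%pip install pandas==1.3.5
```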

  • 0 kudos
User16753724663
by Valued Contributor
  • 3658 Views
  • 1 replies
  • 0 kudos

Unable to use JDBC/ODBC url with sql workbench

SQL Workbench is not able to connect to the cluster using a JDBC/ODBC connection and returns the following error. I used the configuration provided by the cluster (jdbc:spark://<host>.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=sql/prot...

Latest Reply
User16753724663
Valued Contributor
  • 0 kudos

A 401 error indicates an authentication issue. Use a personal access token (PAT) as the password: the username should be "token" and the password should be the PAT.
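A sketch of what that looks like in the Simba Spark JDBC URL, using the username/password mechanism (AuthMech=3); the host, HTTP path, and token are placeholders:

```
jdbc:spark://<host>.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>
```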

  • 0 kudos