Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Full Memory Utilization

harishgehlot_03
New Contributor III

Hi Databricks Community. I need some suggestions on an issue. We are using a Databricks Asset Bundle to deploy our forecasting repo and AWS nodes to run the forecast jobs, and we have built a proper workflow.yml file to trigger the jobs.

  • I am using a single-node cluster because our forecasting module is currently pandas-based only (no Spark or distribution, but we are using joblib Parallel).
  • Right now we are using an r6i.xlarge node (32 GB & 4 cores). When we run on this node, our code utilizes 28-30 GB and keeps the rest free. This job took 15 hours to complete.

[Screenshot harishgehlot_03_0-1750165393984.png: memory utilization on the r6i.xlarge node, ~28-30 GB in use]

  • Now I've switched to r6i.4xlarge (128 GB & 16 cores), expecting it to run much faster than the r6i.xlarge. BUT WHAT I OBSERVED is that it still uses only around 30-31 GB and leaves the other ~90 GB free. I expected it to expand into the extra memory and cores and complete the job faster (see the joblib sketch below the screenshot).

[Screenshot harishgehlot_03_1-1750165586159.png: memory utilization on the r6i.4xlarge node, still ~30-31 GB in use]
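
For context, below is a minimal sketch of the joblib Parallel pattern in play; it is illustrative only, and forecast_one_series, run_forecasts, and series_frames are placeholder names, not the actual repo code. The point it shows: a pandas + joblib job only uses as many cores (and roughly as much memory) as its worker processes demand, so a bigger node changes nothing unless n_jobs grows with it.

  import os

  import pandas as pd
  from joblib import Parallel, delayed

  def forecast_one_series(df: pd.DataFrame) -> pd.DataFrame:
      # Placeholder for the per-series forecasting logic.
      return df

  def run_forecasts(series_frames: list[pd.DataFrame]) -> list[pd.DataFrame]:
      # Tie n_jobs to the visible CPU count: 4 workers on r6i.xlarge, 16 on
      # r6i.4xlarge, so the extra cores are actually used. A hard-coded
      # n_jobs (e.g. 4) would explain why the bigger node shows no speedup.
      n_jobs = os.cpu_count()
      return Parallel(n_jobs=n_jobs, backend="loky")(
          delayed(forecast_one_series)(df) for df in series_frames
      )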

 

Below are the workflow and cluster settings being used. Let me know if there is something that needs to be changed or tuned. Tagging @Shua42, because you also helped me before. Thanks in advance.

  dev:
    resources:
      clusters:
        dev_cluster: &dev_cluster
          num_workers: 0
          kind: CLASSIC_PREVIEW
          is_single_node: true
          spark_version: 14.3.x-scala2.12
          node_type_id: r6i.4xlarge
          custom_tags:
            clusterSource: ts-forecasting-2
            ResourceClass: SingleNode
          data_security_mode: SINGLE_USER
          enable_elastic_disk: true
          enable_local_disk_encryption: false
          autotermination_minutes: 20
          docker_image:
            url: "*****.amazonaws.com/dev-databricks:retailforecasting-latest"
          aws_attributes:
            availability: SPOT
            instance_profile_arn: ****
            ebs_volume_type: GENERAL_PURPOSE_SSD
            ebs_volume_count: 1
            ebs_volume_size: 50
          spark_conf:
            spark.databricks.cluster.profile: singleNode
            spark.memory.offHeap.enabled: false
            spark.driver.memory: 4g
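
As a quick sanity check of what the Python process actually sees on this cluster (spark.driver.memory only sizes the driver JVM heap, so it neither caps nor grants memory to the pandas/joblib code), something like the following can be run in the job. A minimal sketch, assuming psutil is installed in the Docker image, as it is on standard Databricks runtimes:

  import os

  import psutil  # assumed available in the image; install it if missing

  # Cores visible to the Python process: should report 16 on r6i.4xlarge.
  print("visible CPUs:", os.cpu_count())

  # RAM visible to the OS: ~128 GB total on r6i.4xlarge.
  mem = psutil.virtual_memory()
  print(f"total RAM: {mem.total / 1e9:.1f} GB, available: {mem.available / 1e9:.1f} GB")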

 


3 REPLIES

Raghavan93513
Databricks Employee

Hi @harishgehlot_03 
Good day!

May I know what the time was in the second case, using an r6i.4xlarge instance type?

Hi @Raghavan93513, thanks for responding. The time taken in the second case is ~14 hours.

Hi @Raghavan93513, let me know if there is any spark.conf I can set, or anything else, that would help me utilize a larger proportion of the memory instead of the job limiting itself. Note: this is a pandas workflow (not using Spark so far).
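
Worth noting: since nothing here runs through Spark, no spark_conf setting will change how much memory the pandas code itself uses; memory consumption tracks the size of the data and the number of joblib workers running concurrently. If the repo's Parallel calls leave n_jobs unset, joblib's parallel_backend context manager can raise the default worker count from the entry point. A minimal sketch, where run_all_forecasts is a hypothetical placeholder, not the real entry point:

  from joblib import Parallel, delayed, parallel_backend

  def run_all_forecasts() -> list[int]:
      # Placeholder for the repo's real entry point: a toy Parallel call that
      # leaves n_jobs unset, so the surrounding context manager's value applies.
      return Parallel()(delayed(pow)(i, 2) for i in range(32))

  if __name__ == "__main__":
      # parallel_backend sets the default backend and worker count for Parallel
      # calls inside the block; a call that passes an explicit n_jobs keeps its own.
      with parallel_backend("loky", n_jobs=16):
          results = run_all_forecasts()
      print(len(results), "tasks completed")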
