How can we run scala in a jupyter notebook?

Kaniz
Community Manager
 
1 ACCEPTED SOLUTION

Kaniz
Community Manager

Step 1: install the package

pip install spylon-kernel
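
If Jupyter runs in a specific Python environment (for example a virtualenv or conda environment), install the package into that same environment. One way to do that, assuming python points at the environment that runs Jupyter, is:

python -m pip install spylon-kernel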

Step 2: create a kernel spec

This will allow us to select the Scala kernel in the notebook.

python -m spylon_kernel install
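
To confirm the kernel spec was registered, you can list the kernels Jupyter knows about; a spylon_kernel entry should appear in the output (the exact name may vary by version):

jupyter kernelspec list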

Step 3: start the Jupyter notebook

jupyter notebook

In the notebook, select

New -> spylon-kernel

This will start our Scala kernel.

Step 4: test the notebook

Let's write some Scala code:

val x = 2
val y = 3
x + y

The output should show the values of x and y and the result of x + y (an image of the output was attached in the original post). As you can see, the kernel also starts the Spark components; for this, please make sure you have SPARK_HOME set.

Now we can even use Spark. Let's test it by creating a Dataset:

val data = Seq((1,2,3), (4,5,6), (6,7,8), (9,19,10))
val ds = spark.createDataset(data)
ds.show()

This should output a simple data frame (an image of the output was attached in the original post).
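
If you prefer named columns, the same data can also be turned into a DataFrame. A minimal sketch, assuming the Spark implicits can be imported in this kernel as they can in a Spark shell:

import spark.implicits._
// give the three tuple fields explicit column names
val df = data.toDF("a", "b", "c")
df.show()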

image" data-fileid="0698Y00000JFV4MQAXAnd we can even use python in this kernel using the command 

%python
 
 :
 
%%python
 
x=2
 
print(x)

For more info, you can visit the spylon-kernel GitHub page.

The notebook with the code above is available here.


