Spark (< v1.6) uses Akka underneath. So does Play. You should be able to write a Spark action as an actor that communicates with a receiving actor in the Play system (that you also write).

You can let Akka worry about de/serialization, which will work as long as both systems have the same class definitions on their classpaths.
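The same-definition caveat applies to any serialization scheme, not just Akka's. As a rough analogy in Python (a stand-in here, since the thread is about JVM classpaths), pickle only round-trips an object if the receiving process can import the same class definition:

```python
import pickle

# A class both "sides" must share; if the receiver's definition differs
# (or is missing), unpickling fails or yields mismatched objects --
# the same constraint Akka serialization puts on the two classpaths.
class Result:
    def __init__(self, count):
        self.count = count

payload = pickle.dumps(Result(42))   # "sender" serializes
restored = pickle.loads(payload)     # "receiver" deserializes
print(restored.count)                # -> 42
```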

If you want to go further than that, you can write Akka Streams code that tees the data stream to your Play application.


Check this link out. You need to run Spark in local mode (on your web server), and the offline ML model should be saved in S3 so the web app can access it from there. Cache the model just once, and you will have a SparkContext running continuously in local mode.
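A minimal sketch of the cache-once idea; `load_model` below is a stand-in for whatever actually pulls the serialized model out of S3 (e.g. via boto3), which is assumed rather than shown:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def load_model():
    # Stand-in for an expensive one-time load, e.g. downloading a
    # serialized model from S3 and deserializing it. The body runs
    # once; every later call returns the cached object.
    print("loading model...")
    return {"weights": [0.1, 0.2, 0.3]}

model_a = load_model()     # triggers the load
model_b = load_model()     # served from cache, no second load
print(model_a is model_b)  # -> True
```

Refreshing the cache after a batch retrain is then just a matter of invalidating this one entry (`load_model.cache_clear()`), which is where the brief downtime mentioned below comes from.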

Another approach is to use Livy, which exposes a REST API for submitting Spark jobs.
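For reference, a Livy batch submission is an HTTP POST against its `/batches` endpoint. The sketch below only constructs the request (host, artifact path, and class name are placeholders) rather than sending it, since that needs a running Livy server:

```python
import json
import urllib.request

LIVY_URL = "http://livy-host:8998/batches"   # placeholder endpoint

# Body for Livy's POST /batches: the artifact to run plus its arguments.
body = {
    "file": "s3://bucket/path/to/job.jar",   # placeholder artifact
    "className": "com.example.SparkJob",     # placeholder main class
    "args": ["2024-01-01"],
}

req = urllib.request.Request(
    LIVY_URL,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
# Actually sending it would be: urllib.request.urlopen(req)
```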

The S3 option is the way forward, I guess. If the batch model changes, you need to refresh the website cache, which means a few minutes of downtime.

Look into these links.

Thanks, Sri


Spark is a fast, large-scale data processing platform. The key here is large-scale data: in most cases, the time to process it will not be fast enough to meet the expectations of your average web app user. It is far better practice to perform the processing offline and write the results of your Spark job to e.g. a database. Your web app can then efficiently retrieve those results by querying that database.
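The offline-compute-plus-cheap-lookup split can be sketched like this, with sqlite3 standing in for whatever store the Spark job writes to (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_counts (day TEXT PRIMARY KEY, hits INTEGER)")

# Offline step: the Spark job would batch-write its aggregates here
# (e.g. via a JDBC sink); direct inserts stand in for that.
conn.executemany(
    "INSERT INTO daily_counts VALUES (?, ?)",
    [("2024-01-01", 120), ("2024-01-02", 340)],
)

# Online step: the web app answers each request with a cheap indexed
# lookup instead of touching Spark at all.
row = conn.execute(
    "SELECT hits FROM daily_counts WHERE day = ?", ("2024-01-02",)
).fetchone()
print(row[0])  # -> 340
```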

That being said, Spark Job Server provides a REST API for submitting Spark jobs.


One of the issues you will run into with Spark is that it takes some time to start up and build a SparkContext. If you want to serve Spark queries via web calls, it will not be practical to fire up spark-submit every time. Instead, you will want to turn your driver application (these terms will make more sense later) into an RPC server.

In my application I am embedding a web server (http4s) so I can make XMLHttpRequest calls from JavaScript to query my application directly, which returns JSON objects.
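The shape of that setup, with Python's stdlib `http.server` standing in for http4s: the expensive context is built once at process start, and every request reuses it and answers with JSON. Names and the toy context are illustrative only:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Built once when the process starts -- the analogue of constructing a
# SparkContext in a long-lived driver instead of per request.
EXPENSIVE_CONTEXT = {"started": True}

class QueryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each request reuses the already-built context and returns JSON,
        # like the browser-to-driver queries described above.
        result = json.dumps({"ok": EXPENSIVE_CONTEXT["started"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), QueryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    answer = resp.read().decode()
print(answer)  # -> {"ok": true}
server.shutdown()
```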
