score:1
Spark (prior to v1.6) uses Akka under the hood, and so does Play. You should be able to write a Spark action as an actor that communicates with a receiving actor in the Play system (which you also write).
You can let Akka worry about de/serialization, which will work as long as both systems have the same class definitions on their classpaths.
If you want to go further than that, you can write Akka Streams code that tees the data stream to your Play application.
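The key constraint above is that both sides share the same class definitions. A minimal sketch of that round trip using plain JVM serialization (the Prediction message type is hypothetical; Akka's remoting does this wiring for you in practice):

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Hypothetical message type; the same class definition must be on the
// classpath of both the Spark side and the Play side.
case class Prediction(userId: Long, score: Double) extends Serializable

object WireFormat {
  // Serialize a message the way default JVM serialization would.
  def serialize(p: Prediction): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bos)
    out.writeObject(p)
    out.close()
    bos.toByteArray
  }

  // Deserialize on the receiving side; fails if the class is missing there.
  def deserialize(bytes: Array[Byte]): Prediction = {
    val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
    in.readObject().asInstanceOf[Prediction]
  }
}
```

If the receiving JVM lacks the `Prediction` class, deserialization throws `ClassNotFoundException`, which is exactly the shared-classpath requirement described above.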
score:1
Check out the link below: you need to run Spark in local mode (on your web server), and the offline ML model should be saved in S3. The web app can then load the model from S3, cache it just once, and keep the local-mode Spark context running continuously.
https://commitlogs.com/2017/02/18/serve-spark-ml-model-using-play-framework-and-s3/
Another approach is to use Livy (REST API calls to Spark):
https://index.scala-lang.org/luqmansahaf/play-livy-module/play-livy/1.0?target=_2.11
The S3 option is the way forward, I guess; if the batch model changes, you need to refresh the website cache, which means a few minutes of downtime.
Also look into these links:
https://github.com/openforce/spark-mllib-scala-play/blob/master/app/modules/SparkUtil.scala
https://github.com/openforce/spark-mllib-scala-play
Thanks Sri
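The "cache the model just once" part is easy to get right with a lazy singleton. A sketch of the pattern (loadModelFromS3 is a stand-in for something like `PipelineModel.load("s3a://bucket/model")`; the String model type is a placeholder):

```scala
object ModelCache {
  @volatile private var loads = 0
  def loadCount: Int = loads

  // Stand-in for fetching the saved model from S3; hypothetical here.
  // In a real app this would deserialize the offline-trained ML model.
  private def loadModelFromS3(): String = {
    loads += 1
    "model-v1"
  }

  // Loaded once on first access, then reused for every web request.
  lazy val model: String = loadModelFromS3()
}
```

Refreshing after a batch retrain then means replacing this cached instance, which is where the brief downtime mentioned above comes from.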
score:3
Spark is a fast, large-scale data processing platform. The key here is large-scale data: in most cases, the time to process that data will not be fast enough to meet the expectations of your average web app user. It is far better practice to perform the processing offline and write the results of your Spark processing to, e.g., a database. Your web app can then efficiently retrieve those results by querying that database.
That being said, Spark Job Server provides a REST API for submitting Spark jobs.
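Spark Job Server's workflow is roughly: upload your application jar once, then POST a job per request. A sketch (host, port, app name, and class path are all hypothetical; the curl commands are echoed so the sketch runs without a live server, drop the `echo` to run them for real):

```shell
# Hypothetical Spark Job Server instance; adjust host/port to yours.
JOBSERVER=http://localhost:8090

# Upload the application jar once:
echo curl --data-binary @target/my-spark-job.jar "$JOBSERVER/jars/my-app"

# Submit a job per request and wait synchronously for the result:
echo curl -d "input.string = some input" \
  "$JOBSERVER/jobs?appName=my-app&classPath=com.example.MyJob&sync=true"
```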
score:4
One of the issues you will run into with Spark is that it takes some time to start up and build a SparkContext. If you want to serve Spark queries via web calls, it will not be practical to fire up spark-submit for every request. Instead, you will want to turn your driver application (these terms will make more sense later) into an RPC server.
In my application I am embedding a web server (http4s) so I can do XmlHttpRequests in JavaScript to directly query my application, which will return JSON objects.
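Since http4s specifics vary a lot by version, here is the same pattern sketched with the JDK's built-in `com.sun.net.httpserver` instead: a long-lived server embedded in the driver that answers HTTP queries with JSON. The `query` function is a stand-in for a call into the long-lived SparkContext; the JSON shape is hypothetical.

```scala
import com.sun.net.httpserver.{HttpExchange, HttpServer}
import java.net.InetSocketAddress

object DriverRpcServer {
  // Stand-in for a query against a long-lived SparkContext; in a real
  // driver this would run a Spark job and collect the result.
  def query(q: String): String = s"""{"query":"$q","status":"ok"}"""

  // Start an embedded HTTP server; pass port 0 for an ephemeral port.
  def start(port: Int): HttpServer = {
    val server = HttpServer.create(new InetSocketAddress(port), 0)
    server.createContext("/query", (exchange: HttpExchange) => {
      val q = Option(exchange.getRequestURI.getQuery).getOrElse("")
      val bytes = query(q).getBytes("UTF-8")
      exchange.getResponseHeaders.set("Content-Type", "application/json")
      exchange.sendResponseHeaders(200, bytes.length)
      exchange.getResponseBody.write(bytes)
      exchange.close()
    })
    server.start()
    server
  }
}
```

An XmlHttpRequest from the browser to `/query?...` then gets a JSON object back, exactly as described above, without paying the SparkContext startup cost per request.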
Source: stackoverflow.com