score:0
To add an example of @Fountaine007's first bullet:
I ran into the same issue, and it was because the allocated vCores were fewer than the application expected. For my specific scenario, I increased the value of yarn.nodemanager.resource.cpu-vcores in $HADOOP_HOME/etc/hadoop/yarn-site.xml. For memory-related issues, you may also need to modify yarn.nodemanager.resource.memory-mb.
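A minimal sketch of the relevant yarn-site.xml entries (the values 8 and 8192 are arbitrary examples, not recommendations; pick values that match your node's actual hardware):

```xml
<!-- $HADOOP_HOME/etc/hadoop/yarn-site.xml -->
<configuration>
  <!-- Number of vCores this NodeManager may hand out to containers (example value) -->
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>8</value>
  </property>
  <!-- Amount of memory, in MB, this NodeManager may hand out to containers (example value) -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
</configuration>
```

Restart the NodeManager after editing the file so the new limits take effect.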
score:6
There are two known reasons for this:
1. Your application requires more resources (cores, memory) than were allocated. Increasing worker cores and memory should solve it; most of the other answers focus on this.
2. Less well known: a firewall is blocking communication between the master and the workers. This can happen especially if you are using a cloud service. According to Spark Security, besides the standard 8080, 8081, 7077, and 4040 ports, you also need to make sure the master and workers can communicate via SPARK_WORKER_PORT, spark.driver.port, and spark.blockManager.port; the latter three are used when submitting jobs and are randomly assigned by the program (if left unconfigured). You may try opening all ports to run a quick test.
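One way to make the firewall case manageable is to pin the otherwise randomly assigned ports to fixed values, then open exactly those ports. A sketch (the 4xxxx port numbers are arbitrary examples, not Spark defaults):

```
# conf/spark-env.sh on each worker: fix the worker's RPC port
SPARK_WORKER_PORT=40000

# On the driver side, pass at submit time (or set in conf/spark-defaults.conf):
#   spark-submit \
#     --conf spark.driver.port=40010 \
#     --conf spark.blockManager.port=40020 \
#     ... your application ...
```

With these set, the firewall rules only need to allow 7077, 8080/8081, 4040, and the three pinned ports between the master, workers, and the submitting machine.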
Source: stackoverflow.com