Accepted answer

Please look at the option "es.nodes.wan.only". By default, this key is set to "false", and when I set it to "true", the exception went away. Here is how the documentation shows setting these configuration values:

val conf = new org.apache.spark.SparkConf()
 .set("es.nodes", "")                 // Elasticsearch node address
 .set("es.nodes.wan.only", "true")    // connect only through the declared nodes

Note that the documentation says to flip this value to true for restricted environments such as AWS, but in my case the exception occurred when pointing to a VM running Elasticsearch.
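For context, a minimal read through the connector might look like the sketch below. This assumes the elasticsearch-spark dependency is on the classpath; the host address and index name are placeholders I made up, not values from my setup:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._  // adds esRDD to SparkContext

// Hypothetical host and index names, for illustration only.
val conf = new SparkConf()
  .setAppName("es-read-example")
  .set("es.nodes", "my-es-vm.local:9200")  // placeholder node address
  .set("es.nodes.wan.only", "true")        // use only the declared nodes

val sc = new SparkContext(conf)
val rdd = sc.esRDD("my-index")             // placeholder index name
println(rdd.count())
```

With wan.only enabled, the connector stops trying to discover and reach other nodes in the cluster directly, which is what fails when only the declared address is routable from the Spark driver.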
