
Accepted answer

Look at the option "es.nodes.wan.only". By default this key is set to "false"; once I set it to "true", the exception went away. The current documentation for the configuration values is here: https://www.elastic.co/guide/en/elasticsearch/hadoop/current/configuration.html.

val conf = new org.apache.spark.SparkConf()
 .setMaster("local[*]")
 .setAppName("es-example")
 .set("es.nodes", "search-2meoihmu.us-est-1.es.amazonaws.com")
 .set("es.nodes.wan.only", "true") // restrict connections to the declared nodes only

Note that the documentation says to flip this value to true for cloud or restricted environments such as AWS, but I hit this exception even when pointing at a VM running Elasticsearch.
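For context, here is a minimal sketch of how a conf like the one above might be used end to end with the elasticsearch-spark connector. The index name "myindex/doc" and the connector import are assumptions for illustration, not part of the original answer, and the code needs a reachable Elasticsearch cluster to actually run.

```scala
import org.apache.spark.{SparkConf, SparkContext}
// Assumes the elasticsearch-spark connector is on the classpath;
// this import adds esRDD to SparkContext via an implicit conversion.
import org.elasticsearch.spark._

val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("es-example")
  .set("es.nodes", "search-2meoihmu.us-est-1.es.amazonaws.com")
  .set("es.nodes.wan.only", "true") // talk only to the declared nodes, not discovered ones

val sc = new SparkContext(conf)

// "myindex/doc" is a hypothetical index/type used for illustration.
val rdd = sc.esRDD("myindex/doc")
println(rdd.count())
```

With es.nodes.wan.only enabled, the connector routes all requests through the nodes listed in es.nodes instead of trying to discover and connect to every data node directly, which is why it helps when the cluster sits behind a proxy or cloud endpoint.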
