
You are missing the configuration for your Spark driver. Because of that the operator cannot start the driver pod, so nothing ever runs and the application just sits there stuck. Take your Spark driver configuration, turn it into a ConfigMap resource, and apply that ConfigMap to the same namespace your Spark driver pods run in before you submit any applications.

Kubernetes makes sure that any volumes, Secrets, ConfigMaps, etc. a pod references are available before its containers are started; until the missing ConfigMap exists, the driver pod stays stuck. That is what you are observing.

Google's spark operator repository has examples of shipping Spark driver configuration as a ConfigMap; the idea is sketched below.
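A minimal sketch of such a ConfigMap, assuming the name `spark-driver-conf`, the namespace `spark-jobs`, and the property values are all placeholders you would replace with your own:

```yaml
# Hypothetical example: the name, namespace and property values are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-driver-conf    # must match the name the driver pod / application references
  namespace: spark-jobs      # must be the namespace the Spark driver pods run in
data:
  spark-defaults.conf: |
    spark.executor.instances    2
    spark.executor.memory       2g
    spark.driver.memory         1g
    spark.eventLog.enabled      true
```

Apply it in that namespace (for example with `kubectl apply -f spark-driver-conf.yaml`) before submitting the application.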

Simply take all the --conf parameters from your command, turn them into entries in that ConfigMap, and apply it; your application can then reference the ConfigMap, as shown in the sketch below. It should work.
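If you are running the GoogleCloudPlatform spark-on-k8s-operator, the SparkApplication spec has a `sparkConfigMap` field that names a ConfigMap of Spark configuration files; the operator mounts it into the driver and executor pods and points `SPARK_CONF_DIR` at the mount path. A hedged sketch, where the application name, image, and main class are placeholders:

```yaml
# Hypothetical example: metadata, image and main class are placeholders.
# sparkConfigMap names the ConfigMap created above so the operator mounts it into the pods.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-spark-app
  namespace: spark-jobs
spec:
  type: Scala
  mode: cluster
  image: gcr.io/my-project/spark:latest          # placeholder image
  mainClass: org.example.Main                    # placeholder main class
  mainApplicationFile: local:///opt/spark/jars/app.jar
  sparkConfigMap: spark-driver-conf              # the ConfigMap holding spark-defaults.conf
  driver:
    cores: 1
    memory: 1g
  executor:
    instances: 2
    memory: 2g
```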

Note: Kubernetes keeps checking for the ConfigMap on its own, so once you apply it with the correct name in the correct namespace, the pod should start running without any further action.

