ApplicationAttemptNotFoundException: Spark Application Stuck in ACCEPTED state on YARN
One of the reasons for this error is a lack of free disk space in HDFS.
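You can confirm that applications really are stuck by listing them by state with the standard YARN CLI:
$ yarn application -list -appStates ACCEPTED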
Is the disk almost full?
$ hadoop fs -df -h
Filesystem                                   Size     Used  Available  Use%
hdfs://ip-172-16-1-188.ec2.internal:8020  143.9 G  114.2 G     17.7 G   79%
Which directory is responsible?
In my case it's /var/log/spark/apps, where completed Spark applications leave their event logs:
$ hadoop fs -du -h /var/log/spark
108.2 G /var/log/spark/apps
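If you don't already know where to look, ranking directories by raw byte count is more reliable than trying to sort the human-readable sizes. The glob is quoted so HDFS expands it rather than your local shell, and the path is just an example:
$ hadoop fs -du -s '/var/log/*' | sort -n -r | head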
This tends to happen on long-running clusters: every application leaves an event log behind, and nothing cleans them up by default.
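On EMR this directory is the default spark.eventLog.dir, so the permanent fix is to let the Spark history server expire old logs. A minimal sketch for spark-defaults.conf (the property names are standard Spark; the retention values are just examples):

spark.history.fs.cleaner.enabled   true
spark.history.fs.cleaner.interval  1d
spark.history.fs.cleaner.maxAge    7d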
For immediate relief, delete everything under /var/log/spark/apps/ and recreate the empty directory so new applications can keep writing their event logs. Pass -skipTrash, otherwise the files are only moved to the HDFS trash and no space is reclaimed:
$ hadoop fs -rm -r -f -skipTrash /var/log/spark/apps/
$ hadoop fs -mkdir /var/log/spark/apps/
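Then re-run the capacity check to confirm the space came back:
$ hadoop fs -df -h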