After setting up a Hadoop environment, the first thing we do is run the start-all.sh command. So what exactly does this command do? Let's dig into it today.

Contents of start-all.sh

# Start all hadoop daemons.  Run this on master node.

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# start dfs daemons

"$bin"/start-dfs.sh --config $HADOOP_CONF_DIR

# start mapred daemons

"$bin"/start-mapred.sh --config $HADOOP_CONF_DIR

As you can see above, this script runs three other scripts: hadoop-config.sh, start-dfs.sh, and start-mapred.sh.
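Note the `bin=`dirname "$0"`; bin=`cd "$bin"; pwd`` idiom at the top of each of these scripts: it converts the possibly-relative path the script was invoked with into the absolute directory containing the script, so that sibling scripts can be called by absolute path. A minimal standalone demo (the temp-file setup is just for illustration):

```shell
#!/bin/sh
# Demonstrate the dirname + cd + pwd idiom: even when the script is
# invoked via a relative path, $bin ends up as an absolute directory.
tmpdir=$(mktemp -d)
cat > "$tmpdir/demo.sh" <<'EOF'
bin=`dirname "$0"`
bin=`cd "$bin"; pwd`
echo "$bin"
EOF
cd "$tmpdir"
out=$(sh ./demo.sh)    # relative invocation: dirname "$0" is just "."
echo "$out"            # prints the absolute path of $tmpdir
```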

Contents of hadoop-config.sh

this="$0"
while [ -h "$this" ]; do
  ls=`ls -ld "$this"`
  link=`expr "$ls" : '.*-> \(.*\)$'`
  if expr "$link" : '.*/.*' > /dev/null; then
    this="$link"
  else
    this=`dirname "$this"`/"$link"
  fi
done
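This loop chases `$this` through a chain of symlinks until it names a real file: `ls -ld` prints the link target after `-> `, `expr` extracts it, and a target without a `/` is treated as relative and resolved against the link's own directory. A standalone demo of the same loop (the link names are made up for illustration):

```shell
#!/bin/sh
# Build a small symlink chain and resolve it with the loop from hadoop-config.sh.
tmpdir=$(mktemp -d)
touch "$tmpdir/real.sh"
ln -s real.sh "$tmpdir/link1.sh"              # relative symlink target
ln -s "$tmpdir/link1.sh" "$tmpdir/link2.sh"   # absolute symlink target

this="$tmpdir/link2.sh"
while [ -h "$this" ]; do
  ls=`ls -ld "$this"`
  link=`expr "$ls" : '.*-> \(.*\)$'`          # text after "-> " is the target
  if expr "$link" : '.*/.*' > /dev/null; then
    this="$link"                              # target contains a path: use as-is
  else
    this=`dirname "$this"`/"$link"            # relative target: resolve next to the link
  fi
done
echo "$this"                                  # the path of real.sh
```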

# convert relative path to absolute path

bin=`dirname "$this"`
script=`basename "$this"`
bin=`cd "$bin"; pwd`
this="$bin/$script"

# the root of the Hadoop installation

export HADOOP_HOME=`dirname "$this"`/..

#check to see if the conf dir is given as an optional argument
if [ $# -gt 1 ]
then
    if [ "--config" = "$1" ]
	  then
	      shift
	      confdir=$1
	      shift
	      HADOOP_CONF_DIR=$confdir
    fi
fi

# Allow alternate conf dir location.

HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"
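This line uses POSIX default expansion: `${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}` keeps the existing value if `HADOOP_CONF_DIR` is set and non-empty (e.g. via the `--config` branch above, or exported before the script ran), and otherwise falls back to `$HADOOP_HOME/conf`. The install path below is hypothetical, just for the demo:

```shell
#!/bin/sh
# The ${VAR:-default} expansion used for HADOOP_CONF_DIR:
# the existing value wins if set and non-empty, otherwise the default applies.
HADOOP_HOME=/opt/hadoop            # hypothetical install root for this demo

unset HADOOP_CONF_DIR
echo "${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"   # → /opt/hadoop/conf

HADOOP_CONF_DIR=/etc/hadoop        # e.g. set earlier by the --config branch
echo "${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"   # → /etc/hadoop
```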

#check to see it is specified whether to use the slaves or the
# masters file

if [ $# -gt 1 ]
then
    if [ "--hosts" = "$1" ]
    then
        shift
        slavesfile=$1
        shift
        export HADOOP_SLAVES="${HADOOP_CONF_DIR}/$slavesfile"
    fi
fi

The comments here already explain what is going on, so I won't go through it line by line.
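The two `if` blocks above implement a simple "optional leading flag" pattern: peek at `$1`, and if it matches, `shift` the flag and its value off the argument list before the rest of the script sees them. A standalone sketch of that pattern (the `parse` function name is mine, not Hadoop's):

```shell
#!/bin/sh
# Standalone sketch of the --config / --hosts parsing in hadoop-config.sh.
parse() {
  HADOOP_CONF_DIR=""
  HADOOP_SLAVES=""
  if [ $# -gt 1 ] && [ "--config" = "$1" ]; then
    shift; HADOOP_CONF_DIR=$1; shift          # consume flag + value
  fi
  if [ $# -gt 1 ] && [ "--hosts" = "$1" ]; then
    shift; HADOOP_SLAVES="${HADOOP_CONF_DIR}/$1"; shift
  fi
  echo "conf=$HADOOP_CONF_DIR slaves=$HADOOP_SLAVES rest=$*"
}

parse --config /etc/hadoop --hosts slaves start datanode
# → conf=/etc/hadoop slaves=/etc/hadoop/slaves rest=start datanode
```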

start-dfs.sh

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# get arguments

if [ $# -ge 1 ]; then
	nameStartOpt=$1
	shift
	case $nameStartOpt in
	  (-upgrade)
	  	;;
	  (-rollback) 
	  	dataStartOpt=$nameStartOpt
	  	;;
	  (*)
		  echo $usage
		  exit 1
	    ;;
	esac
fi

This script first runs hadoop-config.sh and then performs some DFS initialization.
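The `case` statement accepts an optional first argument: `-upgrade` is consumed as-is, `-rollback` is additionally copied into `dataStartOpt` (so in the full script the datanodes would receive it too, not just the namenode), and anything else prints the usage string and aborts. A standalone sketch of that logic (the `parse_dfs_opt` wrapper is mine):

```shell
#!/bin/sh
# Standalone sketch of the -upgrade / -rollback handling in start-dfs.sh.
usage="Usage: start-dfs.sh [-upgrade|-rollback]"
parse_dfs_opt() {
  nameStartOpt=""; dataStartOpt=""
  if [ $# -ge 1 ]; then
    nameStartOpt=$1
    shift
    case $nameStartOpt in
      (-upgrade)
        ;;                                # namenode-only option, nothing extra
      (-rollback)
        dataStartOpt=$nameStartOpt        # datanodes must roll back as well
        ;;
      (*)
        echo $usage
        return 1
        ;;
    esac
  fi
  echo "name=$nameStartOpt data=$dataStartOpt"
}

parse_dfs_opt -rollback    # → name=-rollback data=-rollback
parse_dfs_opt -upgrade     # → name=-upgrade data=
```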

start-mapred.sh

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

. "$bin"/hadoop-config.sh

# start mapred daemons

# start jobtracker first to minimize connection errors at startup

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start tasktracker

This script first runs hadoop-config.sh and then performs some MapReduce initialization.
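Note the singular/plural pair above: hadoop-daemon.sh starts one daemon (the jobtracker) on the local node, while hadoop-daemons.sh starts tasktrackers cluster-wide. My understanding (an assumption based on Hadoop 1.x, worth verifying against your version) is that the plural script loops over the hosts in the slaves file and runs hadoop-daemon.sh on each via ssh. A toy sketch of that fan-out shape:

```shell
#!/bin/sh
# Toy sketch of the hadoop-daemon.sh vs hadoop-daemons.sh split (assumption:
# the plural script fans out over the slaves file, running the singular
# script on each remote host via ssh).
slaves_file=$(mktemp)
printf 'worker1\nworker2\n' > "$slaves_file"

start_daemons() {          # stands in for hadoop-daemons.sh (cluster-wide)
  for host in $(cat "$slaves_file"); do
    # the real script would do something like:
    #   ssh "$host" hadoop-daemon.sh start "$1"
    echo "would start $1 on $host"
  done
}

start_daemons tasktracker
```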

Leaving a placeholder here; I'll come back to this later. It differs from the article below: I did not see hadoop-env.sh being called. https://www.cnblogs.com/wolfblogs/p/4147485.html