Recently I got curious about Spark's spark-shell: how does it start the Scala REPL, and how does it bind the usual environment values (such as sc) before handing over the prompt? Digging through the Spark source led me to SparkILoop.scala:
import scala.tools.nsc.interpreter.{JPrintWriter, ILoop}

/**
 * A Spark-specific interactive shell.
 */
class SparkILoop(in0: Option[BufferedReader], out: JPrintWriter)
    extends ILoop(in0, out) {

  def this(in0: BufferedReader, out: JPrintWriter) = this(Some(in0), out)
  def this() = this(None, new JPrintWriter(Console.out, true))

  def initializeSpark() {
    intp.beQuietDuring {
      processLine("""
        @transient val sc = {
          val _sc = org.apache.spark.repl.Main.createSparkContext()
          println("Spark context available as sc.")
          _sc
        }
        """)
      processLine("""
        @transient val sqlContext = {
          val _sqlContext = org.apache.spark.repl.Main.createSQLContext()
          println("SQL context available as sqlContext.")
          _sqlContext
        }
        """)
      processLine("import org.apache.spark.SparkContext._")
      processLine("import sqlContext.implicits._")
      processLine("import sqlContext.sql")
      processLine("import org.apache.spark.sql.functions._")
    }
  }

  ...
}
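The same processLine/beQuietDuring pattern works for any pre-loaded REPL of your own. Below is a minimal sketch, not Spark's code: PreloadedILoop, initializeGreeting and the greeting binding are made-up names, and overriding loadFiles as the trigger point is my assumption about where Spark hooks in its own initialization.

```scala
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.ILoop

// Sketch only: a REPL that pre-binds a value and an import before the
// first prompt, mirroring SparkILoop.initializeSpark.
class PreloadedILoop extends ILoop {

  // Run setup lines quietly so the user does not see them echoed.
  def initializeGreeting(): Unit = {
    intp.beQuietDuring {
      processLine("""val greeting = "hello from a preloaded REPL"""")
      processLine("import scala.math._")
    }
  }

  // loadFiles runs before the loop starts reading user input, which
  // makes it a convenient place to inject the setup lines.
  override def loadFiles(settings: Settings): Unit = {
    initializeGreeting()
    super.loadFiles(settings)
  }

  override def printWelcome(): Unit =
    echo("Custom REPL ready; `greeting` and scala.math._ are pre-bound.")
}

object PreloadedILoop {
  def main(args: Array[String]): Unit = {
    val settings = new Settings
    settings.usejavacp.value = true // expose the host JVM classpath to the REPL
    new PreloadedILoop().process(settings)
  }
}
```

Launched with main, this drops you into a prompt where `greeting` already resolves, just as `sc` does in spark-shell.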
As the code shows, SparkILoop extends scala.tools.nsc.interpreter.ILoop.
I then turned to ILoop's API doc
and finally found how to start an ILoop directly:
import scala.tools.nsc.interpreter.ILoop
import scala.tools.nsc.Settings

val settings = new Settings
settings.usejavacp.value = true // without this, a REPL embedded in sbt or a plain JVM may fail to find scala-library on its classpath
val loop = new ILoop
loop.process(settings)
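For non-interactive use, the companion object also offers (in the 2.10/2.11 interpreter, as far as I can tell) ILoop.run, which feeds a string of input to a fresh REPL and returns the session transcript. A small sketch (ReplTranscriptDemo is a made-up name):

```scala
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.ILoop

// Sketch: drive a REPL session programmatically. ILoop.run evaluates
// each line as if it had been typed at the prompt and returns the
// whole transcript as a String.
object ReplTranscriptDemo {
  def main(args: Array[String]): Unit = {
    val settings = new Settings
    settings.usejavacp.value = true // let the REPL see the host classpath
    val transcript = ILoop.run("val x = 1 + 1\nprintln(x * 21)", settings)
    print(transcript) // echoes each input line and its result
  }
}
```

This is handy for quickly checking what a pre-loaded REPL will actually print at startup without attaching a terminal.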