1. What Is Linear Regression
Linear regression is another classic supervised machine learning algorithm. In this problem, each entity carries a real-valued label (rather than the 0/1 labels of binary classification), and we want to predict the label as accurately as possible from the entity's feature values. MLlib supports plain linear regression as well as L2-regularized (ridge) and L1-regularized (lasso) variants. MLlib's regression algorithms use plain gradient descent (described below) and take the same parameters as the binary classification algorithms described above.
Available linear regression algorithms:
LinearRegressionWithSGD
RidgeRegressionWithSGD
LassoWithSGD
Notes:
(1) Because this is linear regression, the learned function is linear, i.e. a straight line;
(2) Because it is single-variable, there is only one feature x.
The single-variable linear regression model can therefore be written as h(x) = θ₀ + θ₁x.
We usually call x the feature and h(x) the hypothesis.
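To make this concrete, here is a minimal plain-Scala sketch (not MLlib code; the data points and parameter values are made up for illustration) of the hypothesis h(x) = θ₀ + θ₁x and its mean-squared-error cost function:

```scala
// Single-variable linear regression hypothesis: h(x) = theta0 + theta1 * x
def h(theta0: Double, theta1: Double)(x: Double): Double =
  theta0 + theta1 * x

// Cost J(theta): mean squared error over a training set of (x, y) pairs
def cost(theta0: Double, theta1: Double, data: Seq[(Double, Double)]): Double =
  data.map { case (x, y) => math.pow(h(theta0, theta1)(x) - y, 2) }.sum / (2 * data.length)

// Data generated from y = 1 + 2x is fit exactly by theta = (1, 2)
val data = Seq((0.0, 1.0), (1.0, 3.0), (2.0, 5.0))
println(cost(1.0, 2.0, data)) // cost is 0.0 for the exact fit
```

The cost function measures how far a candidate line lies from the training points; the learning problem is to find the θ that minimizes it.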
2. Gradient Descent
But this raises a question: given a candidate function, the cost function tells us how well it fits, yet there are infinitely many candidate functions, so we cannot simply try them one by one.
This is where gradient descent comes in: it finds a minimum of the cost function.
The intuition behind gradient descent: think of the function as a mountain. Standing somewhere on the slope, we look around and take a small step in whichever direction descends fastest.
Gradient descent is only one way to solve this problem; another method is the Normal Equation.
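For comparison, the Normal Equation solves for θ in closed form, θ = (XᵀX)⁻¹Xᵀy; in the single-variable case this reduces to the familiar least-squares formulas. A minimal plain-Scala sketch (illustrative only, not MLlib code):

```scala
// Closed-form least squares for y = theta0 + theta1 * x (Normal Equation, 1-D case):
//   theta1 = sum((x - xMean)(y - yMean)) / sum((x - xMean)^2)
//   theta0 = yMean - theta1 * xMean
def normalEquation(data: Seq[(Double, Double)]): (Double, Double) = {
  val n = data.length
  val xMean = data.map(_._1).sum / n
  val yMean = data.map(_._2).sum / n
  val theta1 = data.map { case (x, y) => (x - xMean) * (y - yMean) }.sum /
               data.map { case (x, _) => (x - xMean) * (x - xMean) }.sum
  val theta0 = yMean - theta1 * xMean
  (theta0, theta1)
}

// Data from y = 1 + 2x recovers theta = (1, 2) exactly
println(normalEquation(Seq((0.0, 1.0), (1.0, 3.0), (2.0, 5.0)))) // (1.0, 2.0)
```

The closed form needs no learning rate and no iteration, but inverting XᵀX becomes expensive for many features, which is why iterative methods like gradient descent are used at scale.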
Method:
(1) First choose the size of each downhill step, called the learning rate;
(2) Pick arbitrary initial values for the parameters;
(3) Choose a downhill direction, take a step of the chosen size, and update the parameters;
(4) Stop when the decrease per step falls below some predefined threshold.
Algorithm: repeat until convergence, updating every parameter θⱼ simultaneously via θⱼ := θⱼ − α · ∂J(θ)/∂θⱼ, where α is the learning rate.
Characteristics:
(1) Different starting points can lead to different minima, so gradient descent in general finds only a local minimum;
(2) The closer we get to a minimum, the slower the descent becomes.
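The steps above can be sketched in plain Scala (a toy batch implementation for the single-variable case; the learning rate, stopping threshold, and iteration cap are illustrative choices, not MLlib defaults):

```scala
// Batch gradient descent for y = theta0 + theta1 * x.
// alpha is the learning rate (step size); eps is the stopping threshold on
// the decrease in cost between iterations.
def gradientDescent(data: Seq[(Double, Double)],
                    alpha: Double = 0.1,
                    eps: Double = 1e-12,
                    maxIter: Int = 100000): (Double, Double) = {
  val n = data.length
  var t0 = 0.0; var t1 = 0.0           // (2) arbitrary initial values
  def cost(a: Double, b: Double): Double =
    data.map { case (x, y) => math.pow(a + b * x - y, 2) }.sum / (2 * n)
  var prev = cost(t0, t1)
  var done = false
  var iter = 0
  while (!done && iter < maxIter) {
    // (3) step opposite the gradient of the cost, updating both parameters simultaneously
    val g0 = data.map { case (x, y) => t0 + t1 * x - y }.sum / n
    val g1 = data.map { case (x, y) => (t0 + t1 * x - y) * x }.sum / n
    t0 -= alpha * g0
    t1 -= alpha * g1
    val c = cost(t0, t1)
    if (prev - c < eps) done = true    // (4) stop once the decrease is tiny
    prev = c
    iter += 1
  }
  (t0, t1)
}

// Fitting data from y = 1 + 2x converges close to theta = (1, 2)
val (t0, t1) = gradientDescent(Seq((0.0, 1.0), (1.0, 3.0), (2.0, 5.0)))
println(f"theta0 = $t0%.4f, theta1 = $t1%.4f")
```

Note how the stopping rule in step (4) interacts with characteristic (2): because the cost decreases more and more slowly near the minimum, the loop naturally terminates there.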
3. Linear Regression Code Example
The following example shows how to load training data and parse it into an RDD (resilient distributed dataset) of LabeledPoint objects. It then uses LinearRegressionWithSGD to build a simple linear model that predicts the label values. Finally, we compute the mean squared error to evaluate goodness of fit.
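A sketch of this example, based on the Spark 0.9-era MLlib API (assumptions: `sc` is an already-constructed SparkContext, and the HDFS path matches the one shown in the log output; this requires a Spark cluster and is not standalone-runnable):

```scala
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}

// Load and parse the data: each line is "label,f1 f2 f3 ..."
val data = sc.textFile("hdfs://master:9000/mllib/lpsa.data")
val parsedData = data.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0).toDouble, parts(1).split(' ').map(_.toDouble))
}

// Build the model with plain stochastic gradient descent
val numIterations = 20
val model = LinearRegressionWithSGD.train(parsedData, numIterations)

// Evaluate goodness of fit on the training set with mean squared error
val valuesAndPreds = parsedData.map { point =>
  val prediction = model.predict(point.features)
  (point.label, prediction)
}
val MSE = valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }.reduce(_ + _) / valuesAndPreds.count
println("training Mean Squared Error = " + MSE)
```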
The execution output is as follows:
[root@master scala]# sbt/sbt package run
[info] Set current project to scala (in build file:/root/sample/scala/)
[success] Total time: 2 s, completed Feb 17, 2014 9:53:53 PM
[info] Running SimpleApp
log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
14/02/17 21:53:55 INFO SparkEnv: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/02/17 21:53:55 INFO MemoryStore: MemoryStore started with capacity 580.0 MB.
14/02/17 21:53:56 INFO SparkUI: Started Spark Web UI at http://master:4040
14/02/17 21:53:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/02/17 21:53:57 INFO AppClient$ClientActor: Connecting to master spark://192.168.159.129:7077...
14/02/17 21:54:03 INFO HadoopRDD: Input split: hdfs://master:9000/mllib/lpsa.data:0+5197
(... repeated DAGScheduler/TaskSetManager scheduling output for the gradient-descent stages omitted ...)
14/02/17 21:54:33 INFO GradientDescent: GradientDescent finished. Last 10 stochastic losses 0.22493248687186032, 0.2241836511724591, 0.22358630434392676, 0.22309440787976811, 0.22268441631265215, 0.22233909585390685, 0.22204555434717815, 0.2217939816090842, 0.22157679929323662, 0.22138807401560764
14/02/17 21:54:33 INFO LinearRegressionWithSGD: Final model weights 0.6226986501625317,0.26562471165823115,-0.13304380020663167,0.21671917665388107,0.3037175607477254,-0.2007533914066441,0.013953499241049204,0.20270603251011174
14/02/17 21:54:33 INFO LinearRegressionWithSGD: Final model intercept 2.46997566387807
(... reduce/count job output for the MSE computation omitted ...)
training Mean Squared Error = 0.4424462080486391
14/02/17 21:54:34 INFO ConnectionManager: Selector thread was interrupted!
[success] Total time: 41 s, completed Feb 17, 2014 9:54:34 PM
[root@master scala]#