Shark Cluster Setup and Configuration
I. A brief introduction to Shark
Shark is a SQL query engine built on top of Spark and Hive. (The original post shows an architecture diagram and performance test figures here; I also ran a benchmark of my own, see my Shark performance test report.)
There are two dependencies involved: Apache Spark and AMPLab's Hive 0.11.
Pay attention to version selection here, and stick to the officially recommended combination:

Spark 0.9.1, AMPLab Hive 0.11, Shark 0.9.1

Be sure to compile them yourself, so the builds match your own cluster.
II. Shark cluster setup
1. Set up a Spark cluster. For this, refer to my earlier post on Spark cluster setup.
2. Build AMPLab's Hive 0.11: in its root directory, simply run `ant package`.
3. Build Shark. This step works the same way as building Spark: just make sure the Hadoop version is compatible with your HDFS by editing the Hadoop version number in SharkBuild.scala under `project/`, then run `sbt/sbt assembly`. (Both build steps are sketched right after this list.)
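A minimal sketch of steps 2 and 3, assuming the AMPLab Hive source is the `shark-0.11` branch of github.com/amplab/hive and that the version constant in project/SharkBuild.scala is named HADOOP_VERSION — both names are assumptions, so check your own checkout. The Hadoop version shown matches the assembly jar that appears in the logs later in this post:

```bash
# Build AMPLab Hive 0.11 (branch name is an assumption; check the Shark docs for your release).
git clone -b shark-0.11 https://github.com/amplab/hive.git amplab-hive
(cd amplab-hive && ant package)

# Build Shark against your HDFS version; the HADOOP_VERSION constant name is an assumption.
cd shark
sed -i 's/val HADOOP_VERSION = .*/val HADOOP_VERSION = "0.20.2-cdh3u5"/' project/SharkBuild.scala
sbt/sbt assembly
```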
III. Starting Spark and Shark
First, start Spark. Before doing so, edit Spark's configuration: in spark-env.sh, set:
```bash
HADOOP_CONF_DIR=/home/hadoop/src/hadoop/conf
SPARK_CLASSPATH=/home/hadoop/src/hadoop/lib/:/app/hadoop/shengli/sharklib/*
SPARK_LOCAL_DIRS=/app/hadoop/shengli/spark/data
SPARK_MASTER_IP=10.1.8.210
SPARK_MASTER_WEBUI_PORT=7078
```

Next, configure Spark's spark-defaults.conf:

```properties
spark.master                     spark://10.1.8.210:7077
spark.executor.memory            32g
spark.shuffle.spill              true
java.library.path                /usr/local/lib
spark.shuffle.consolidateFiles   true
# spark.eventLog.enabled         true
# spark.eventLog.dir             hdfs://namenode:8021/directory
# spark.serializer               org.apache.spark.serializer.KryoSerializer
```
Finally, start the cluster with sbin/start-all.sh. The Spark cluster is now configured.
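A quick sanity check at this point (a sketch of my own, not from the original run; the expected process names come from the standalone deployment mode):

```bash
# The master node should show a Master JVM; each worker node a Worker JVM.
jps
# The master web UI was bound to port 7078 above; registered workers should be listed there.
curl -s http://10.1.8.210:7078 | grep -ci worker
```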
Shark depends on a number of jars; collect them all into a single directory:
```bash
#!/bin/bash
for jar in `find /home/hadoop/shengli/shark/lib -name '*jar'`; do
    cp $jar /home/hadoop/shengli/sharklib/
done
for jar in `find /home/hadoop/shengli/shark/lib_managed/jars -name '*jar'`; do
    cp $jar /home/hadoop/shengli/sharklib/
done
for jar in `find /home/hadoop/shengli/shark/lib_managed/bundles -name '*jar'`; do
    cp $jar /home/hadoop/shengli/sharklib/
done
```

Then configure Shark, in shark/conf/shark-env.sh:
```bash
# Format as the JVM's -Xmx option, e.g. 300m or 1g.
export JAVA_HOME=/usr/java/jdk1.7.0_25

# (Required) Set the master program's memory
#export SHARK_MASTER_MEM=1g

# (Optional) Specify the location of Hive's configuration directory. By default,
# Shark run scripts will point it to $SHARK_HOME/conf
#export HIVE_CONF_DIR=""
export HADOOP_HOME=/home/hadoop/src/hadoop

# For running Shark in distributed mode, set the following:
export SHARK_MASTER_MEM=1g
export HADOOP_HOME=$HADOOP_HOME
export SPARK_HOME=/app/hadoop/shengli/spark
export SPARK_MASTER_IP=10.1.8.210
export MASTER=spark://10.1.8.210:7077

# Only required if using Mesos:
#export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so

# Only required if running Shark with Spark on YARN:
#export SHARK_EXEC_MODE=yarn
#export SPARK_ASSEMBLY_JAR=
#export SHARK_ASSEMBLY_JAR=

# (Optional) Extra classpath
#export SPARK_LIBRARY_PATH=""

# Java options
# On EC2, change the local.dir to /mnt/tmp

# (Optional) Tachyon related configuration
#export TACHYON_MASTER=""                     # e.g. "localhost:19998"
#export TACHYON_WAREHOUSE_PATH=/sharktables   # could be any valid path name

#export HIVE_HOME=/home/hadoop/shengli/hive/build/dest
export HIVE_CONF_DIR=/app/hadoop/shengli/hive/conf
export CLASSPATH=$CLASSPATH:/home/hadoop/src/hadoop/lib:/home/hadoop/src/hadoop/lib/native:/app/hadoop/shengli/sharklib/*

export SCALA_HOME=/app/hadoop/shengli/scala-2.10.3

#export SPARK_LIBRARY_PATH=/home/hadoop/src/hadoop/lib/native/Linux-amd64-64
#export LD_LIBRARY_PATH=/home/hadoop/src/hadoop/lib/native/Linux-amd64-64

# Spark conf copied here
SPARK_JAVA_OPTS=" -Dspark.cores.max=8 -Dspark.local.dir=/app/hadoop/shengli/spark/data -Dspark.deploy.defaultCores=2 -Dspark.executor.memory=24g -Dspark.shuffle.spill=true -Djava.library.path=/usr/local/lib "
SPARK_JAVA_OPTS+="-Xmx4g -Xms4g -verbose:gc -XX:-PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedOops "
export SPARK_JAVA_OPTS
```

Next, set up the Shark cluster itself: distribute the compiled Spark, Shark, and Hive to every node, and keep them in sync with rsync.
```bash
rsync --update -pav --progress /app/hadoop/shengli/spark/ root@10.1.8.211:/app/hadoop/shengli/spark/
......
rsync --update -pav --progress /app/hadoop/shengli/shark/ root@10.1.8.211:/app/hadoop/shengli/shark/
......
rsync --update -pav --progress /app/hadoop/shengli/hive/ root@10.1.8.211:/app/hadoop/shengli/hive/
......
rsync --update -pav --progress /app/hadoop/shengli/sharklib/ root@10.1.8.211:/app/hadoop/shengli/sharklib/
......
rsync --update -pav --progress /usr/java/jdk1.7.0_25/ root@10.1.8.211:/usr/java/jdk1.7.0_25/
......
```

After distributing, start Shark; you can check the cluster status on the web UI (port 7078, as configured above). For many nodes, a loop like the one sketched below syncs everything in one pass.
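A minimal sketch of that loop, assuming all machines share the same paths; the NODES list is hypothetical, substitute your own worker IPs:

```bash
#!/bin/bash
# Hypothetical worker list; replace with your actual node IPs.
NODES="10.1.8.211 10.1.8.212"
for node in $NODES; do
    # Sync the four deployment directories used above.
    for dir in spark shark hive sharklib; do
        rsync --update -pav --progress /app/hadoop/shengli/$dir/ root@$node:/app/hadoop/shengli/$dir/
    done
    # The JDK too, so every node runs the same Java.
    rsync --update -pav --progress /usr/java/jdk1.7.0_25/ root@$node:/usr/java/jdk1.7.0_25/
done
```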
Go into SHARK_HOME/bin:
```
drwxr-xr-x  4 hadoop games 4.0K Jun 12 10:01 .
drwxr-xr-x 13 hadoop games 4.0K Jun 16 16:59 ..
-rwxr-xr-x  1 hadoop games  882 Apr 10 19:18 beeline
drwxr-xr-x  2 hadoop games 4.0K Jun 12 10:01 dev
drwxr-xr-x  2 hadoop games 4.0K Jun 12 10:01 ext
-rwxr-xr-x  1 hadoop games 1.4K Apr 10 19:18 shark
-rwxr-xr-x  1 hadoop games  730 Apr 10 19:18 shark-shell
-rwxr-xr-x  1 hadoop games  840 Apr 10 19:18 shark-withdebug
-rwxr-xr-x  1 hadoop games  838 Apr 10 19:18 shark-withinfo
```

Of these, `shark` runs the Shark CLI directly.
`shark-shell` is analogous to spark-shell.
`shark-withdebug` runs with log4j in DEBUG mode, handy for troubleshooting and for understanding what is executing.
`shark-withinfo` is the same, but at INFO level.
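For reference, a sketch of how these launchers are invoked; the `-e` flag is an assumption carried over from the Hive CLI that Shark wraps, so verify it against your build:

```bash
bin/shark                       # interactive Shark CLI
bin/shark-withinfo              # same, with INFO-level logging
bin/shark -e "show tables;"     # one-off statement; -e assumed from the underlying Hive CLI
```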
Shark also offers a shark server mode, so that multiple clients can share cached RDDs within a single application. (The server itself is started separately; in Shark 0.9 this was, to my knowledge, `bin/shark --service sharkserver -p <port>`.)
```
$ bin/shark -h 10.1.8.210 -p 7100
-h 10.1.8.210 -p 7100
Starting the Shark Command Line Client
Logging initialized using configuration in jar:file:/app/hadoop/shengli/sharklib/hive-common-0.11.0-shark-0.9.1.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_25876@wh-8-210_201406171640_1172020906.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hadoop/shengli/sharklib/slf4j-log4j12-1.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/shengli/sharklib/shark-assembly-0.9.1-hadoop0.20.2-cdh3u5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/shengli/shark/lib_managed/jars/org.slf4j/slf4j-log4j12/slf4j-log4j12-1.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2.870: [GC 262208K->21869K(1004928K), 0.0274310 secs]
[10.1.8.210:7100] shark>
```

Now multiple clients can connect to this port:

```
$ bin/shark -h 10.1.8.210 -p 7100
-h 10.1.8.210 -p 7100
Starting the Shark Command Line Client
Logging initialized using configuration in jar:file:/app/hadoop/shengli/sharklib/hive-common-0.11.0-shark-0.9.1.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_28486@wh-8-210_201406171719_457245737.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/app/hadoop/shengli/sharklib/slf4j-log4j12-1.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/shengli/sharklib/shark-assembly-0.9.1-hadoop0.20.2-cdh3u5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/app/hadoop/shengli/shark/lib_managed/jars/org.slf4j/slf4j-log4j12/slf4j-log4j12-1.7.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
show ta3.050: [GC 262208K->22324K(1004928K), 0.0240010 secs]
ble[10.1.8.210:7100] shark> show tables;
Time taken (including network latency): 0.072 seconds
```

(The GC log line interleaves with the typed `show tables` here; that is how it appeared on the console.)
At this point, Shark is up and running.
IV. Testing
Let's run a simple test to see that everything works, processing a 21 GB file.
```
[hadoop@wh-8-210 shark]$ hadoop dfs -ls /user/hive/warehouse/log/
Found 1 items
-rw-r--r--   3 hadoop supergroup 22499035249 2014-06-16 18:32 /user/hive/warehouse/log/21gfile
```

Create the table:

```sql
CREATE TABLE log (
  c1 string, c2 string, c3 string, c4 string, c5 string, c6 string, c7 string,
  c8 string, c9 string, c10 string, c11 string, c12 string, c13 string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
```
```sql
LOAD DATA INPATH '/user/hive/warehouse/log/21gfile' INTO TABLE log;
```
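A quick sanity check before timing anything (my own addition, not from the original run):

```sql
-- Peek at a few rows to confirm the load worked and the '\t' delimiter parsed as expected.
SELECT * FROM log LIMIT 5;
```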
Count the log table:

```
[10.1.8.210:7100] shark> select count(1) from log;
171802086
Time taken (including network latency): 33.753 seconds
```

That took about 33 seconds.
Now load the entire log table into memory, and count log_cached:
```
shark> CREATE TABLE log_cached TBLPROPERTIES ("shark.cache" = "true") AS SELECT * FROM log;
Time taken (including network latency): 481.96 seconds
shark> select count(1) from log_cached;
171802086
Time taken (including network latency): 6.051 seconds
```

About 6 seconds this time, a speedup of at least 5x.
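As an aside: to my knowledge, Shark 0.9 also treats any table whose name ends in `_cached` as cached by default, so the TBLPROPERTIES clause can be dropped; a sketch (convention recalled from the Shark docs, not verified on this cluster):

```sql
-- Equivalent, relying on the "_cached" naming convention instead of TBLPROPERTIES.
CREATE TABLE log_cached AS SELECT * FROM log;
```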
The executor and task status, as well as the Storage tab for cached tables, can then be inspected in the web UI. (The original post includes screenshots of these pages here.)
That completes the Shark cluster setup and a simple test.
In a follow-up post I will cover common problems encountered during setup, along with more detailed Shark benchmark conclusions.
Note: this is an original article; please credit the source when reposting: http://blog.csdn.net/oopsoom/article/details/30513929
-EOF-