Software versions

Name    Version
OS      CentOS release 6.5 (Final)
ES      elasticsearch-5.2.0

Cluster nodes

Host      IP
study0    192.168.137.150
study1    192.168.137.151

Deploying the Elasticsearch cluster:

Install the JDK

[root@study0 elasticsearch-5.2.0]# java -version
java version "1.8.0_141"
Java(TM) SE Runtime Environment (build 1.8.0_141-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.141-b15, mixed mode)
[root@study0 elasticsearch-5.2.0]# 
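Elasticsearch 5.x requires Java 8. If it is not installed yet, OpenJDK 8 from the standard CentOS repositories works as well as the Oracle JDK shown above; a minimal sketch, assuming the updates repository is available on the machine:

yum install -y java-1.8.0-openjdk
java -version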

Unpack and install

[root@study0 es]# unzip elasticsearch-5.2.0.zip

Detailed configuration (study0)

cd config/
vim elasticsearch.yml


cluster.name: es-log
node.name: log-0
#node.attr.rack: r1
path.data: /usr/local/es/elasticsearch-5.2.0/data

path.logs: /usr/local/es/elasticsearch-5.2.0/logs

bootstrap.memory_lock: false
bootstrap.system_call_filter: false

network.host: study0

http.port: 9200

discovery.zen.ping.unicast.hosts: ["study0", "study1"]

discovery.zen.minimum_master_nodes: 1

#gateway.recover_after_nodes: 3

#action.destructive_requires_name: true

#action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*
xpack.security.enabled: false
xpack.notification.email.account: 
  work: 
    profile: standard
    email_defaults:
      from: dennis52o1314@163.com
    smtp: 
      auth: true
      host: smtp.163.com
      port: 25
      user: dennis52o1314@163.com
      password: 1111111111

http.cors.enabled: true
http.cors.allow-origin: "*"

Configuration explained

Cluster name

cluster.name: es-log

Node name

node.name: log-1     ## the node name must be unique within the cluster

Data and log directories

path.data: /path/to/data
path.logs: /path/to/logs     ## both default to directories under the installation directory and can be changed

Memory locking and swap

Setting bootstrap.memory_lock: true locks the JVM heap in RAM so Elasticsearch is never swapped out. It is left at false here (enabling it would also require raising the memlock ulimit for the Elasticsearch user), and bootstrap.system_call_filter is disabled because the CentOS 6 kernel does not support the seccomp system-call filter that Elasticsearch 5.x tries to install at startup:

bootstrap.memory_lock: false
bootstrap.system_call_filter: false
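If you later do enable bootstrap.memory_lock: true, the user running Elasticsearch also needs an unlimited memlock limit. A minimal sketch for /etc/security/limits.conf, assuming the process runs as the study user used below:

study soft memlock unlimited
study hard memlock unlimited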

Unicast discovery

discovery.zen.ping.unicast.hosts: ["study0", "study1"]
discovery.zen.minimum_master_nodes: 1
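As a rule of thumb, minimum_master_nodes should be (master-eligible nodes / 2) + 1: with the two nodes here that is 2 / 2 + 1 = 2, and with three nodes it is 3 / 2 + 1 = 2 (integer division). It is set to 1 for now only to bring the first node up alone; the value is corrected at the end of this article.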

Other system settings

vim /etc/sysctl.conf
vm.max_map_count = 262144
sysctl -p
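To confirm the kernel picked up the new value:

sysctl vm.max_map_count
# should print: vm.max_map_count = 262144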

Open file descriptor limit

vim /etc/security/limits.conf

*  hard nofile 65536
*  soft nofile 65536
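After logging in again as the Elasticsearch user, the new limit can be verified with:

ulimit -n
# should print 65536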

Maximum number of user processes

vim /etc/security/limits.d/90-nproc.conf
* soft nproc     2048
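Likewise, the process limit can be checked after re-login:

ulimit -u
# should print at least 2048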

Switch to a non-root user and start

su - study
./elasticsearch

Elasticsearch 5.x refuses to start as root, which is why a normal user is used. If startup complains about missing directories, simply create them, but the normal user must have write permission on the data and log directories.
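A minimal sketch for creating the user and the directories, run as root (the user name study matches the one used above):

useradd study
mkdir -p /usr/local/es/elasticsearch-5.2.0/data /usr/local/es/elasticsearch-5.2.0/logs
chown -R study:study /usr/local/es/elasticsearch-5.2.0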

Verify the installation

curl  -XGET 'study0:9200'
{
  "name" : "log-1",
  "cluster_name" : "es-log",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "253032b",
    "build_date" : "2016-10-26T04:37:51.531Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}

Cluster startup

With only a single node running, Elasticsearch still regards itself as a cluster.
A node is one Elasticsearch instance; a cluster consists of one or more nodes with the same cluster.name that work together, sharing data and load.
When a node joins or leaves, the cluster notices the change and rebalances the data.
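Which nodes have actually joined can be checked at any time with the _cat/nodes API, for example:

curl -XGET 'study0:9200/_cat/nodes?v'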

Create an index on study0 and look at the single-node cluster's status

[root@study0 elasticsearch-5.2.0]# curl -XPOST 'study0:9200/test/name/1' -d '
{
  "name": "dennis"
}'

#### creates an index named test with type name and document id 1

Check the current cluster health:

[root@study0 elasticsearch-5.2.0]# curl -XGET 'study0:9200/_cluster/health?pretty'
{
  "cluster_name" : "es-log",
  "status" : "yellow",
  ...
}

Cluster health states:

Color    Meaning
green    all primary and all replica shards are allocated
yellow   all primary shards are allocated, but not all replica shards are
red      not all primary shards are allocated

The status shows yellow because only the primary shards exist; with a single node the replica shards have nowhere to be allocated.
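The unassigned replica can be seen directly with the _cat/shards API, for example:

curl -XGET 'study0:9200/_cat/shards/test?v'
# the replica shard of the test index stays UNASSIGNED until a second node joins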

Now configure and start the study1 node:

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: es-log
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: log-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /usr/local/es/elasticsearch-5.2.0/data
#
# Path to log files:
#
path.logs: /usr/local/es/elasticsearch-5.2.0/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: study1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["study0", "study1"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
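Then start Elasticsearch on study1 the same way as on study0, as a non-root user. A sketch, assuming the same study user and directory layout exist on study1 (the -d flag runs the process in the background):

[root@study1 elasticsearch-5.2.0]# su - study
[study@study1 ~]$ cd /usr/local/es/elasticsearch-5.2.0/bin
[study@study1 bin]$ ./elasticsearch -d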

Cluster health after study1 joins:

[root@study0 elasticsearch-5.2.0]#  curl  -XGET 'study0:9200/_cluster/health?pretty'
{
  "cluster_name" : "es-log",
  "status" : "green",
  ...
}

The cluster has turned green, which means the replica shards have been allocated and are usable. Now check on study1 that the document created earlier is there:

[root@study1 elasticsearch-5.2.0]#  curl -XGET 'study1:9200/test/name/1?pretty'
{
  "_index" : "test",
  "_type" : "name",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "name" : "dennis"
  }
}

The document has indeed been replicated to study1.

Once everything works, adjust the cluster configuration slightly:

discovery.zen.minimum_master_nodes: 2   ### initially set to 1 for single-node testing; with more than one master-eligible node it should be (master-eligible nodes / 2) + 1
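Since discovery.zen.minimum_master_nodes is a dynamic setting in 5.x, the same change can also be pushed to the running cluster without a restart; a sketch:

curl -XPUT 'study0:9200/_cluster/settings' -d '
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}'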