2012-03-18

Problems setting up the S3 gateway in elasticsearch v0.19.0

I'm following this tutorial to set up elasticsearch on EC2, and I can't get ES to use the S3 gateway. According to the tutorial it should print [gateway.s3] on startup, but mine doesn't.

The tutorial is a bit old and suggests using elasticsearch-cloud-aws after successfully connecting to S3, so I installed the cloud-aws plugin with bin/plugin -install elasticsearch/elasticsearch-cloud-aws/1.4.0.

I also played with cloud.aws.region, but that didn't help.

So what do I need to do to get the S3 gateway working? Could this be a problem with the new 0.19.0 release?

Here is how I set up v0.19.0 on a micro instance:
wget https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.0.zip 
sudo unzip elasticsearch-0.19.0.zip -d /usr/local/elasticsearch 
cd /usr/local/elasticsearch/elasticsearch-0.19.0 
sudo bin/plugin -install elasticsearch/elasticsearch-cloud-aws/1.4.0 
sudo vim config/elasticsearch.yml 
sudo vim config/logging.yml 
sudo vim config/elasticsearch.yml 
ES_MIN_MEM=400mb 
ES_MAX_MEM=400mb 
echo $ES_MAX_MEM 
sudo bin/elasticsearch -f 
sudo vim config/elasticsearch.yml 
... snip 
sudo bin/plugin -install Aconex/elasticsearch-head 
sudo bin/elasticsearch -f 
sudo vim config/elasticsearch.yml 
... snip 
sudo bin/plugin -install elasticsearch/elasticsearch-cloud-aws 
sudo bin/elasticsearch -f 

elasticsearch.yml

cluster.name: elasticsearch-demo-js 
cloud: 
     aws: 
       access_key: KEY 
       secret_key: SECRET_KEY 
       region: us-east 
     discovery: 
       type: ec2 
     gateway: 
       type: s3 
       s3: 
         bucket: es-demo-js 
gateway.recover_after_nodes: 1 
gateway.recover_after_time: 1m 
gateway.expected_nodes: 2 

logging.yml

rootLogger: INFO, console, file 
logger: 
    # log action execution errors for easier debugging 
    action: DEBUG 
    # reduce the logging for aws, too much is logged under the default INFO 
    com.amazonaws: WARN 

    # gateway 
    gateway: DEBUG 
    #index.gateway: DEBUG 

    # peer shard recovery 
    #indices.recovery: DEBUG 

    # discovery 
    discovery: TRACE 

    org.apache: WARN 

    index.search.slowlog: TRACE, index_search_slow_log_file 

additivity: 
    index.search.slowlog: false 

appender: 
    console: 
    type: console 
    layout: 
     type: consolePattern 
     conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 

    file: 
    type: dailyRollingFile 
    file: ${path.logs}/${cluster.name}.log 
    datePattern: "'.'yyyy-MM-dd" 
    layout: 
     type: pattern 
     conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 

    index_search_slow_log_file: 
    type: dailyRollingFile 
    file: ${path.logs}/${cluster.name}_index_search_slowlog.log 
    datePattern: "'.'yyyy-MM-dd" 
    layout: 
     type: pattern 
     conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 

OUTPUT

[email protected] elasticsearch-0.19.0]$ sudo bin/elasticsearch -f 
    [2012-03-18 16:36:10,786][WARN ][bootstrap    ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line 
    [2012-03-18 16:36:10,791][INFO ][node      ] [Roma] {0.19.0}[20285]: initializing ... 
    [2012-03-18 16:36:10,804][INFO ][plugins     ] [Roma] loaded [cloud-aws], sites [head] 
    [2012-03-18 16:36:11,672][DEBUG][discovery.zen.ping.multicast] [Roma] using group [224.2.2.4], with port [54328], ttl [3], and address [null] 
    [2012-03-18 16:36:11,675][DEBUG][discovery.zen.ping.unicast] [Roma] using initial hosts [], with concurrent_connects [10] 
    [2012-03-18 16:36:11,676][DEBUG][discovery.zen   ] [Roma] using ping.timeout [3s] 
    [2012-03-18 16:36:11,682][DEBUG][discovery.zen.elect  ] [Roma] using minimum_master_nodes [-1] 
    [2012-03-18 16:36:11,683][DEBUG][discovery.zen.fd   ] [Roma] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3] 
    [2012-03-18 16:36:11,685][DEBUG][discovery.zen.fd   ] [Roma] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3] 
    [2012-03-18 16:36:12,400][DEBUG][gateway.local   ] [Roma] using initial_shards [quorum], list_timeout [30s] 
    [2012-03-18 16:36:12,589][DEBUG][gateway.local.state.shards] [Roma] took 51ms to load started shards state 
    [2012-03-18 16:36:12,639][DEBUG][gateway.local.state.meta ] [Roma] took 49ms to load state 
    [2012-03-18 16:36:12,642][INFO ][node      ] [Roma] {0.19.0}[20285]: initialized 
    [2012-03-18 16:36:12,642][INFO ][node      ] [Roma] {0.19.0}[20285]: starting ... 
    [2012-03-18 16:36:12,703][INFO ][transport    ] [Roma] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.127.162.192:9300]} 
    [2012-03-18 16:36:12,719][TRACE][discovery    ] [Roma] waiting for 30s for the initial state to be set by the discovery 
    [2012-03-18 16:36:12,722][TRACE][discovery.zen.ping.multicast] [Roma] [4] sending ping request 
    [2012-03-18 16:36:14,224][TRACE][discovery.zen.ping.multicast] [Roma] [5] sending ping request 
    [2012-03-18 16:36:15,726][DEBUG][discovery.zen   ] [Roma] ping responses: {none} 
    [2012-03-18 16:36:15,729][INFO ][cluster.service   ] [Roma] new_master [Roma][KYVDhYLmSY-u8j4jVD7FaQ][inet[/10.127.162.192:9300]], reason: zen-disco-join (elected_as_master) 
    [2012-03-18 16:36:15,762][TRACE][discovery    ] [Roma] initial state set from discovery 
    [2012-03-18 16:36:15,762][INFO ][discovery    ] [Roma] elasticsearch-demo-js/KYVDhYLmSY-u8j4jVD7FaQ 
    [2012-03-18 16:36:15,763][DEBUG][gateway     ] [Roma] delaying initial state recovery for [1m] 
    [2012-03-18 16:36:15,766][INFO ][http      ] [Roma] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.127.162.192:9200]} 
    [2012-03-18 16:36:15,766][INFO ][node      ] [Roma] {0.19.0}[20285]: started 
    [2012-03-18 16:37:15,778][DEBUG][gateway.local   ] [Roma] [twitter][0]: allocating [[twitter][0], node[null], [P], s[UNASSIGNED]] to [[Roma][KYVDhYLmSY-u8j4jVD7FaQ][inet[/10.127.162.192:9300]]] on primary allocation 
    [2012-03-18 16:37:15,778][DEBUG][gateway.local   ] [Roma] [twitter][6]: allocating [[twitter][7], node[null], [P], s[UNASSIGNED]] to [[Roma][KYVDhYLmSY-u8j4jVD7FaQ][inet[/10.127.162.192:9300]]] on primary allocation 
    [2012-03-18 16:37:15,778][DEBUG][gateway.local   ] [Roma] [twitter][8]: allocating [[twitter][9], node[null], [P], s[UNASSIGNED]] to [[Roma][KYVDhYLmSY-u8j4jVD7FaQ][inet[/10.127.162.192:9300]]] on primary allocation 
    [2012-03-18 16:37:15,779][DEBUG][gateway.local   ] [Roma] [twitter][10]: allocating [[twitter][11], node[null], [P], s[UNASSIGNED]] to [[Roma][KYVDhYLmSY-u8j4jVD7FaQ][inet[/10.127.162.192:9300]]] on primary allocation 
    [2012-03-18 16:37:15,779][DEBUG][gateway.local   ] [Roma] [twitter][12]: throttling allocation [[twitter][13], node[null], [P], s[UNASSIGNED]] to [[[Roma][KYVDhYLmSY-u8j4jVD7FaQ][inet[/10.127.162.192:9300]]]] on primary allocation 
    [2012-03-18 16:37:16,050][DEBUG][index.gateway   ] [Roma] [twitter][0] starting recovery from local ... 
    [2012-03-18 16:37:16,138][DEBUG][index.gateway   ] [Roma] [twitter][14] starting recovery from local ... 
    [2012-03-18 16:37:16,150][DEBUG][index.gateway   ] [Roma] [twitter][0] recovery completed from local, took [100ms] 
    index : files   [3] with total_size [86b], took[11ms] 
      : recovered_files [0] with total_size [0b] 
      : reusing_files [3] with total_size [86b] 
    start : took [87ms], check_index [0s] 
    translog : number_of_operations [0], took [2ms] 
[2012-03-18 16:37:16,156][DEBUG][index.gateway   ] [Roma] [twitter][1] recovery completed from local, took [18ms] 
    index : files   [3] with total_size [86b], took[0s] 
      : recovered_files [0] with total_size [0b] 
      : reusing_files [3] with total_size [86b] 
    start : took [17ms], check_index [0s] 
    translog : number_of_operations [0], took [1ms] 
[2012-03-18 16:37:16,179][DEBUG][index.gateway   ] [Roma] [twitter][2] starting recovery from local ... 
[2012-03-18 16:37:16,228][DEBUG][index.gateway   ] [Roma] [twitter][4] starting recovery from local ... 
[2012-03-18 16:37:16,238][INFO ][gateway     ] [Roma] recovered [1] indices into cluster_state 
[2012-03-18 16:37:16,239][DEBUG][gateway.local   ] [Roma] [twitter][3]: allocating [[twitter][3], node[null], [P], s[UNASSIGNED]] to [[Roma][KYVDhYLmSY-u8j4jVD7FaQ][inet[/10.127.162.192:9300]]] on primary allocation 
[2012-03-18 16:37:16,243][DEBUG][index.gateway   ] [Roma] [twitter][4] recovery completed from local, took [15ms] 
    index : files   [3] with total_size [86b], took[0s] 
      : recovered_files [0] with total_size [0b] 
      : reusing_files [3] with total_size [86b] 
    start : took [2ms], check_index [0s] 
    translog : number_of_operations [0], took [13ms] 
[2012-03-18 16:37:16,308][DEBUG][index.gateway   ] [Roma] [twitter][3] starting recovery from local ... 
[2012-03-18 16:37:16,312][DEBUG][index.gateway   ] [Roma] [twitter][3] recovery completed from local, took [4ms] 
    index : files   [3] with total_size [86b], took[1ms] 
      : recovered_files [0] with total_size [0b] 
      : reusing_files [3] with total_size [86b] 
    start : took [2ms], check_index [0s] 
    translog : number_of_operations [0], took [1ms] 
[2012-03-18 16:37:16,545][DEBUG][index.gateway   ] [Roma] [twitter][2] recovery completed from local, took [366ms] 
    index : files   [11] with total_size [1.2kb], took[2ms] 
      : recovered_files [0] with total_size [0b] 
      : reusing_files [11] with total_size [1.2kb] 
    start : took [50ms], check_index [0s] 
    translog : number_of_operations [1], took [314ms] 
Make sure your YAML is formatted properly; your indentation around bucket looks a little funky. It may be safer to use the full namespace, e.g. https://github.com/karmi/cookbook-elasticsearch/blob/master/templates/default/elasticsearch.yml.erb#L38 – karmi

Yeah, the default vim settings give you ugly tabs. Thanks for sending that link. I was looking at your file and realized that my gateway was nested under cloud. Thank you. – jspooner

Answer


cluster, cloud, discovery, and gateway are all top-level objects; in my original config they were nested under cloud. bernie also suggested simplifying the YAML by using the full namespaces. This is what I ended up with:

cluster.name: elasticsearch-demo-js 
cloud.aws.access_key: 
cloud.aws.secret_key: 
discovery.type: ec2 
gateway.type: s3 
gateway.s3.bucket: es-demo 
gateway.recover_after_nodes: 2 
gateway.recover_after_time: 1m 
gateway.expected_nodes: 2 
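To see why the original layout failed, here's a small sketch of how nested config keys collapse into dotted settings. The flatten helper and the dict literals are illustrative only, not Elasticsearch's actual loader: with gateway indented under cloud, the node ends up with a cloud.gateway.type setting instead of gateway.type, so the s3 gateway is never selected.

```python
def flatten(settings, prefix=""):
    """Flatten a nested dict into dotted setting keys (illustrative only)."""
    flat = {}
    for key, value in settings.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, dotted))
        else:
            flat[dotted] = value
    return flat

# Broken layout from the question: gateway indented under cloud
broken = {"cloud": {"aws": {"region": "us-east"},
                    "gateway": {"type": "s3", "s3": {"bucket": "es-demo-js"}}}}

# Corrected layout from the answer: gateway is its own top-level key
fixed = {"cloud": {"aws": {"region": "us-east"}},
         "gateway": {"type": "s3", "s3": {"bucket": "es-demo-js"}}}

print(flatten(broken))  # keys begin with "cloud.gateway.", so no s3 gateway
print(flatten(fixed))   # "gateway.type" is "s3", so the s3 gateway is used
```

The same reasoning explains why the fully-dotted style above is safer: each line states its complete namespace, so a stray indent can't silently reparent a setting.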
+1 - thanks for posting your solution. –

Only my parents call me "bernie". :) – karmi