Archive

Posts Tagged ‘kibana’

Courier Fetch Error: unhandled courier request error: Authorization Exception in Chrome/Safari on Kibana 4.5.0

August 22nd, 2016

Getting this error in your Kibana?

You need to increase the max HTTP header size; the Netty default is only 8KB. You can change the value in your elasticsearch.yml file.

Add the following line (or uncomment it if it is already there).

http.max_header_size: 32kb
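If you manage your nodes by hand, here is a minimal sketch of applying the change, assuming a package install with the config at /etc/elasticsearch/elasticsearch.yml and a systemd service named elasticsearch (both path and service name are assumptions, adjust for your setup). The setting is static, so each node needs a restart:

# Append the setting on each node (or uncomment the existing line instead)
echo 'http.max_header_size: 32kb' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
# Static HTTP settings only take effect after a restart
sudo systemctl restart elasticsearch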


Fixing ‘plugin:elasticsearch [document_already_exists_exception] [config][4.5.1]: document already exists’

June 11th, 2016

Substitute the version ‘4.5.1’ with the version you are upgrading to. So far I’ve seen this on everything from Kibana 4.1.x to 4.5.1.

It seems that when you upgrade Kibana, there is a timing bug in how Kibana records its current version. You will get lots of these errors in the Kibana logs:

log [08:08:30.649] [error][status][plugin:elasticsearch] Status changed from green to red - [document_already_exists_exception] [config][4.5.1]: document already exists, with: {"shard":"0","index":".kibana"}

These came from upgrading 4.5.0 to 4.5.1. I’ve seen the same thing when I went from 4.1.4 to 4.5.0.

The fix is to delete the config record in your .kibana index. Don’t worry, it gets recreated again. No loss as far as I know.

curl -XDELETE elasticsearchserver:9200/.kibana/config/4.5.1

The Kibana bug is documented here: Kibana issue #5519.

If deleting the record does not work, you will also need to refresh your .kibana index. Note that this will flush the data!

curl -XPOST elasticsearchserver:9200/.kibana/_refresh
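If you want to double-check that the stale document is gone (and recreated by Kibana on its next start), a plain GET against the same path works; the hostname is a placeholder as above:

curl -XGET "elasticsearchserver:9200/.kibana/config/4.5.1?pretty"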

Categories: Elasticsearch, ELK, Tech

Kibana 4 with tribe node MasterNotDiscoveredException

December 19th, 2015

I use tribe nodes quite a lot at $work. It’s how we federate disparate ELK clusters and are able to search across them. There are many reasons to have distinct ELK clusters in each data center and/or region.

Some of these are:

1. Elasticsearch does not work well when there is network latency, which is guaranteed when your nodes are located in geographically distant places. You could spend a lot of money on a fast network connection, or you can just have local clusters only. (Me? I pick saving money and avoiding headaches :-)).

2. It can get insanely expensive to create an ES cluster that spans data centers/regions. The network bandwidth requirements, the data charges, the care and feeding of such a latency-sensitive cluster…. OMG!

3. I don’t really think a 3rd reason is needed.

Although tribe nodes are great for federating ES clusters, there are some quirks in setting them up and caring for them (still not as bad as an ES cluster that spans data centers, though).

One big gotcha for many people setting up tribe nodes for the first time is that a tribe node cannot create an index. A tribe node can only update or modify an existing index. What this means is that if you point Kibana at a tribe node, you must first make sure your Kibana index has already been created in one of the downstream ES clusters; otherwise, you will have to create it yourself.

If you don't, the first time you create an index pattern and try to save it, you will get an error similar to the subject of this post.

MasterNotDiscoveredException

The error message is wrong and misleading. It has nothing to do with the master node. It has everything to do with the tribe node not being able to create (PUT) the Kibana index.

Personally, I prefer to give the Kibana index that I use with the tribe node its own unique name, so I run a dedicated Kibana instance pointing to the dedicated tribe (client) node.

Here are the steps I do to get a tribe node and its associated Kibana ready for use.

1. Configure the tribe node so it knows about all the ES clusters I want to federate data from.

tribe.elasticsearch.yml:

cluster.name: toplevel_tribe
node.name: ${HOSTNAME}
node.master: false
node.data: false
tribe:
  DC1-appservice:
    cluster.name: logging-DC1
    network.host: 0.0.0.0
    network.publish_host: ${HOSTNAME}
    discovery.zen.ping.unicast.hosts:
      - dc1-app13225.prod.example.com
      - dc1-app13226.prod.example.com
      - dc1-app13227.prod.example.com
  DC2-appservice:
    cluster.name: logging-DC2
    network.host: 0.0.0.0
    network.publish_host: ${HOSTNAME}
    discovery.zen.ping.unicast.hosts:
      - dc2-app12281.prod.example.com
      - dc2-app12282.prod.example.com
      - dc2-app12283.prod.example.com
  # DC3 ..... etc. to DCNN follow the same pattern
  my-es-dedicated-config-cluster:
    cluster.name: es-config-CORP
    network.host: 0.0.0.0
    network.publish_host: ${HOSTNAME}
    discovery.zen.ping.unicast.hosts:
      - corp-app1234.example.com
  on_conflict: prefer_my-es-dedicated-config-cluster
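Once the tribe node is up with this config, a sanity check I'd suggest is listing the nodes it knows about; since a tribe node merges the cluster states of its downstream clusters, you should see nodes from every data center (hostname and port are placeholders, and exact output varies by ES version):

curl -s "http://tribe-node:9200/_cat/nodes?v"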

2. Now pre-create the Kibana index in my-es-dedicated-config-cluster. This is a small cluster in my admin/corp data center that only houses configurations, Kibana dashboards, etc.

3. A simpler and more correct way is to temporarily point Kibana at the dedicated ES cluster (instead of the tribe node).

Do this via this setting in your kibana.yml file:

# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://ES-node:9200"

Start Kibana and let it create the index. Then stop it and change the setting back to point to your tribe node.

Doing it this way ensures that your Kibana index is correct.
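Before switching back to the tribe node, you can confirm the index actually exists on the dedicated cluster with something like this (ES-node is the same placeholder as above):

curl -s "http://ES-node:9200/_cat/indices/.kibana?v"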

curl commands for pre-creating the Kibana (3 and 4) indices:


# Kibana 3
curl -s -XPUT "http://localhost:9200/kibana3-int/" -d '{ "settings" : { "number_of_shards" : 3, "number_of_replicas" : 2 },
"mappings" : { "temp" : { "properties" : { "dashboard" : { "type" : "string" }, "group" : { "type" : "string" }, "title" : { "type" : "string" }, "user" : { "type" : "string" } } }, "dashboard" : { "properties" : { "dashboard" : { "type" : "string" }, "group" : { "type" : "string" }, "title" : { "type" : "string" }, "user" : { "type" : "string" } } } } }'


# Kibana4
curl -s -XPUT "http://localhost:9200/TRIBENAME-kibana4" -d '{ "index.mapper.dynamic" : true, "settings" : { "number_of_shards" : 1, "number_of_replicas" : 0 },"mappings" : {"search" : {"_timestamp" : { },"properties" : {"columns" : {"type" : "string"},"description" : {"type" : "string"},"hits" : {"type" : "long"},"kibanaSavedObjectMeta" : {"properties" : {"searchSourceJSON" : {"type" : "string"}}},"sort" : {"type" : "string"},"title" : {"type" : "string"},"version" : {"type" : "long"}}},"dashboard" : {"_timestamp" : { },"properties" : {"description" : {"type" : "string"},"hits" : {"type" : "long"},"kibanaSavedObjectMeta" : {"properties" : {"searchSourceJSON" : {"type" : "string"}}},"optionsJSON" : {"type" : "string"},"panelsJSON" : {"type" : "string"},"timeRestore" : {"type" : "boolean"},"title" : {"type" : "string"},"uiStateJSON" : {"type" : "string"},"version" : {"type" : "long"}}},"visualization" : {"_timestamp" : { },"properties" : {"description" : {"type" : "string"},"kibanaSavedObjectMeta" : {"properties" : {"searchSourceJSON" : {"type" : "string"}}},"savedSearchId" : {"type" : "string"},"title" : {"type" : "string"},"uiStateJSON" : {"type" : "string"},"version" : {"type" : "long"},"visState" : {"type" : "string"}}},"config" : {"_timestamp" : { },"properties" : {"buildNum" : {"type" : "long"},"defaultIndex" : {"type" : "string"}}},"index-pattern" : {"_timestamp" : { },"properties" : {"customFormats" : {"type" : "string"},"fieldFormatMap" : {"type" : "string"},"fields" : {"type" : "string"},"intervalName" : {"type" : "string"},"timeFieldName" : {"type" : "string"},"title" : {"type" : "string"}}}}}'
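If you do give the index its own name like TRIBENAME-kibana4, remember to point Kibana at it in kibana.yml as well. The setting is kibana_index in Kibana 4.0/4.1 and kibana.index in 4.2 and later; the config path below is an assumption for a tarball install:

# Kibana 4.2+ (older 4.0/4.1 releases use "kibana_index" instead)
echo 'kibana.index: "TRIBENAME-kibana4"' | sudo tee -a /opt/kibana/config/kibana.yml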

ELK Operational Tips

February 22nd, 2015

I’ve been running ELK clusters for over a year now, and want to share tips and tricks that I’ve found to be useful.

Feel free to post questions and corrections. I’ll try to answer and update when possible.

Elasticsearch

  • Split brain – this is when more than one node in your cluster becomes master.
    • It is best to avoid this ever happening. Use the quorum rule of thumb: with N master-eligible nodes, set discovery.zen.minimum_master_nodes to N/2 + 1. Even better, set aside a dedicated pool of master nodes (I recommend a minimum of 3 master-capable nodes); see the example after this list.
    • If a split brain does happen, you want to stop one of the master nodes ASAP. Depending on whether you have replicas or not, it could be an easy fix, or you might end up having to re-index if your indices have gotten out of sync by having a replica promoted to primary and new index data sent to it.
  • Failed node(s) – one or more failed nodes. There are many scenarios, from failing hardware to outages causing data corruption, etc.
  • Planned maintenance – several scenarios.
  • Indexing takes too long.
  • Recovery takes too long.
  • Search/query takes too long.
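As a concrete example of the quorum rule from the first bullet: with 3 dedicated master-eligible nodes, minimum_master_nodes should be 3/2 + 1 = 2. A minimal sketch of the relevant elasticsearch.yml lines for those master nodes, applied via the shell purely for illustration (the config path is an assumption):

# Dedicated master-eligible node: can be elected master, holds no data
cat <<'EOF' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
node.master: true
node.data: false
# 3 master-eligible nodes => quorum of 2 prevents split brain
discovery.zen.minimum_master_nodes: 2
EOF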

Logstash

Kibana


Elasticsearch, Logstash and Kibana Meetup @ LinkedIn

May 23rd, 2014

We had a great ELK Meetup on Wednesday 5/21/2014 at LinkedIn. The recorded video is available here:

http://www.ustream.tv/recorded/47864947

We had Kurt Hurtado, one of the Logstash devs, speaking on ELK in the DevOps Environment. Then there was a nice long Q&A session afterwards, joined by Uri Boness, one of the Elasticsearch core devs.


Categories: NoSQL, Search, Tech

Elasticsearch, Logstash, Kibana (ELK) group on LinkedIn

April 29th, 2014

I started a group for ELK on LinkedIn.  The direct link is

https://www.linkedin.com/groups?home=&gid=6686144&trk=anet_ug_hm

I have also started a Meetup for the Mountain View area.

Mountain View ELK Meetup