# Docker ELK stack
[![Join the chat at https://gitter.im/deviantony/fig-elk](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/deviantony/fig-elk?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Run the ELK (Elasticsearch, Logstash, Kibana) stack with Docker and Docker Compose.

It gives you the ability to quickly test your Logstash filters and see how the data is processed in Kibana.

It is based on the official images:
* [elasticsearch](https://registry.hub.docker.com/_/elasticsearch/)
* [logstash](https://registry.hub.docker.com/_/logstash/)
* [kibana](https://registry.hub.docker.com/_/kibana/)
# Requirements
## Setup
1. Install [Docker](http://docker.io).
2. Install [Docker-compose](http://docs.docker.com/compose/install/).
3. Clone this repository.
## SELinux
On distributions with SELinux enabled out of the box, you will need to either re-context the files or set SELinux to permissive mode for fig-elk to start properly.

For example, on Red Hat and CentOS the following will apply the proper context:

```bash
$ chcon -R system_u:object_r:admin_home_t:s0 fig-elk/
```
# Usage
Start the ELK stack using *docker-compose*:
```bash
$ docker-compose up
```
You can also run it in the background (detached mode):
```bash
$ docker-compose up -d
```
Now that the stack is running, you'll want to inject logs into it. The shipped Logstash configuration lets you send content via TCP:
```bash
$ nc localhost 5000 < /path/to/logfile.log
```
Then access the Kibana UI by opening [http://localhost:5601](http://localhost:5601) in a web browser.

By default, the stack exposes the following ports:
* 5000: Logstash TCP input
* 9200: Elasticsearch HTTP (with Marvel plugin accessible via [http://localhost:9200/_plugin/marvel](http://localhost:9200/_plugin/marvel))
* 5601: Kibana 4 web interface
*WARNING*: If you're using *boot2docker*, you must access it via the *boot2docker* IP address instead of *localhost*.

*WARNING*: If you're using *Docker Toolbox*, you must access it via the *docker-machine* IP address instead of *localhost*.
# Configuration
*NOTE*: Configuration is not dynamically reloaded; you will need to restart the stack after any change to a component's configuration.
## How can I tune Kibana configuration?
The Kibana default configuration is stored in `kibana/config/kibana.yml`.
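As an illustration, overriding where Kibana listens and which Elasticsearch instance it queries could look like this (the keys follow the Kibana 4 configuration format; the values below are examples, not the shipped defaults):

```yml
# Port for the Kibana web interface
port: 5601
# URL of the Elasticsearch instance to query (example value)
elasticsearch_url: "http://elasticsearch:9200"
```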
## How can I tune Logstash configuration?
The Logstash configuration is stored in `logstash/config/logstash.conf`.

The folder `logstash/config` is mapped to `/etc/logstash/conf.d` inside the container, so you can create more than one file in that folder if you'd like. Be aware that config files are read from that directory in alphabetical order.
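For reference, a minimal configuration matching the stack's TCP input on port 5000 could look like the following sketch (not necessarily identical to the shipped file; the `elasticsearch` host name is an assumption based on the compose service name, and the exact output options depend on your Logstash version):

```conf
# Accept log lines over TCP on port 5000
input {
  tcp {
    port => 5000
  }
}

# Forward events to the Elasticsearch container
output {
  elasticsearch {
    host => "elasticsearch"
  }
}
```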
## How can I tune Elasticsearch configuration?
The Elasticsearch container uses the configuration shipped with the image; it is not exposed by default.
If you want to override the default configuration, create a file `elasticsearch/config/elasticsearch.yml` and add your configuration in it.
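As an example, a minimal override file might look like this (the values below are illustrative, not required settings):

```yml
# Name of the Elasticsearch cluster (example value)
cluster.name: docker-elk
# Bind to all interfaces so other containers can reach Elasticsearch
network.host: 0.0.0.0
```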
Then, you'll need to map your configuration file into the container via `docker-compose.yml`. Update the elasticsearch container declaration to:
```yml
elasticsearch:
  build: elasticsearch/
  ports:
    - "9200:9200"
  volumes:
    - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
```
# Storage
## How can I store Elasticsearch data?
In order to persist Elasticsearch data, you'll have to mount a volume on your Docker host. Update the elasticsearch container declaration to:
```yml
elasticsearch:
  build: elasticsearch/
  ports:
    - "9200:9200"
  volumes:
    - /path/to/storage:/usr/share/elasticsearch/data
```
This will store Elasticsearch data in `/path/to/storage`.