Data sent from Logstash via elastic output plugin not showing in Kibana, but file output plugin works fine - what am I doing wrong?
Problem
I have an "ELK stack" configuration and, at first, was doing the standard Filebeat syslog feed into Logstash with the elasticsearch output plugin. It worked just fine.
Now I have added a TCP input port (with an assigned "type" for this data so I can do `if [type] == "thistype"` to differentiate in filters), its own grok filter, and output to both elasticsearch (with its own unique index name and document_type name) and file. When the data comes in over the TCP port, the properly formatted data is written to the output file as expected by the file output plugin, but no data shows up in Kibana when I choose the given index. Kibana recognizes the index from my output configuration and also lists all the fields/keys I assign in the grok filter (it even adds the `propertyName.keyword` property on them!); however, no data is searchable. The data is definitely being grok'd properly, since, as I mentioned, it appears in the file plugin output.
What am I doing wrong here? My configuration is as follows:
```
input {
  tcp {
    host => "10.1.1.10"
    port => 12345
    type => "odata"
    id => "odata"
    codec => line
  }
}

filter {
  if [type] == "odata" {
    grok {
      match => { "message" => "%{QUOTEDSTRING:oid},\"%{WORD:oword1}\",\"%{IPV4:oclientip}\",\"%{DATA:otimestamp}\",%{QUOTEDSTRING:opl},%{QUOTEDSTRING:oos},%{QUOTEDSTRING:oua}" }
      remove_field => "message"
    }
    date {
      match => ["otimestamp", "YYYY-MM-dd HH:mm:ss Z"]
    }
    mutate {
      remove_field => "otimestamp"
      remove_field => "host"
      remove_field => "@version"
    }
  }
}

output {
  # the if .. is here because there are other types handled in this output,
  # since I centralize the input, filter, and output configuration in three distinct files
  if [type] == "odata" {
    elasticsearch {
      hosts => ["10.1.1.1:9200", "10.1.1.2:9200"]
      sniffing => false
      index => "odataindex"
      document_type => "odatatype"
    }
    # the file output for the same events (the path shown here is illustrative)
    file {
      path => "/var/log/logstash/odata.log"
    }
  }
}
```
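For reference, a line shaped like the following would match the grok pattern above; every field value here is purely illustrative:

```
"evt-1001","login","10.1.1.50","2017-11-01 13:45:00 +0000","web","linux","Mozilla/5.0"
```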
Solution
Being unfamiliar with Kibana, I was not aware that the default search/display window only covers the last 15 minutes. The incoming data was timestamped (the @timestamp field) via the date filter with the original event time, NOT the time it was inserted into Elasticsearch over the TCP port; thus, nothing fell inside that 15-minute window, and I had no idea that by default only data whose @timestamp is within it gets displayed. If I had read the part of the documentation about the time constraint, I would have known. So I simply extended the time range far enough back and saw my data.
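A quick way to confirm that documents really are being indexed, independent of Kibana's time picker, is to query Elasticsearch directly; a minimal sketch, assuming the hosts and index name from the configuration above:

```
# Count the documents in the index (host/index taken from the config above)
curl -s 'http://10.1.1.1:9200/odataindex/_count?pretty'

# Inspect a few documents, including their @timestamp values
curl -s 'http://10.1.1.1:9200/odataindex/_search?size=3&pretty'
```

If the count is nonzero but Kibana shows nothing, the time picker is the first thing to check.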
So, if anyone else is having this problem, it's probably because you created a time-dependent index and have not clicked the 'time' button in the top right corner and changed the time frame.
This, my friends, is why you read the manual!
Context
StackExchange DevOps Q#2556, answer score: 1