First check the Troubleshooting targets section above. Otherwise, share steps to reproduce, including your config.

**Describe the bug**

Fluent Bit keeps logging `failed to flush chunk` warnings for the Elasticsearch output, and the affected chunks are retried indefinitely. With debug logging enabled, the underlying cause is visible: Elasticsearch rejects the bulk items with status 400 because of a mapping conflict on the Kubernetes labels:

```
{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"5eMmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}
```

Representative log output:

```
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-wpr5j_argo_wait-76bcd0771f3cc7b5f6b5f15f16ee01cc0c671fb047b93910271bc73e753e26ee.log, inode 1772861
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=1772861 with offset=0 appended as /var/log/containers/hello-world-wpr5j_argo_wait-76bcd0771f3cc7b5f6b5f15f16ee01cc0c671fb047b93910271bc73e753e26ee.log
[2022/03/24 04:19:38] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:20:51] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 111 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:21] [debug] [retry] new retry created for task_id=3 attempts=1
[2022/03/25 07:08:27] [debug] [retry] re-using retry for task_id=2 attempts=2
[2022/03/25 07:08:28] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/25 07:08:51] [ warn] [engine] failed to flush chunk '1-1648192121.87279162.flb', retry in 6 seconds: task_id=15, input=tail.0 > output=es.0 (out_id=0)
```

Note that the `_bulk` request itself returns `HTTP Status=200`; the rejections only appear as per-item errors inside the response body.

**Expected behavior**

Minimally, that these messages do not tie up Fluent Bit's pipeline, since retrying them will never succeed.

**Configuration**

```
[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    ${LOG_LEVEL}
    Parsers_File parsers.conf
    Plugins_File plugins.conf
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name dummy
    Rate 1
    Tag  dummy.log

[OUTPUT]
    Name  stdout
    Match *

[OUTPUT]
    Name          kafka
    Match         *
    Brokers       ${BROKER_ADDRESS}
    Topics        bit
    Timestamp_Key @timestamp
    Retry_Limit   false
```

**Comments**

Good afternoon. I currently have Fluent Bit deployed in an AWS EKS cluster. Per the documentation I see there is an option to send output to Splunk, but does it support Splunk HEC forwarding?

Hi @lecaros, I still get the error even though I already added the setting (the es output uses `Logstash_Format On`):

```
{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"muMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
```

Just like @lifeofmoo mentioned, initially everything went well in OpenSearch, then the "failed to flush chunk" issue came out. Output always starts working again after a restart. (Fluentd, for comparison, will wait to flush the buffered chunks for delayed events.)
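The `mapper_parsing_exception` above comes from how Elasticsearch expands dotted field names into nested objects: once `kubernetes.labels.app` has been dynamically mapped as `text` (from a plain `app:` label), a later document carrying the label `app.kubernetes.io/instance` requires `kubernetes.labels.app` to be an object, and dynamic mapping refuses to change the type. A minimal sketch of that expansion rule (a hypothetical helper, not Fluent Bit or Elasticsearch code):

```python
def add_field(mapping, dotted_key, value):
    """Merge one dotted field name into a mapping tree the way
    Elasticsearch dynamic mapping interprets it; raise on type
    conflicts between scalar and object fields."""
    parts = dotted_key.split(".")
    node = mapping
    for part in parts[:-1]:
        child = node.setdefault(part, {})
        if not isinstance(child, dict):
            # e.g. [app] was already mapped as text, now needed as object
            raise TypeError(
                f"Existing mapping for [{part}] must be of type object "
                f"but found [{type(child).__name__}]"
            )
        node = child
    leaf = parts[-1]
    if isinstance(node.get(leaf), dict):
        raise TypeError(f"[{leaf}] is already an object, cannot store a scalar")
    node[leaf] = value

mapping = {}
# A pod labelled `app: myapp` maps kubernetes.labels.app to a string...
add_field(mapping, "kubernetes.labels.app", "text")
# ...while `app.kubernetes.io/instance` needs kubernetes.labels.app
# to be an object, so the second document is rejected:
try:
    add_field(mapping, "kubernetes.labels.app.kubernetes.io/instance", "text")
except TypeError as e:
    print(e)  # Existing mapping for [app] must be of type object but found [str]
```

This is why pods mixing plain `app` labels with `app.kubernetes.io/...` labels in the same index trigger the conflict.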
More debug output around a failing flush (the es output points at `Host 10.3.4.84`). Tried 1.9.0 and 1.8.15; same behavior:

```
[2022/03/24 04:19:38] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:51] [error] [outputes.0] could not pack/validate JSON response
[2022/03/25 07:08:23] [debug] [upstream] KA connection #120 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104048677 file has been deleted: /var/log/containers/hello-world-hxn5d_argo_main-ce2dea5b2661227ee3931c554317a97e7b958b46d79031f1c48b840cd10b3d78.log
[2022/03/25 07:08:33] [debug] [retry] new retry created for task_id=11 attempts=1
[2022/03/25 07:08:38] [debug] [task] created task=0x7ff2f183a660 id=12 OK
[2022/03/25 07:08:40] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 11 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
```

FluentD or Collector pods are throwing errors similar to the following:

```
2022-01-28T05:59:48.087126221Z 2022-01-28 05:59:48 +0000 : [retry_default] failed to flush the buffer.
```

The output plugins group events into chunks. With Fluent Bit in trace mode we can now see the Elasticsearch errors when the mapping is wrong. Strangely, when a 5 MB bulk of 1000 events contains a single event with a wrong mapping, Elasticsearch rejects all the events in the bulk:

```
{"took":2217,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"yeMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}},{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"k-Mmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}
```
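Because the `_bulk` call returns HTTP 200 even when `"errors":true`, diagnosing this requires walking the per-item results in the response body. A small sketch of that inspection (hypothetical helper name, not part of Fluent Bit):

```python
import json

def failed_items(bulk_response_body):
    """Return (position, status, reason) for every rejected item in an
    Elasticsearch _bulk response. The request-level HTTP status is 200
    even when items fail; failures only show up per item."""
    resp = json.loads(bulk_response_body)
    failures = []
    for pos, item in enumerate(resp.get("items", [])):
        # each item is wrapped in its action, e.g. {"create": {...}}
        (result,) = item.values()
        if result.get("status", 200) >= 300:
            reason = result.get("error", {}).get("reason", "")
            failures.append((pos, result["status"], reason))
    return failures
```

Feeding it the `{"took":2217,"errors":true,...}` body above would list each 400 item with its `mapper_parsing_exception` reason while skipping the items that succeeded.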
With `Retry_Limit False` the retry delays keep growing and unflushable chunks pile up:

```
[2021/11/17 17:18:07] [ warn] [engine] failed to flush chunk '1-1637166971.404071542.flb', retry in 771 seconds: task_id=346, input=tail.0 > output=es.0 (out_id=0)
[2021/11/17 17:18:07] [ warn] [engine] failed to flush chunk '1-1637167230.683033285.flb', retry in 1844 seconds: task_id=481, input=tail.0 > output=es.0 (out_id=0)
2021-04-26 15:58:10 +0000 [warn]: #0 failed to flush the buffer.
```

Eventually the tail input pauses and stops appending new records, and file watches are dropped as rotated files disappear:

```
[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
[2022/03/24 04:19:21] [debug] [task] created task=0x7f7671e38680 id=1 OK
[2022/03/24 04:20:20] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=103386717 watch_fd=7
[2022/03/25 07:08:49] [debug] [upstream] KA connection #36 to 10.3.4.84:9200 has been assigned (recycled)
```

The stall also shows up in the `fluentbit_output_proc_records_total` metric, and the bulk responses keep carrying the same `mapper_parsing_exception` items quoted above.
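The growing "retry in N seconds" values are consistent with a capped, jittered exponential backoff, and with `Retry_Limit False` the engine never gives up, so a chunk that Elasticsearch will always reject is rescheduled forever. A rough model of that scheduling behavior (assumed parameters, not the engine's actual constants):

```python
import random

def retry_delay(attempt, base=2, cap=2048):
    """Jittered exponential backoff: pick a random wait between
    `base` and min(cap, base * 2**attempt) seconds."""
    upper = max(base, min(cap, base * (2 ** attempt)))
    return random.randint(base, upper)

# A chunk that can never be flushed keeps being rescheduled: early
# attempts wait a few seconds, later ones wait up to the cap.
```

Under this model a permanently failing chunk is never discarded; it just occupies buffer space between ever-longer retries, which matches the "Expected behavior" complaint above.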
A related report: Fluent Bit fails to communicate with Fluentd; I am seeing this in the Fluentd logs in Kubernetes (the output uses `Match kube.`). More of the same per-item rejections and churn appear in the debug output:

```
{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"BuMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamicall
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-g74nr_argo_wait-227a0fdb4663e03fecebe61f7b6bfb6fdd2867292cacfe692dc15d50a73f29ff.log
[2022/03/25 07:08:50] [ warn] [engine] failed to flush chunk '1-1648192130.216865179.flb', retry in 6 seconds: task_id=20, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:48] [debug] [outputes.0] task_id=18 assigned to thread #1
```

To those having this same issue, can you share your config and log files with debug level enabled?
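Two mitigations follow from the errors above (assumptions based on documented es output options, not a fix confirmed in this thread): `Replace_Dots On` stops Elasticsearch from expanding dotted label keys such as `app.kubernetes.io/instance` into nested objects, `Trace_Error On` prints the Elasticsearch error responses so the per-item 400s are visible without full debug logging, and `Buffer_Size False` lifts the response-buffer cap behind the `cannot increase buffer: ... max=512000` warning. A sketch of an es output section with those options:

```
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Port            9200
    Logstash_Format On
    Replace_Dots    On
    Trace_Error     On
    Buffer_Size     False
    Retry_Limit     False
```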