eventsource problems with 1.2.1 #476
Any unusual behavior aside from the errors? Any worker crashes, anything else in the error log, etc? |
There are no server crashes or unexpected behavior in logs, except those messages. |
Would you be able to rebuild Nchan from master and check if this issue has been fixed? |
I rebuilt it with 1.2.1 from master. The issue is still not fixed; here is the log: |
Sorry, that doesn't make sense. Either you built Nchan version 1.2.1 or you built it from master.
This indicates there was a Nginx worker crash, and this is what I need to look at. Can you send me the coredump + nginx binary + OS version? If not, I can take a look at it on your server. Either way, email me at leo@nchan.io and we'll figure out how I can fix this bug. |
You should have an email regarding all the info. |
@shtinkov Got your email, but I was unable to read the coredump. Any chance you could look at it on your end in GDB and give me the output of "backtrace full"? |
Hi,
It's a new core dump, because I was also unable to read the old one.
So here is the "backtrace full" output:
gdb /usr/sbin/nginx /tmp/cores/core.nginx.16660
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/nginx...Reading symbols from
/usr/lib/debug/.build-id/25/d5f851e788026a9cc09b93c290ad3573e86f7b.debug...done.
done.
[New LWP 16660]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `nginx: worker process '.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 ngx_http_charset_get_buffer (pool=0x55a9b93421e0, ctx=0x55a9b938e2d0,
size=14)
at src/http/modules/ngx_http_charset_filter_module.c:1137
1137 src/http/modules/ngx_http_charset_filter_module.c: No such file or
directory.
(gdb) backtrace full
#0 ngx_http_charset_get_buffer (pool=0x55a9b93421e0, ctx=0x55a9b938e2d0,
size=14)
at src/http/modules/ngx_http_charset_filter_module.c:1137
b = <optimized out>
cl = 0x6
ll = 0x55a9b938e300
#1 0x000055a9b7d51b88 in ngx_http_charset_recode_from_utf8
(ctx=0x55a9b938e2d0, buf=0x55a9b9225010, pool=0x55a9b93421e0)
at src/http/modules/ngx_http_charset_filter_module.c:839
len = <optimized out>
dst = <optimized out>
i = <optimized out>
ll = <optimized out>
size = <optimized out>
table = <optimized out>
n = 4294967295
b = <optimized out>
out = <optimized out>
cl = <optimized out>
c = 0 '\000'
saved = 0x55a9b938e311 "tion05/0/036U"
p = <optimized out>
src = 0x7f5598197349 "id: "
#2 ngx_http_charset_body_filter (r=<optimized out>, in=<optimized out>) at
src/http/modules/ngx_http_charset_filter_module.c:587
rc = <optimized out>
b = 0x55a9b9225010
cl = 0x55a9b9225090
out = 0x0
ll = 0x7ffe861cdf20
ctx = 0x55a9b938e2d0
#3 0x000055a9b7d52ad1 in ngx_http_trailers_filter (r=0x55a9b9342230,
in=0x55a9b9225090)
at src/http/modules/ngx_http_headers_filter_module.c:274
value = {len = 140005588829984, data = 0x7f55995fe8cd
<___sprintf_chk+125> "H\201\304", <incomplete sequence \330>}
i = <optimized out>
safe_status = <optimized out>
cl = <optimized out>
t = <optimized out>
h = <optimized out>
conf = 0x55a9b927fd60
#4 0x000055a9b7ce4e82 in ngx_output_chain (ctx=ctx@entry=0x55a9b9224dc0,
in=in@entry=0x55a9b9225000)
at src/core/ngx_output_chain.c:214
bsize = <optimized out>
last = <optimized out>
cl = <optimized out>
out = 0x55a9b9225090
last_out = 0x55a9b9224e40
#5 0x000055a9b7d537a5 in ngx_http_copy_filter (r=0x55a9b9342230,
in=0x55a9b9225000) at src/http/ngx_http_copy_filter_module.c:152
rc = <optimized out>
c = 0x55a9b933f8d8
ctx = 0x55a9b9224dc0
clcf = <optimized out>
conf = <optimized out>
#6 0x000055a9b7d2019a in ngx_http_output_filter (r=r@entry=0x55a9b9342230,
in=0x55a9b9225000)
---Type <return> to continue, or q <return> to quit---
at src/http/ngx_http_core_module.c:1770
rc = <optimized out>
c = 0x55a9b933f8d8
#7 0x00007f5598151457 in nchan_output_filter_generic (r=0x55a9b9342230,
msg=msg@entry=0x7f5597638600, in=<optimized out>)
at ./debian/modules/nchan/src/util/nchan_output.c:261
clcf = <optimized out>
rc = <optimized out>
wev = 0x55a9b933fa28
c = 0x55a9b933f8d8
ctx = 0x55a9b9224988
#8 0x00007f55981519e5 in nchan_output_msg_filter (r=<optimized out>,
msg=msg@entry=0x7f5597638600, in=<optimized out>)
at ./debian/modules/nchan/src/util/nchan_output.c:303
No locals.
#9 0x00007f5598160ecc in es_respond_message (sub=0x55a9b9384f00,
msg=<optimized out>)
at ./debian/modules/nchan/src/subscribers/eventsource.c:240
fsub = 0x55a9b9384f00
cur = <optimized out>
last = <optimized out>
msg_buf = <optimized out>
databuf = {
pos = 0x7f55976386d1 "{\"device_id\":35, \"purpose\":19,
\"qtype\":\"async\", \"value\":\"\",\"channel\":\"5/19/35\"}",
last = 0x7f5597638720 "", file_pos = 0, file_last = 0,
start = 0x7f55976386d1 "{\"device_id\":35, \"purpose\":19,
\"qtype\":\"async\", \"value\":\"\",\"channel\":\"5/19/35\"}", end =
0x7f5597638720 "", tag = 0x55a9b7d2f010
<ngx_http_read_client_request_body>, file = 0x0, shadow = 0x0, temporary =
1,
memory = 0, mmap = 0, recycled = 0, in_file = 0, flush = 0, sync
= 0, last_buf = 0, last_in_chain = 0, last_shadow = 0,
temp_file = 0, num = 0}
bc = <optimized out>
first_link = 0x55a9b9225000
last_link = 0x55a9b9224ec8
msgid = {len = 12, data = 0x55a9b9396800 "1538895186:0"}
id_line = {len = 4, data = 0x7f5598197349 "id: "}
event_line = {len = 7, data = 0x7f5598197341 "event: "}
ctx = 0x55a9b9224988
#10 0x00007f5598169889 in spool_respond_general
(self=self@entry=0x55a9b9399a18,
msg=msg@entry=0x7f5597638600, code=code@entry=0,
code_data=code_data@entry=0x0, notice=notice@entry=0) at
./debian/modules/nchan/src/store/spool.c:630
numsubs = {0, 0, 0, 0, 0, 0, 0, 0}
nsub = <optimized out>
nnext = 0x0
sub = <optimized out>
#11 0x00007f559816ada3 in spooler_respond_message (self=0x55a9b9344268,
msg=0x7f5597638600)
at ./debian/modules/nchan/src/store/spool.c:965
srdata = {min = {time = 0, tag = {fixed = {0, 0, 0, 0}, allocd =
0x0}, tagactive = 0, tagcount = 1}, max = {
time = 1538895186, tag = {fixed = {0, 0, 0, 0}, allocd = 0x0},
tagactive = 0, tagcount = 1}, multi = 1 '\001', n = 0,
msg = 0x7f5597638600, spools = {0x55a9b9399a18, 0x55a9b930fba0,
0xcfeb7cf85d5f7e00, 0x720000007b, 0x1, 0x55a9b93a90c0,
0x55a9b93b84a0, 0x55a9b93a88e0, 0x55a9b93a90c0, 0x55a9b93a88e0,
0x7f5599f72ea3, 0x120, 0x1700000000, 0x0, 0x0,
0x1000b921def0, 0x0, 0x55a9b939ff60, 0x55a9b93a99d0,
0x55a9b93a99d8, 0x55a9b93a99d4, 0x55a9b93a99dc, 0x120,
0x55a9b93b0cd0, 0xcfeb7cf85d5f7e00, 0x55a9b939c9f0,
0xcfeb7cf85d5f7e00, 0x1,
0x7f5598175775 <memstore_ensure_chanhead_is_ready+277>,
0x55a9b93b84a0, 0x55a9b9344200, 0x7f5597637070},
overflow = 0x0}
spool = <optimized out>
responded_subs = <optimized out>
---Type <return> to continue, or q <return> to quit---
__PRETTY_FUNCTION__ = "spooler_respond_message"
#12 0x00007f559817708f in nchan_memstore_publish_generic
(head=0x55a9b9344200, msg=<optimized out>,
status_code=status_code@entry=0, status_line=status_line@entry=0x0) at
./debian/modules/nchan/src/store/memory/memstore.c:1405
shared_sub_count = 1
__PRETTY_FUNCTION__ = "nchan_memstore_publish_generic"
#13 0x00007f559816e91e in receive_publish_message (sender=3,
d=0x7ffe861ce4c0)
at ./debian/modules/nchan/src/store/memory/ipc-handlers.c:443
cd_data = {sender = 0, d = 0x55a9b7cefd7f <ngx_time_update+175>,
allocd = 0}
cd = <optimized out>
head = <optimized out>
__PRETTY_FUNCTION__ = "receive_publish_message"
#14 0x00007f559816c65f in ipc_read_handler (ev=0x55a9b930f720) at
./debian/modules/nchan/src/store/memory/ipc.c:457
alert = {
data =
"`pc\227U\177\000\000\000\206c\227U\177\000\000\370\375'\271\251U\000\000\060u\026\230U\177",
'\000' <repeats 12 times>,
"\001\000\000\000\000\000\300\020\024\230U\177\000\000\360\312\071\271\251U\000",
time_sent = 1538895186, src_slot = 3,
worker_generation = 0, code = 5 '\005'}
c = 0x55a9b92d3950
ev = 0x55a9b930f720
#15 0x000055a9b7d0d55c in ngx_epoll_process_events (cycle=0x55a9b921ed20,
timer=<optimized out>, flags=<optimized out>)
at src/event/modules/ngx_epoll_module.c:902
events = 1
revents = 1
instance = <optimized out>
i = 0
level = <optimized out>
err = <optimized out>
rev = 0x55a9b930f720
wev = <optimized out>
queue = <optimized out>
c = 0x55a9b92d3950
#16 0x000055a9b7d0258a in ngx_process_events_and_timers
(cycle=cycle@entry=0x55a9b921ed20)
at src/event/ngx_event.c:242
flags = <optimized out>
timer = <optimized out>
delta = 5093272275
#17 0x000055a9b7d0b355 in ngx_worker_process_cycle
(cycle=cycle@entry=0x55a9b921ed20,
data=data@entry=0x0)
at src/os/unix/ngx_process_cycle.c:750
worker = 0
#18 0x000055a9b7d0976f in ngx_spawn_process (cycle=cycle@entry=0x55a9b921ed20,
proc=proc@entry=0x55a9b7d0b300 <ngx_worker_process_cycle>,
data=data@entry=0x0,
name=name@entry=0x55a9b7d8658b "worker process", respawn=respawn@entry=-3)
at src/os/unix/ngx_process.c:199
on = 1
pid = 0
s = 0
#19 0x000055a9b7d0aa00 in ngx_start_worker_processes
(cycle=cycle@entry=0x55a9b921ed20,
n=4, type=type@entry=-3)
at src/os/unix/ngx_process_cycle.c:359
i = 0
ch = {command = 1, pid = 0, slot = 0, fd = 0}
#20 0x000055a9b7d0c09d in ngx_master_process_cycle (cycle=0x55a9b921ed20)
at src/os/unix/ngx_process_cycle.c:131
title = 0x55a9b92cdbdc "master process /usr/sbin/nginx -g daemon
on; master_process on;"
p = <optimized out>
size = <optimized out>
---Type <return> to continue, or q <return> to quit---
i = <optimized out>
n = <optimized out>
sigio = <optimized out>
set = {__val = {0 <repeats 16 times>}}
itv = {it_interval = {tv_sec = 94187443850632, tv_usec = 0},
it_value = {tv_sec = 0, tv_usec = 0}}
live = <optimized out>
delay = <optimized out>
ls = <optimized out>
ccf = 0x55a9b92202f0
#21 0x000055a9b7ce0368 in main (argc=<optimized out>, argv=<optimized out>)
at src/core/nginx.c:382
b = <optimized out>
log = 0x55a9b7fc8320 <ngx_log>
i = <optimized out>
cycle = 0x55a9b921ed20
init_cycle = {conf_ctx = 0x0, pool = 0x55a9b921d1b0, log =
0x55a9b7fc8320 <ngx_log>, new_log = {log_level = 0, file = 0x0,
connection = 0, disk_full_time = 0, handler = 0x0, data = 0x0,
writer = 0x0, wdata = 0x0, action = 0x0, next = 0x0},
log_use_stderr = 0, files = 0x0, free_connections = 0x0,
free_connection_n = 0, modules = 0x0, modules_n = 0,
modules_used = 0, reusable_connections_queue = {prev = 0x0, next
= 0x0}, reusable_connections_n = 0, listening = {
elts = 0x55a9b921d780, nelts = 4, size = 240, nalloc = 10, pool
= 0x55a9b921d1b0}, paths = {elts = 0x0, nelts = 0,
size = 0, nalloc = 0, pool = 0x0}, config_dump = {elts = 0x0,
nelts = 0, size = 0, nalloc = 0, pool = 0x0},
config_dump_rbtree = {root = 0x0, sentinel = 0x0, insert = 0x0},
config_dump_sentinel = {key = 0, left = 0x0,
right = 0x0, parent = 0x0, color = 0 '\000', data = 0 '\000'},
open_files = {last = 0x0, part = {elts = 0x0,
nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0},
shared_memory = {last = 0x0, part = {elts = 0x0,
nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0},
connection_n = 0, files_n = 0, connections = 0x0,
read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file
= {len = 21,
data = 0x55a9b7d815eb "/etc/nginx/nginx.conf"}, conf_param =
{len = 29, data = 0x7ffe861cee8e "ss"}, conf_prefix = {
len = 11, data = 0x55a9b7d815eb "/etc/nginx/nginx.conf"},
prefix = {len = 17,
data = 0x55a9b7d815d9 "/usr/share/nginx/"}, lock_file = {len =
0, data = 0x0}, hostname = {len = 0, data = 0x0}}
cd = <optimized out>
ccf = 0x55a9b92202f0
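Reading the backtrace: the segfault is inside nginx's charset filter (frames #0–#2, ngx_http_charset_filter_module.c, recoding from UTF-8) while it processes nchan's eventsource output (frame #7, nchan_output_filter_generic). A possible workaround sketch, assuming the crash only occurs when charset recoding is active on the subscriber location — `charset off;` is a standard ngx_http_charset_module directive, but whether this actually avoids the bug is untested:

```nginx
# Sketch: keep the charset filter out of the SSE response path.
# Mirrors the reporter's subscriber location; only "charset off;" is added.
location /miner/subscribe {
    internal;
    charset off;  # do not charset-convert the eventsource stream
    nchan_channel_id $arg_channel;
    nchan_subscriber eventsource;
    nchan_subscriber_first_message newest;
    nchan_subscribe_request /miner/post_msg;
}
```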
|
The unusual entries are these messages:
2018/10/07 09:53:24 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B9233320 /1/3/0 to chanhead_gc. why?
2018/10/07 09:53:24 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B93DE010 /2/3/0 to chanhead_gc. why?
2018/10/07 09:53:24 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B93F38F0 /3/3/0 to chanhead_gc. why?
2018/10/07 09:53:24 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B9405D60 /5/3/0 to chanhead_gc. why?
2018/10/07 09:53:24 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B9418310 /6/3/0 to chanhead_gc. why?
2018/10/07 09:53:24 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B94302F0 /8/3/0 to chanhead_gc. why?
2018/10/07 09:53:30 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B93DE010 /2/3/0 to chanhead_gc. why?
2018/10/07 09:53:30 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B9405D60 /5/3/0 to chanhead_gc. why?
2018/10/07 09:53:36 [error] 16661#16661: MEMSTORE:01: tried adding WAITING
chanhead 000055A9B93DE010 /2/3/0 to chanhead_gc. why?
and this:
18/10/07 09:59:31 [error] 16663#16663: MEMSTORE:03: force-reaping msg with
refcount 1
This is the error.log from this session of the nginx server.
I hope it gives you more information.
Best Regards!
|
Looking into it, I'll get back to you in a few days. |
Hi,
I have a working eventsource configuration with nchan 1.1.15. I upgraded it to nchan 1.2.1, but it didn't work.
I got the following error messages in error.log:
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A8162C0 /031B021C-040D-0559-B606-BE0700080009 to chanhead_gc. why?
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A7A2240 /03D502E0-045E-0500-0206-E70700080009 to chanhead_gc. why?
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A7C3E60 /9DE38514-44D5-25B2-64B6-B06EBF30BCA9 to chanhead_gc. why?
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A69FEC0 /04DBB790-6509-4167-D231-38D547E17611 to chanhead_gc. why?
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A4F7350 /ECBC7A90-B981-2CF0-1759-3497F695BB22 to chanhead_gc. why?
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663ACBC7D0 /A36CE1C7-7792-1BAD-A69A-107B444B5970 to chanhead_gc. why?
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A759730 /CC8B5E10-A25A-764A-4DB5-9C5C8E85EA8A to chanhead_gc. why?
2018/08/03 08:33:45 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A58A640 /8BD09163-7B05-ED58-AA16-107B4449EC37 to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A54C7D0 /0DD126CD-373F-63DB-F7AC-107B444A044D to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A4CBCD0 /031B021C-040D-055A-A806-D30700080009 to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A84A0C0 /03D502E0-045E-0500-0106-030700080009 to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A7660A0 /FC809F30-ED68-CB11-D8F6-2C4D54D043C1 to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A738600 /031B021C-040D-055B-4E06-6F0700080009 to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A4D95F0 /03467603-FB8D-FB29-D6FB-2C4D54D0539D to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A5939C0 /CC8B5E10-A25A-764A-4DB5-F832E4BB5CC5 to chanhead_gc. why?
2018/08/03 08:33:46 [error] 15366#15366: MEMSTORE:01: tried adding WAITING chanhead 000055663A7E40C0 /CBA2EDD6-C2E5-D804-3C74-2C4D54D0491E to chanhead_gc. why?
My nginx is 1.14.0, running on Debian 9.5.
Here is my nchan related configuration:
location /miner/publish {
    access_log off;
    nchan_channel_id $arg_channel;
    nchan_message_buffer_length 200;
    nchan_message_timeout 10s;
    nchan_publisher;
}
location /miner/subscribe {
    internal;
    nchan_channel_id $arg_channel;
    nchan_subscriber eventsource;
    nchan_subscriber_first_message newest;
    nchan_subscribe_request /miner/post_msg;
}
location = /miner/post_msg {
    internal;
    access_log off;
    proxy_pass $scheme://127.0.0.1:$server_port/miner/publish?channel=$arg_postchannel;
    proxy_cache off;
    proxy_send_timeout 3s;
    proxy_method POST;
    proxy_set_header Accept text/json;
    proxy_set_header Content-Type text/json;
    proxy_set_body '{"msg":"$arg_msg","channel":"$arg_channel"}';
}
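The eventsource frames visible in the backtrace (frame #9: `id_line = "id: "`, `event_line = "event: "`, followed by the JSON payload) follow the standard EventSource wire format: `id:`, `event:`, and `data:` lines terminated by a blank line. As a client-side debugging aid, here is a minimal sketch of a parser for that framing — a hypothetical helper, not part of nchan, and simplified relative to the full SSE specification:

```python
def parse_sse(stream: str):
    """Parse EventSource wire-format text into message dicts.

    Each message is a run of "field: value" lines ended by a blank line.
    Only the id, event, and data fields are handled; multi-line data
    is joined with newlines, per the SSE convention.
    """
    messages = []
    current = {"id": None, "event": None, "data": []}
    for line in stream.splitlines():
        if line == "":
            # Blank line: dispatch the accumulated message, if any.
            if current["data"]:
                messages.append({
                    "id": current["id"],
                    "event": current["event"],
                    "data": "\n".join(current["data"]),
                })
            current = {"id": None, "event": None, "data": []}
        elif line.startswith("data: "):
            current["data"].append(line[len("data: "):])
        elif line.startswith("id: "):
            current["id"] = line[len("id: "):]
        elif line.startswith("event: "):
            current["event"] = line[len("event: "):]
    return messages
```

For instance, feeding it `"id: 1538895186:0\nevent: message\ndata: hello\n\n"` yields one message whose id matches the `msgid` seen in frame #9.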