Channel: Nginx Forum

Server 2008 r2 Nginx sending < 1mbps

Recently set up Nginx Gryphon on a Windows box; nothing changed in the config other than adding the rtmp section.

The exact same zip/setup on 3 other machines gives great output, between 4-5 Mbps. On this Server 2008 R2 box, I can't get more than about 1 Mbps out.

Port 1935 is added to the firewall, and I even tried turning the firewall off entirely. It's not being blocked, because I see nginx.exe sending some data in Resource Monitor, but I can never receive more than 1 Mbps, if anything at all.

I added a 500 MB file to the http dir, and it downloads at > 10 Mbps (but that uses a different port).

The host spoke with the data center, and assures me there is no throttling of rtmp or 1935 traffic.

Hoping someone here might have some insight on what is going on here. Thanks!
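For reference, "the rtmp section" mentioned above usually looks roughly like the following minimal sketch (this assumes the nginx-rtmp-module bundled with the Gryphon builds; the application name "live" is only a placeholder, not taken from the post):

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;      # accept live streams pushed to rtmp://host/live/<key>
            record off;   # do not write incoming streams to disk
        }
    }
}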

please delete me from your mailing list

--
R. A. "Andy" Millar
920 S. Main
PO Box 388
Milton-Freewater, OR 97862
541 938-4485 fax 541 938-0328

Adding Form Module to NGINX

Hi All,

I'm having issues adding this functionality to my NGINX web server: https://www.nginx.com/resources/wiki/modules/form_input/

So far, I've downloaded the ngx_devel_kit source code and put it in /etc/nginx. I tried using 'tar -xzf' to unpack the code (this did nothing; I guess it's already unpacked), and then tried using '--add-module=/etc/nginx/ngx_devel_kit-master', which also did nothing, because I guess it needs a file, not a directory, specified.

I'm not sure what to try next. Any help getting this module working would be greatly appreciated.

The whole point of this is just to handle form data.
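For what it's worth, --add-module is a compile-time option to nginx's ./configure script, not something read from nginx.conf: it takes the path of the directory that contains the module's "config" file, and both ngx_devel_kit and the form-input module have to be passed that way before nginx is recompiled. Once a build with the module exists, usage would look roughly like the sketch below (directive names are taken from the wiki page linked above; the location, field names, and backend are placeholders, not anything from this post):

location /post-handler {
    # keep small form posts entirely in memory
    client_max_body_size     100k;
    client_body_buffer_size  100k;

    set_form_input $username;               # value of the "username" form field
    set_form_input $city     city_name;     # value of the "city_name" form field

    # hand the extracted values to whatever actually processes them
    proxy_set_header X-Form-Username $username;
    proxy_set_header X-Form-City     $city;
    proxy_pass http://127.0.0.1:8080;       # placeholder backend
}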

Thanks.
Alex

"Client prematurely closed connection" error

I am using nginx version 1.16.1 as a forward proxy on a CentOS 7 server. Our order inventory server (ORDINV) connects through the nginx server (TAXPROXY) to request sales tax info from a cloud server. We are in test mode with this configuration. This all has been working well, except when they stop testing for a while, the taxproxy server stops working. The error log shows


2020/02/17 08:33:07 [info] 879#879: 42826 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while SSL handshaking to upstream, client: xxx.xx.x.xx, server: taxproxy.umpublishing.org, request: "POST / HTTP/1.1", upstream: "https://yy.yyy.yyy.yy:443/services/CalculateTax70", host: "zzz.zzz.z.zz:80"

After a couple of these [info] entries there is an [error]: "peer closed connection in SSL handshake (104:Connection reset by peer) while SSL handshaking to upstream".

The nginx service is running, and if I issue a systemctl restart nginx, everything starts working again fine. Any ideas what might be wrong? Googling turned up several sites where people reported similar problems but no one had gotten an answer....

TIA~
Cindy

simple file based authorization

My nginx config file is as follows:

server {
    ...
    location / {
        ...
        auth_request /custom_auth;
        ...
    }
    location /custom_auth {
        proxy_pass http://localhost:9123;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header Host $host;
        proxy_set_header X-Original-URI $request_uri;
    }
}

The client will provide a URL (the URL points to a 3rd-party application) and a username. The URL contains a project name. I have a simple file on the same server with a project-to-username mapping. If the mapping exists, allow the request to proceed; otherwise fail it.

How can I implement this at http://localhost:9123? Using OAuth2? When I checked various sample code, it talks about passwords, tokens, etc. Can this be done in a much simpler manner?
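If the mapping file can be expressed as a plain key/value list, one simpler option is to skip the separate service on port 9123 entirely and let nginx do the lookup itself with map, answering the auth_request subrequest locally. A rough sketch, assuming (this is not stated in the post) that the username arrives in an X-User request header and the project is the first path segment; the mapping file path is made up for illustration:

# http{} level: look the (project path, user) pair up in a plain file
map "$uri:$http_x_user" $auth_ok {
    default 0;
    include /etc/nginx/project_users.map;   # lines like:  "~^/projectA/.*:alice$"  1;
}

server {
    ...
    location / {
        ...
        auth_request /custom_auth;
        ...
    }

    location = /custom_auth {
        internal;
        if ($auth_ok) {
            return 204;      # any 2xx tells auth_request "allowed"
        }
        return 403;          # everything else is rejected
    }
}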

Redirect URL before reaching target

Hi,
I have searched around and tried tons of different tips and tricks without any success. I have an Nginx reverse proxy installed which is working fine. All my URLs are working as they should, including redirects to different services on the same server. (That is why I need the reverse proxy.)

However, I have one issue. My UniFi controller sends out emails to my customers when an AP or other equipment goes down, with a link to unifi.xx.se:8443, and when that link is used my Let's Encrypt certificate is not in use, although it is in use for both http://unifi.xx.se and https://unifi.xx.se.

I have this configuration in my /etc/nginx/sites-enabled/xx.conf


server {
    listen 80;
    server_name unifi.xx.se;
    return 301 https://$host$request_uri;
}

# For ssl
server {
    ssl on;
    ssl_certificate /etc/letsencrypt/live/unifi.xx.se/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/unifi.xx.se/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    default_type application/octet-stream;

    listen 443;
    server_name unifi.xx.se;

    root /var/www/unifi.xx.se;

    location ~ /.well-known {
        allow all;
    }

    location / {
        include proxy_params;
        proxy_pass https://10.42.0.185:8443;
    }
}


Is it possible to rewrite or redirect a request to unifi.xx.se:8443 so that it is handled as unifi.xx.se before being passed on?
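One option, assuming unifi.xx.se resolves to the nginx host rather than directly to the controller, is to have nginx listen on 8443 as well and simply bounce the emailed links back to the vhost that is already proxied. A minimal sketch reusing the certificate paths from the config above:

server {
    listen 8443 ssl;
    server_name unifi.xx.se;

    ssl_certificate /etc/letsencrypt/live/unifi.xx.se/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/unifi.xx.se/privkey.pem;

    # send the emailed :8443 links back through the normal proxied name
    return 301 https://unifi.xx.se$request_uri;
}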

Re: Nginx Valid Referer - Access Control - Help Wanted

Francis Daly Wrote:
-------------------------------------------------------
> On Thu, Feb 06, 2020 at 06:02:50PM -0500, AshleyinSpain wrote:
>
> Hi there,
>
> > > > server {
> > > > location /radio/ {
> > > > valid_referers none blocked server_names ~\.mysite\.;
> > > > if ($invalid_referer) { return 403; }
> > > > }
> > > > }
>
> > I deleted the 'none' and 'blocked' and no difference still not
> blocking
> > direct access to the URL
> >
> > Tried adding it in its own block and adding it to the end of an
> existing
> > block neither worked
> >
> > Is the location /radio/ part ok
> >
> > I am trying to block direct access to any URL with a directory
> /radio/
> >
> > The URLs look like sub.domain.tld/radio/1234/mytrack.mp3?45678901
>
> In nginx, one request is handled in one location.
>
> If /radio/ is the location that you configured to handle this request,
> then the config should apply.
>
> If you have, for example, "location ~ mp3", then *that* would probably
> be the location that is configured to handle this request (and so that
> is where this "return 403;" should be.
>
> You could try changing the line to be "location ^~ /radio/ {", but
> without knowing your full config, it is hard to know if that will fix
> things or break them.
>
> http://nginx.org/r/location
>
> > I need it so the URL is only served if a link on *.mysite.* is
> clicked ie
> > the track is only played through an html5 audio player on mysite
>
> That is not a thing that can be done reliably.
>
> If "unreliable" is good enough for you, then carry on. Otherwise, come
> up with a new requirement that can be done.
>
> Cheers,
>
> f
> --
> Francis Daly francis@daoine.org

Hi Francis

I've added further comments here, it's getting a bit messy above

I added, as you suggested, the ^~ to /radio/, and it now blocks direct access, redirecting to where I pointed the invalid_referer handling.

The valid_referers part doesn't work, though:

valid_referers server_names
*.mysite.com mysite.com dev.mysite.* can.mysite.* can.mysite.com/dashboard
~\.mysite\.;

it doesn't recognise the parameters or URLs.

I copied the examples in the docs, and I have tried loads of variations taken from various suggestions online.

When you say above that "that is not a thing that can be done reliably", is that because the headers can be 'forged', or because it just doesn't work properly?

I am only trying to stop someone casually copying the stream URL and pasting it into a browser to listen for free. I realise any determined person can get around it, and I'm not trying to stop that with this; ultimately I will have to add more robust controls with JS and passwords, but that will come later on down the line.

Do you need me to copy the entire nginx config here?

Thanks for your help

Ashley

Re: DNS load balancing issue

Hi Maxim,

Thanks for responding. I agree with your recommendation. I guess a direct upgrade from 1.12 to 1.16 (free community version) is possible and shouldn't break it.

I'm preferring 1.16 since it's the latest stable version. Besides the upgrade, do you recommend any performance tuning?

Thanks


FYI - This is the error I see in the "dns.log" occurring frequently.

2020/02/19 16:47:28 [error] 19509#0: *4852298929 no live upstreams while connecting to upstream, udp client: x.x.x.x, server: 0.0.0.0:53, upstream: "dns_servers", bytes from/to client:50/0, bytes from/to upstream:0/0

This is the nginx.conf ---


worker_processes auto;
error_log /var/log/nginx/error.log;

include /usr/share/nginx/modules/*.conf;

events {
}

stream {
    upstream dns_servers {
        server x.x.x.x:53 fail_timeout=60s;
        server x.x.x.x:53 fail_timeout=60s;
        server x.x.x.x:53 fail_timeout=60s;
    }
    server {
        listen 53 udp;
        listen 53; #tcp
        proxy_pass dns_servers;
        error_log /var/log/nginx/dns.log info;
        proxy_responses 1;
        proxy_timeout 5s;
    }
}

http {
    index index.html;
    server {
        listen 80 default_server;
        server_name _;
        access_log /var/log/nginx/access.log;
        server_name_in_redirect off;
        root /var/www/default/htdocs;
        allow x.x.x.x;
        deny all;
        location /nginx_status {
            stub_status on;
            access_log off;
        }
    }
}
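As an aside on the "no live upstreams" error quoted above: it means every server in the dns_servers group was considered unavailable at that moment. With the settings shown, each server keeps the default max_fails=1, so a single failed or timed-out UDP exchange removes that server from rotation for the whole fail_timeout=60s window, and three near-simultaneous hiccups empty the group. A hedged variation that makes marking-down less aggressive (the values are illustrative, not a recommendation):

upstream dns_servers {
    server x.x.x.x:53 max_fails=3 fail_timeout=10s;
    server x.x.x.x:53 max_fails=3 fail_timeout=10s;
    server x.x.x.x:53 max_fails=3 fail_timeout=10s;
}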

shared location for fastcgi_temp_path and client_body_temp_path

Hello,

We are running several instances of nginx for the same application. We would like to know if it is safe to share a common storage location across those instances for the fastcgi_temp_path and client_body_temp_path parameters.
Are the generated temporary files unique for each instance?
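One way to sidestep the question entirely, if in doubt, is to keep the shared storage but give every instance its own subtree, so instances can never collide on file names. A minimal sketch (the /shared/... paths and instance name are made up for illustration):

# per-instance temp subtrees on the shared storage
fastcgi_temp_path     /shared/nginx-tmp/instance-a/fastcgi 1 2;
client_body_temp_path /shared/nginx-tmp/instance-a/client_body 1 2;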

Regards

Stephane Durieux
DSI - Pôle infrastructure
Université Claude Bernard


Re: Nginx php reverse proxy problem

Thanks, I just tried and it didn't work.
If I use the IP to access it, I don't have any problem; the problem only appears when it goes through nginx.

Maybe there are some parameters in the PHP config of my server that I need to change?

Re: Nginx Valid Referer - Access Control - Help Wanted

On Wed, Feb 19, 2020 at 06:30:39PM -0500, AshleyinSpain wrote:
> Francis Daly Wrote:
> > On Thu, Feb 06, 2020 at 06:02:50PM -0500, AshleyinSpain wrote:

Hi there,

> > > I am trying to block direct access to any URL with a directory
> > /radio/
> > >
> > > The URLs look like sub.domain.tld/radio/1234/mytrack.mp3?45678901

> > > I need it so the URL is only served if a link on *.mysite.* is
> > clicked ie
> > > the track is only played through an html5 audio player on mysite
> >
> > That is not a thing that can be done reliably.

> The valid_referer part doesn't work though,
>
> valid_referers server_names
> *.mysite.com mysite.com dev.mysite.* can.mysite.*
> can.mysite.com/dashboard
> ~\.mysite\.;
>
> it doesn't recognise the parameters or urls

Can you show exactly what you mean by "doesn't work"? It seems to work
for me.

That is, if I use

===
server {
    listen 8080 default_server;
    server_name three;
    location ^~ /radio/ {
        valid_referers server_names
            *.mysite.com mysite.com dev.mysite.* can.mysite.*
            can.mysite.com/dashboard ~\.mysite\.;
        if ($invalid_referer) { return 403; }
        return 200 "This request is allowed: $request_uri, $http_referer\n";
    }
}
===

then I see (403 is "blocked"; 200 is "allowed"):

# no Referer
$ curl -i http://127.0.0.1:8080/radio/one
403

# Referer that matches can.mysite.*
$ curl -i -H Referer:http://can.mysite.cxx http://127.0.0.1:8080/radio/one
200

# Referer that does not match can.mysite.com/dashboard
curl -i -H Referer:http://can.mysite.com/dashboar http://127.0.0.1:8080/radio/one
403

# Referer that matches can.mysite.com/dashboard
curl -i -H Referer:http://can.mysite.com/dashboards http://127.0.0.1:8080/radio/one
200

# Referer that matches a server_name
$ curl -i -H Referer:https://three http://127.0.0.1:8080/radio/one
200

> I copied the examples in the docs and I have tried loads of variations taken
> from various suggestions etc online

If you can show one specific config that you use; and one specific
request that you make; and the response that you get and how it is not
the response that you want; it will probably be easier to identify where
the problem is.

> When you say above - That is not a thing that can be done reliably is that
> because the headers can be 'forged' or it just doesn't work properly

The headers can be forged, just like I do above in the "curl" commands.

All the best,

f
--
Francis Daly francis@daoine.org

Re: Nginx php reverse proxy problem

On Thu, Feb 20, 2020 at 08:10:14AM -0500, adrian.hilt wrote:

Hi there,

> Thanks, I just tried and it didn't work.

What config do you use?

What request do you make?

What response do you get?

What response do you want instead?

> If I use the ip to access I don't have any problem, when it goes throw nginx
> is the problem.

I don't understand what that means.

Can you copy-paste the (e.g.) "curl -v" output for a working and failing
request? Feel free to edit any private data; but if you do, please edit
it consistently.

> Maybe are there some parameters in the php config of my server that I need
> to change?

Maybe.

But guessing may not be the most efficient way to resolve the problem.

Cheers,

f
--
Francis Daly francis@daoine.org

[nginx] Disabled multiple Transfer-Encoding headers.

details: https://hg.nginx.org/nginx/rev/aca005d232ff
branches:
changeset: 7625:aca005d232ff
user: Maxim Dounin <mdounin@mdounin.ru>
date: Thu Feb 20 16:19:29 2020 +0300
description:
Disabled multiple Transfer-Encoding headers.

We anyway do not support more than one transfer encoding, so accepting
requests with multiple Transfer-Encoding headers doesn't make sense.
Further, we do not handle multiple headers, and ignore anything but
the first header.

Reported by Filippo Valsorda.

diffstat:

src/http/ngx_http_request.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -131,7 +131,7 @@ ngx_http_header_t ngx_http_headers_in[]

{ ngx_string("Transfer-Encoding"),
offsetof(ngx_http_headers_in_t, transfer_encoding),
- ngx_http_process_header_line },
+ ngx_http_process_unique_header_line },

{ ngx_string("TE"),
offsetof(ngx_http_headers_in_t, te),

[nginx] Removed "Transfer-Encoding: identity" support.

details: https://hg.nginx.org/nginx/rev/fe5976aae0e3
branches:
changeset: 7626:fe5976aae0e3
user: Maxim Dounin <mdounin@mdounin.ru>
date: Thu Feb 20 16:19:34 2020 +0300
description:
Removed "Transfer-Encoding: identity" support.

The "identity" transfer coding has been removed in RFC 7230. It is
believed that it is not used in real life, and at the same time it
provides a potential attack vector.

diffstat:

src/http/ngx_http_request.c | 5 +----
1 files changed, 1 insertions(+), 4 deletions(-)

diffs (15 lines):

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1952,10 +1952,7 @@ ngx_http_process_request_header(ngx_http
r->headers_in.content_length_n = -1;
r->headers_in.chunked = 1;

- } else if (r->headers_in.transfer_encoding->value.len != 8
- || ngx_strncasecmp(r->headers_in.transfer_encoding->value.data,
- (u_char *) "identity", 8) != 0)
- {
+ } else {
ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
"client sent unknown \"Transfer-Encoding\": \"%V\"",
&r->headers_in.transfer_encoding->value);

[nginx] Disabled duplicate "Host" headers (ticket #1724).

details: https://hg.nginx.org/nginx/rev/4f18393a1d51
branches:
changeset: 7627:4f18393a1d51
user: Maxim Dounin <mdounin@mdounin.ru>
date: Thu Feb 20 16:51:07 2020 +0300
description:
Disabled duplicate "Host" headers (ticket #1724).

Duplicate "Host" headers were allowed in nginx 0.7.0 (revision b9de93d804ea)
as a workaround for some broken Motorola phones which used to generate
requests with two "Host" headers[1]. It is believed that this workaround
is no longer relevant.

[1] http://mailman.nginx.org/pipermail/nginx-ru/2008-May/017845.html

diffstat:

src/http/ngx_http_request.c | 12 ++++++++++--
1 files changed, 10 insertions(+), 2 deletions(-)

diffs (24 lines):

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1755,10 +1755,18 @@ ngx_http_process_host(ngx_http_request_t
ngx_int_t rc;
ngx_str_t host;

- if (r->headers_in.host == NULL) {
- r->headers_in.host = h;
+ if (r->headers_in.host) {
+ ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
+ "client sent duplicate host header: \"%V: %V\", "
+ "previous value: \"%V: %V\"",
+ &h->key, &h->value, &r->headers_in.host->key,
+ &r->headers_in.host->value);
+ ngx_http_finalize_request(r, NGX_HTTP_BAD_REQUEST);
+ return NGX_ERROR;
}

+ r->headers_in.host = h;
+
host = h->value;

rc = ngx_http_validate_host(&host, r->pool, 0);

Re: limit_req_zone Documentation Wrong

Hello Aidan,

> On 24 Jan 2020, at 22:17, Aidan Carson <aidan.kodi@gmail.com> wrote:
>
> Hello,
>
> I believe the documentation for the limit_req_zone directive on this page is wrong:
>
> http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
>
> It says that a rate parameter is not optional, but it is. The directive requires at least three parameters, but
>
> limit_req_zone $binary_remote_addr zone=limit:64k sync;
>
> or
>
> limit_req_zone $binary_remote_addr zone=limit:64k zone=limit:64k;
>
> are valid, omitting the rate. I see in the code that the default is 1r/s. Perhaps updating the documentation to list the default would be good, or changing the code to have the rate be required.
>
> Thank you,
>
> Aidan Carson

Thank you for your feedback on the docs. The “rate” parameter is assumed to be obligatory, even though the directive syntax (http://nginx.org/r/limit_req_zone) can be written in a way that makes it optional. For the common use case, the current behaviour is considered correct here, so the documentation would also be correct. I wouldn’t expect many changes here, but let’s leave that to the developers.
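For comparison with the zone-only examples quoted above, the documented form spells the rate out explicitly (the zone name and size below simply mirror those examples):

limit_req_zone $binary_remote_addr zone=limit:64k rate=1r/s;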

Best regards,
yar

[...]

Live Activity dashboard

Hi,

Does the open-source nginx provide a live activity dashboard similar to the Plus version?

All I know is "stub_status".
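For reference, the open-source build exposes its basic counters through stub_status rather than through the Plus dashboard; a minimal endpoint looks roughly like this (the location name and allowed address are arbitrary):

location = /nginx_status {
    stub_status on;       # plain-text connection and request counters
    allow 127.0.0.1;      # restrict to local monitoring checks
    deny all;
    access_log off;
}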

Thanks

External service interaction DNS

We are running a site using Nginx, and as part of vulnerability scanning we are getting reports of a DNS proxy form of exploit.

Essentially, it is possible to inject DNS lookups as part of the URI, the GET request payload, or even the Referer section of the HTTP header.

From the Nginx perspective, we wanted to know if there is a way to prevent Nginx from attempting to resolve any DNS names supplied as part of the URI, the HTTP Referer, or even the User-Agent attribute.
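Generally, nginx only performs DNS lookups on names that the configuration itself asks it to resolve; it does not resolve host names found in the URI, Referer, or User-Agent on its own. The usual way scanners trigger this is a proxy_pass built from client-controlled variables together with a resolver. A sketch of the pattern to look for, and the pinned alternative (backend.internal is a placeholder name):

location / {
    resolver 8.8.8.8;

    # risky: the upstream host comes straight from the client request,
    # so nginx would resolve whatever name the scanner injects
    #proxy_pass http://$http_host$request_uri;

    # pinned: only a fixed, operator-chosen name is ever resolved
    proxy_pass http://backend.internal;
}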

[crit] SSL_read_early_data() failed

I enabled the "ssl_early_data" option on my server. Everything seems fine, but error.log now contains quite a few entries of the following kind (around 0.5% of all requests, which at a million requests of traffic is already a bit of a strain):
[crit] 11016#11016: *46796 SSL_read_early_data() failed (SSL: error:1423D06E:SSL routines:tls_parse_ctos_server_name:bad extension) while SSL handshaking, client: <client_ip_address>, server: 0.0.0.0:443

In light of this, I would like to know:
1. Am I right that this is a problem on the client side (clients that are too old?), and that nothing can be done on the server to fix the situation?
2. If so, why is this notice logged at such a high severity level (crit)?
3. Is it possible to suppress this notice somehow (for example, by lowering its severity to warn) so the logs return to their formerly respectable state?

nginx -V
nginx version: nginx/1.17.8
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.17.8/debian/debuild-base/nginx-1.17.8=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

Nginx - 56 day old reverse-proxy suddenly unable to connect upstream.

I have nginx configured as a reverse proxy to Amazon's AWS IoT MQTT service. This was functioning well for almost 2 months, when suddenly 20 out of 32 instances of this stopped being able to connect upstream. We started seeing sporadic upstream SSL connection errors, followed by sporadic upstream connection refused, and then finally, mostly connection timeouts to upstream. Nothing short of a restart or reload of Nginx fixes this. Debug logging is not enabled, and trying to enable it replaces the worker processes, and effectively ends the issue. Over the next 3 days, the remaining nodes started exhibiting this problem as well. Rather than restarting nginx on these remaining nodes, I isolated them for study, and stood up new nodes to replace them.

But in studying these, I cannot find any indicator as to why this is happening. Now that these have been removed from client traffic, and I can test with curl's... I can hit one of these 5 times, and by the 5th call, I get a repro. Connection timeout to the upstream, resulting in a timeout to me.

==========================================================
Here is the version information for nginx, as it comes from Ubuntu 18.04:
nginx version: nginx/1.14.0 (Ubuntu)
built with OpenSSL 1.1.1 11 Sep 2018
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fdebug-prefix-map=/build/nginx-GkiujU/nginx-1.14.0=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-mail=dynamic --with-mail_ssl_module

==========================================================
nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
worker_rlimit_nofile 30500;

events {
    worker_connections 10000;
    # multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    #IPV6 also disabled via kernel boot option and sysctl, too.
    #Couldn't get nginx to stop AAAA lookups without doing that.
    resolver 8.8.8.8 8.8.4.4 valid=3s ipv6=off;
    resolver_timeout 10;

    # enable reverse proxy
    proxy_redirect off;
    proxy_set_header Host CENSORED.amazonaws.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwared-For $proxy_add_x_forwarded_for;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    gzip on;

    # Nginx-lua-prometheus
    # Prometheus metric library for Nginx
    lua_shared_dict prometheus_metrics 10M;
    lua_package_path "/etc/nginx/nginx-lua-prometheus/?.lua";
    init_by_lua '
        prometheus = require("prometheus").init("prometheus_metrics")
        metric_requests = prometheus:counter(
            "nginx_http_requests_total", "Number of HTTP requests", {"host", "status"})
        metric_latency = prometheus:histogram(
            "nginx_http_request_duration_seconds", "HTTP request latency", {"host"})
        metric_connections = prometheus:gauge(
            "nginx_http_connections", "Number of HTTP connections", {"state"})
    ';
    log_by_lua '
        metric_requests:inc(1, {ngx.var.server_name, ngx.var.status})
        metric_latency:observe(tonumber(ngx.var.request_time), {ngx.var.server_name})
    ';

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

==========================================================
iot-proxy config file:
# Define group of backend / upstream servers:
upstream iot-backend {
    server CENSORED.amazonaws.com:443;
}

server {
    #listen 443 default ssl;
    listen 443 ssl;
    server_name CENSORED.something.com;

    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 86400;
    ssl_certificate /etc/nginx/ssl/CENSORED.crt;
    ssl_certificate_key /etc/nginx/ssl/CENSORED.key;
    ssl_verify_client off;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://iot-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host "CENSORED.amazonaws.com:443";
        proxy_read_timeout 86400;
        proxy_ssl_session_reuse off;
    }
}
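One observation about the config above (whether it is actually what bit these nodes is speculation): the hostname inside the upstream{} block is resolved only when the configuration is loaded, so if the AWS endpoint's addresses rotate after weeks of uptime, this nginx keeps connecting to stale IPs until a reload. A hedged sketch of forcing runtime re-resolution instead, reusing the resolver already defined in nginx.conf (the variable name is arbitrary):

    location / {
        set $iot_backend "CENSORED.amazonaws.com";
        proxy_pass https://$iot_backend;    # with a variable, the name is re-resolved via the resolver
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host "CENSORED.amazonaws.com:443";
        proxy_read_timeout 86400;
        proxy_ssl_session_reuse off;
    }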

==========================================================
nginx-lua-prometheus config file:
server {
    listen 9145;
    allow 0.0.0.0/0;
    allow 127.0.0.1/32;
    deny all;
    location /metrics {
        content_by_lua '
            metric_connections:set(ngx.var.connections_reading, {"reading"})
            metric_connections:set(ngx.var.connections_waiting, {"waiting"})
            metric_connections:set(ngx.var.connections_writing, {"writing"})
            prometheus:collect()
        ';
    }
}