Channel: Nginx Forum

Re: Multiple Cache Manager Processes or Threads

Sorry, I gave wrong values:

At the beginning, the ram cache is correctly purged to around 300GB (+/- input bandwidth * 10 sec), but when the hdd cache begins to fill up, the ram cache grows to over 320GB.

Plan to support proxy protocol v2?

Hi,

The AWS ELBv2 only works with proxy protocol v2. Is there any plan to support this version in nginx soon?

regards
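For context, a minimal sketch of how the PROXY protocol is accepted on a listener today (this is v1 behaviour and does not answer the v2 question; the subnet below is a hypothetical load-balancer range):

server {
    listen 443 ssl proxy_protocol;     # accept the PROXY protocol header from the load balancer
    server_name example.com;
    set_real_ip_from 10.0.0.0/8;       # hypothetical ELB subnet
    real_ip_header   proxy_protocol;   # take the client address from the PROXY header
    # ssl_certificate / ssl_certificate_key and the rest of the server config go here
}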

Re: Overridable header values (with map?)

On Thu, Nov 30, 2017 at 9:45 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:

> Hello!
>

Hi!

The error in question will only appear if you don't have the
> variable defined at all, that is, it is not used anywhere in your
> configuration. Using it at least somewhere will resolve the
> error. That is, just add something like
>
> set $robots off;
>
> anywhere in your configuration as appropriate (for example, in the
> default server{} block).
>
> Once you are able to start nginx, you'll start getting
> warnings when the variable is used uninitialized, e.g.:
>
> ... [warn] ... using uninitialized "robots" variable ...
>
> These warnings can be switched off using the
> uninitialized_variable_warn directive, see
> http://nginx.org/r/uninitialized_variable_warn.


That worked perfectly! Thank you very much!
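For reference, a minimal sketch of the suggested workaround, assuming a catch-all default server block (the listen value is illustrative):

server {
    listen 80 default_server;
    set $robots off;                    # define the variable somewhere so nginx starts without errors
    uninitialized_variable_warn off;    # optionally silence "using uninitialized variable" warnings
    # ... the rest of the default server configuration ...
}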
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Multiple Cache Manager Processes or Threads

Hello!

On Thu, Nov 30, 2017 at 12:20:19PM -0500, traquila wrote:

> I have an issue with the cache manager and the way I use it.
> When I configure 2 different caches zones, one very huge and one very fast,
> the cache manager can't delete files quickly enough and lead to a partition
> full.
>
> For example:
> proxy_cache_path /mnt/hdd/cache levels=1:2:2 keys_zone=cache_hdd:40g
> max_size=40000g inactive=5d;
> proxy_cache_path /mnt/ram/cache levels=1:2 keys_zone=cache_ram:300m
> max_size=300g inactive=1h;
>
> At the beginning, the ram cache is correctly purged to around 40GB (+/- input
> bandwidth*10sec), but when the hdd cache begins to fill up, the ram cache
> grows to over 50GB. I think the cache manager is stalled by the slowness of the
> filesystem / hardware.
>
> I can fix this by using 2 nginx on the same machine, one configured as ram
> cache, the other as hdd cache; but I wonder if it would be possible to
> create a cache manager process for each proxy_cache_path directive.

Which nginx version are you using?

With nginx 1.11.5+, there are manager_files / manager_sleep /
manager_threshold parameters you may want to play with, see
http://nginx.org/r/proxy_cache_path. These parameters allow
limiting the cache manager's work on a particular cache to some finite
time, and therefore help to better maintain the specified max_size of
other caches.
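For illustration, a hedged sketch of the fast cache from the original post with these parameters added (the values are only examples to tune, not recommendations):

# manager_files: max files removed per manager iteration (default 100)
# manager_threshold: max time spent per iteration on one cache (default 200ms)
# manager_sleep: pause between iterations (default 50ms)
proxy_cache_path /mnt/ram/cache levels=1:2 keys_zone=cache_ram:300m
                 max_size=300g inactive=1h
                 manager_files=2000 manager_threshold=500ms manager_sleep=50ms;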

If you are using an older version, an upgrade to the recent
version might help even without further tuning, as older versions
do not limit cache manager's work on a particular cache at all.

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Re: How to control the total requests in Ngnix

On Thu, Nov 30, 2017 at 08:04:41PM +0800, tongshushan@migu.cn wrote:

Hi there,

> what is the same "key" for all requests from different client IPs for limit_conn_zone/limit_req_zone? I have no idea about this.

Any $variable might be different in different connections. Any fixed
string will not be.

So:

limit_conn_zone "all" zone=all...

for example.
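For illustration, a hedged sketch of how a constant key caps totals across all clients, reusing the location and upstream from the question (zone names and numbers are only examples):

# one shared bucket for every request, since the key is the fixed string "all"
limit_req_zone  "all" zone=total_req:1m rate=2000r/s;
limit_conn_zone "all" zone=total_conn:1m;

server {
    location /mylocation/ {
        limit_req  zone=total_req burst=200;   # server-wide request rate limit
        limit_conn total_conn 2000;            # server-wide concurrent connection limit
        proxy_pass http://my_server/mylocation/;
    }
}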

f
--
Francis Daly francis@daoine.org
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Re: How to control the total requests in Ngnix

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Return 408 to ELB

I am running into an issue, that I believe was documented here (https://trac.nginx.org/nginx/ticket/1005).

Essentially, I am seeing alerts because our ELBs are sending 504s back to clients with no backend information attached, but when I look through our nginx request logs, I see that we "should have" sent them a 408. However, it appears that nginx is just closing the connection.

We are using keep-alive connections, and I was looking at using the reset_timedout_connection parameter, but based on the documentation it doesn't seem like this will help.

Is there a way to actually send a 408 back to the client using nginx and ELBs?

Re: Return 408 to ELB

Hello!

On Thu, Nov 30, 2017 at 02:02:27PM -0500, reverson wrote:

> I am running into an issue, that I believe was documented here
> (https://trac.nginx.org/nginx/ticket/1005).
>
> Essentially, I am seeing alerts as our ELBs are sending 504s back to clients
> with no backend information attached, but when I look through our nginx
> request logs, I see that we "should have" sent them a 408. However, it
> appears that nginx is just closing the connection.
>
> We are using keep-alive connections, and I was looking at using the
> reset_timedout_connection parameter, but based on the documentation it
> doesn't seem like this will help.

Note that the only issue here is that the client sees a 504
instead of a 408. If these are real clients, you may want to use a
larger client_body_timeout and rely on the ELB timeouts instead.
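As a hedged sketch of that suggestion, assuming the ELB idle timeout is 60s (a hypothetical value), keep nginx's client-facing timeouts above it so the ELB times the client out first:

http {
    client_header_timeout 70s;   # longer than the ELB idle timeout
    client_body_timeout   70s;   # likewise, so nginx does not give up first
    keepalive_timeout     75s;   # keep idle keepalive connections open longer than the ELB does
}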

> Is there a way to actually send a 408 back to the client using nginx and
> ELBs?

No.

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: How to control the total requests in Ngnix

So what exactly are you trying to protect against?
Against “bad people” or “my website is busier than I think I can handle?”

Sent from my iPhone

> On Nov 30, 2017, at 6:52 AM, "tongshushan@migu.cn" <tongshushan@migu.cn> wrote:
>
> a limit of two connections per address is just an example.
> What does 2000 requests mean? Is that per second? Yes, it's QPS.
>
> 童树山
> MIGU Video Technology Co., Ltd., R&D Department
> Mobile:13818663262
> Telephone:021-51856688(81275)
> Email:tongshushan@migu.cn
>
> From: Gary
> Sent: 2017-11-30 17:44
> To: nginx
> Subject: Re: Re: How to control the total requests in Ngnix
> I think a limit of two connections per address is too low. I know that tip pages suggest a low limit in so-called anti-DDOS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPV4 address space for their millions of users. This is based on my logs. I used to have a limit of 10 and it was reached quite often just from corporate users.
>
> The 10 per second rate is fine, and probably about as low as you should go.
>
> What does 2000 requests mean? Is that per second?
>
>
> From: tongshushan@migu.cn
> Sent: November 30, 2017 1:14 AM
> To: nginx@nginx.org
> Reply-to: nginx@nginx.org
> Subject: Re: How to control the total requests in Ngnix
>
> Additional: the total requests will be sent from different client ips.
>
> Tong
>
> From: tongshushan@migu.cn
> Sent: 2017-11-30 17:12
> To: nginx
> Subject: How to control the total requests in Ngnix
> Hi guys,
>
> I want to use nginx to protect my system, to allow at most 2000 requests to be sent to my service (http location).
> The configs below only limit per client IP; they do not control the total number of requests.
> ##########method 1##########
>
> limit_conn_zone $binary_remote_addr zone=addr:10m;
> server {
> location /mylocation/ {
> limit_conn addr 2;
> proxy_pass http://my_server/mylocation/;
> proxy_set_header Host $host:$server_port;
> }
> }
>
> ##########method 2##########
>
> limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
> server {
> location /mylocation/ {
> limit_req zone=one burst=5 nodelay;
> proxy_pass http://my_server/mylocation/;
> proxy_set_header Host $host:$server_port;
> }
> }
>
>
>
> How can I do it?
>
>
> Tong
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: HAProxy - Nginx - Wordpress

I have more information.

Looking at the page source of the homepage, I see the following error messages:

Mixed Content: The page at 'https://mon.site.fr/wp-admin/install.php' was loaded over HTTPS, but requested an insecure stylesheet 'http://mon.site.fr/wp-includes/css/buttons.min.css?ver=4.9'. This request has been blocked; the content must be served over HTTPS.
install.php:9 Mixed Content: The page at 'https://mon.site.fr/wp-admin/install.php' was loaded over HTTPS, but requested an insecure stylesheet 'http://mon.site.fr/wp-admin/css/install.min.css?ver=4.9'. This request has been blocked; the content must be served over HTTPS.

I tried to add the following lines to the WordPress wp-config.php file, but that did not solve my problem.

if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    $_SERVER['HTTPS'] = 'on';
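Another angle, as a hedged sketch: whichever proxy terminates TLS has to send X-Forwarded-Proto, and nginx can translate it into the HTTPS FastCGI parameter that WordPress checks (the map name and socket path below are illustrative):

map $http_x_forwarded_proto $fastcgi_https {
    default "";
    https   on;
}

server {
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param HTTPS $fastcgi_https;        # so $_SERVER['HTTPS'] is "on" behind TLS
        fastcgi_pass  unix:/run/php/php-fpm.sock;  # hypothetical php-fpm socket
    }
}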

Any ideas?

Re: How to control the total requests in Ngnix

Here is a log of real life IP limiting with a 30 connection limit:
86.184.152.14 British Telecommunications PLC
8.37.235.199 Level 3 Communications Inc.
130.76.186.14 The Boeing Company

security.5.bz2:Nov 29 20:50:53 theranch kernel: ipfw: 5005 drop session type 40 86.184.152.14 58714 -> myip 80, 34 too many entries
security.6.bz2:Nov 29 16:01:31 theranch kernel: ipfw: 5005 drop session type 40 8.37.235.199 10363 -> myip 80, 42 too many entries
above repeated twice
security.8.bz2:Nov 29 06:39:15 theranch kernel: ipfw: 5005 drop session type 40 130.76.186.14 34056 -> myip 80, 31 too many entries
above repeated 18 times

I have an Alexa rating around 960,000. Hey, at least I made it to the top one million websites. But my point is that even with a limit of 30, I'm kicking out readers.

Look at the nature of the IPs. British Telecom is one of those huge ISPs where I guess different users are sharing the same IP. (Not sure.) Level 3 is the provider at many Starbucks, besides being a significant traffic carrier. Boeing has decent IP space, but maybe only a few IPs per facility. Who knows.

My point is if you set the limit at two, that is way too low.

The only real way to protect from DDOS is to use a commercial reverse proxy. I don't think limiting connections in Nginx (or in the firewall) will stop a real attack. It will probably stop some kid in his parents' basement. But today you can rent DDOS attacks on the dark web.

If you really want to improve performance of your server, do severe IP filtering at the firewall. Limit the number of search engines that can read your site. Block major hosting companies and virtual private servers. There are no eyeballs there. Just VPNs (who can drop the VPN if they really want to read your site) and hackers. Easily half the internet traffic is bots.

Per some discussions on this list, it is best not to block using nginx, but rather use the firewall. Nginx parses the http request even if blocking the IP, so the CPU load isn't insignificant. As an alternative, you can use a reputation based blocking list. (I don't use one on web servers, just on email servers.)

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Re: How to control the total requests in Ngnix

my website is busier than I think I can handle



Tong

From: Peter Booth
Date: 2017-12-01 06:25
To: nginx
Subject: Re: How to control the total requests in Ngnix
So what exactly are you trying to protect against?
Against “bad people” or “my website is busier than I think I can handle?”

Sent from my iPhone

On Nov 30, 2017, at 6:52 AM, "tongshushan@migu.cn" <tongshushan@migu.cn> wrote:

a limit of two connections per address is just an example.
What does 2000 requests mean? Is that per second? Yes, it's QPS.



童树山
MIGU Video Technology Co., Ltd., R&D Department
Mobile:13818663262
Telephone:021-51856688(81275)
Email:tongshushan@migu.cn

From: Gary
Sent: 2017-11-30 17:44
To: nginx
Subject: Re: Re: How to control the total requests in Ngnix
I think a limit of two connections per address is too low. I know that tip pages suggest a low limit in so-called anti-DDOS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPV4 address space for their millions of users. This is based on my logs. I used to have a limit of 10 and it was reached quite often just from corporate users.

The 10 per second rate is fine, and probably about as low as you should go.

What does 2000 requests mean? Is that per second?


From: tongshushan@migu.cn
Sent: November 30, 2017 1:14 AM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: How to control the total requests in Ngnix

Additional: the total requests will be sent from different client ips.



Tong

From: tongshushan@migu.cn
Sent: 2017-11-30 17:12
To: nginx
Subject: How to control the total requests in Ngnix
Hi guys,

I want to use nginx to protect my system, to allow at most 2000 requests to be sent to my service (http location).
The configs below only limit per client IP; they do not control the total number of requests.
##########method 1##########

limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
    location /mylocation/ {
        limit_conn addr 2;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}

##########method 2##########

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
server {
    location /mylocation/ {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}



How can I do it?




Tong
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Re: How to control the total requests in Ngnix

I configured it as below:
limit_req_zone "all" zone=all:100m rate=2000r/s;
limit_req zone=all burst=100 nodelay;
but when testing, I used a tool to send requests at about 486.1 QPS (not reaching 2000) and got many 503 errors; the error info is below:

2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost"

Why is the excess 101.000? I set the rate to 2000r/s.
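My reading, as a hedged sketch: with nodelay, burst=100 is the allowance for momentary spikes above the 2000r/s average, so a load tool that fires many requests at the same instant can exceed it even at 486 QPS overall; a larger burst would absorb such spikes (the number below is only an example):

limit_req_zone "all" zone=all:100m rate=2000r/s;
limit_req zone=all burst=2000 nodelay;   # allow spikes of up to 2000 requests in excess of the average rate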



童树山
MIGU Video Technology Co., Ltd., R&D Department
Mobile:13818663262
Telephone:021-51856688(81275)
Email:tongshushan@migu.cn

From: Francis Daly
Date: 2017-12-01 02:38
To: nginx
Subject: Re: Re: How to control the total requests in Ngnix
On Thu, Nov 30, 2017 at 08:04:41PM +0800, tongshushan@migu.cn wrote:

Hi there,

> what is the same "key" for all requests from different client IPs for limit_conn_zone/limit_req_zone? I have no idea about this.

Any $variable might be different in different connections. Any fixed
string will not be.

So:

limit_conn_zone "all" zone=all...

for example.

f
--
Francis Daly francis@daoine.org
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: How to control the total requests in Ngnix

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Re: How to control the total requests in Ngnix

I sent the test requests from only one server.



童树山
MIGU Video Technology Co., Ltd., R&D Department
Mobile:13818663262
Telephone:021-51856688(81275)
Email:tongshushan@migu.cn

From: Gary
Date: 2017-12-01 12:17
To: nginx
Subject: Re: How to control the total requests in Ngnix
I thought the rate is per IP address, not for the whole server.

From: tongshushan@migu.cn
Sent: November 30, 2017 7:18 PM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: Re: How to control the total requests in Ngnix

I configured it as below:
limit_req_zone "all" zone=all:100m rate=2000r/s;
limit_req zone=all burst=100 nodelay;
but when testing, I used a tool to send requests at about 486.1 QPS (not reaching 2000) and got many 503 errors; the error info is below:

2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost"

Why is the excess 101.000? I set the rate to 2000r/s.



童树山
MIGU Video Technology Co., Ltd., R&D Department
Mobile:13818663262
Telephone:021-51856688(81275)
Email:tongshushan@migu.cn

From: Francis Daly
Date: 2017-12-01 02:38
To: nginx
Subject: Re: Re: How to control the total requests in Ngnix
On Thu, Nov 30, 2017 at 08:04:41PM +0800, tongshushan@migu.cn wrote:

Hi there,

> what is the same "key" for all requests from different client IPs for limit_conn_zone/limit_req_zone? I have no idea about this.

Any $variable might be different in different connections. Any fixed
string will not be.

So:

limit_conn_zone "all" zone=all...

for example.

f
--
Francis Daly francis@daoine.org
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Re: How to control the total requests in Ngnix

I sent the test requests from only one client.




童树山
MIGU Video Technology Co., Ltd., R&D Department
Mobile:13818663262
Telephone:021-51856688(81275)
Email:tongshushan@migu.cn

From: Gary
Date: 2017-12-01 12:17
To: nginx
Subject: Re: How to control the total requests in Ngnix
I thought the rate is per IP address, not for the whole server.

From: tongshushan@migu.cn
Sent: November 30, 2017 7:18 PM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: Re: How to control the total requests in Ngnix

I configured it as below:
limit_req_zone "all" zone=all:100m rate=2000r/s;
limit_req zone=all burst=100 nodelay;
but when testing, I used a tool to send requests at about 486.1 QPS (not reaching 2000) and got many 503 errors; the error info is below:

2017/12/01 11:08:29 [error] 26592#37196: *15466 limiting requests, excess: 101.000 by zone "all", client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/inventory/aquire?productId=product1 HTTP/1.1", host: "localhost"

Why is the excess 101.000? I set the rate to 2000r/s.



童树山
MIGU Video Technology Co., Ltd., R&D Department
Mobile:13818663262
Telephone:021-51856688(81275)
Email:tongshushan@migu.cn

From: Francis Daly
Date: 2017-12-01 02:38
To: nginx
Subject: Re: Re: How to control the total requests in Ngnix
On Thu, Nov 30, 2017 at 08:04:41PM +0800, tongshushan@migu.cn wrote:

Hi there,

> what is the same "key" for all requests from different client IPs for limit_conn_zone/limit_req_zone? I have no idea about this.

Any $variable might be different in different connections. Any fixed
string will not be.

So:

limit_conn_zone "all" zone=all...

for example.

f
--
Francis Daly francis@daoine.org
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Unix socket and fastcgi.

Moreover, I noticed the backend getting stuck: a request comes in, is passed to php, and that's it. The php worker looks like it is working, but the user gets a 502 (after a few seconds of waiting). After disabling it, the problem was no longer observed. This behavior was chaotic, and there was no way to understand what exactly was causing it.

Re: Multiple Cache Manager Processes or Threads

Thank you for your answer,
I am using an old version (1.8.1).
I will try to upgrade to 1.12 and check if it solves my problem.

lua code in log_by_lua_file not executed when the upstream server is down

The nginx.conf is as below:

upstream my_server {
    server localhost:8095;
    keepalive 2000;
}

location /private/rush2purchase/ {
    limit_conn addr 20;
    proxy_pass http://my_server/private/rush2purchase/;
    proxy_set_header Host $host:$server_port;
    rewrite_by_lua_file D:/tmp/lua/draw_r.lua;
    log_by_lua_file D:/tmp/lua/draw_decr.lua;
}

When I send a request to http://localhost/private/rush2purchase/, it works fine while the upstream server is up,
but when I shut down the upstream server (port 8095), I find that the code in log_by_lua_file (draw_decr.lua) is not executed.

info in nginx access.log:
127.0.0.1 - - [01/Dec/2017:21:03:20 +0800] "GET /private/rush2purchase/ HTTP/1.1" 504 558 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3236.0 Safari/537.36"

error message in nginx error.log:
2017/12/01 21:02:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http://[::1]:8095/private/rush2purchase/", host: "localhost"
2017/12/01 21:03:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http://127.0.0.1:8095/private/rush2purchase/", host: "localhost"

How to fix it?




Tong
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Unix socket and fastcgi.

What caused it was the use of
http://php.net/manual/ru/function.fastcgi-finish-request.php
in php. If you enable keepalive in the upstream and set
http://nginx.org/ru/docs/http/ngx_http_fastcgi_module.html#fastcgi_keep_conn
to on, then scripts that use fastcgi_finish_request get a 502.
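For context, a minimal sketch of the combination being described (upstream keepalive plus fastcgi_keep_conn; the names and socket path are illustrative) — according to the post, PHP scripts that call fastcgi_finish_request() then start returning 502s:

upstream php_backend {
    server unix:/run/php-fpm.sock;   # hypothetical php-fpm socket
    keepalive 8;                     # keep idle connections to php-fpm open
}

server {
    location ~ \.php$ {
        include           fastcgi_params;
        fastcgi_pass      php_backend;
        fastcgi_keep_conn on;        # reuse the FastCGI connection instead of closing it
    }
}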

On December 1, 2017 at 10:13, skeletor <nginx-forum@forum.nginx.org>
wrote:

> Moreover, I noticed the backend getting stuck: a request comes in,
> is passed to php, and that's it. The php worker looks like it is working, but the
> user gets a 502 (after a few seconds of waiting). After disabling it, the
> problem was no longer observed. This behavior was chaotic, and there was no way
> to understand what exactly was causing it.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?21,277593,277615#msg-277615
>
> _______________________________________________
> nginx-ru mailing list
> nginx-ru@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-ru
>
_______________________________________________
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru