Channel: Nginx Forum

HAProxy - Nginx - WordPress

Hi,

I am replacing httpd with Nginx on my platform (httpd, Nginx and WordPress), but I have hit a blocking problem.

My architecture is as follows:

INTERNET --------https------> HAPROXY (SSL) -------> http ------> Nginx -------> WordPress.

I have installed and configured HAProxy and Nginx. Both work, and my site is served over HTTPS. SSL is handled by HAProxy; Nginx does not do SSL.

I then downloaded and unpacked WordPress. To run the installer, I opened the homepage in a browser, and that is where the problems begin.

On the WordPress installation page, the CSS and JavaScript files are not loaded, whereas the same architecture works with httpd in place of Nginx.

I think the problem comes from Nginx (some option I need to set?).

Any ideas?

My configurations:

#####HAProxy
frontend https-in
    bind X.X.X.X:443 ssl crt /etc/pki/certs
    mode http
    option httplog

    acl my_site hdr(host) -i mon.site.fr
    use_backend wp if my_site

    rspadd Strict-Transport-Security:\ max-age=15768000

backend wp
    mode http
    option http-server-close
    option forwardfor
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server wp_1 X.X.X.X:8080

#####NGinx
server {

    listen *:8080;
    server_name mon.site.fr;

    root /var/www/html/site1;

    access_log /var/log/nginx/site1.access.log;
    error_log /var/log/nginx/site1.error.log;

    location / {
        index index.php index.html;
        try_files $uri $uri/ /index.php?$args;
    }

    if (!-e $request_filename) {
        rewrite ^.*$ /index.php last;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location = /robots.txt {
        log_not_found off;
        access_log off;
        allow all;
    }

    location ~ /\. {
        deny all;
    }

    location ~* /(?:uploads|files)/.*\.php$ {
        deny all;
    }

    location ~* \.(html|css|js|png|jpg|jpeg|gif|ico|svg|eot|woff|ttf)$ {
        expires max;
        log_not_found off;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php70-fpm.mon.site.fr.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 16k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;

        include fastcgi_params;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/sites-enabled/*.conf;

    # Real IP
    set_real_ip_from X.X.X.X;
    real_ip_header X-Forwarded-For;

    gzip on;
    gzip_disable "msie6";

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
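One direction worth noting for this symptom (mixed-content blocking when TLS terminates in front of WordPress) is to pass the proxy's scheme through to PHP so WordPress generates https:// asset URLs. A minimal sketch, not from this thread, assuming the X-Forwarded-Proto header added by HAProxy above reaches Nginx unchanged ($fastcgi_https is a name made up for this example):

```nginx
# In the http {} context: translate the X-Forwarded-Proto header set by
# HAProxy into a value usable as the FastCGI HTTPS parameter.
map $http_x_forwarded_proto $fastcgi_https {
    default "";
    https   on;
}

# Then inside the "location ~ \.php$" block:
#     fastcgi_param HTTPS $fastcgi_https;
```

With this, PHP's $_SERVER['HTTPS'] is set for proxied HTTPS requests, and WordPress should emit https:// URLs for its CSS and JavaScript.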

Thanx

Overridable header values (with map?)

We're using nginx for several different types of servers, but we're trying
to unify the configuration to minimize duplicated code. One stumbling block is
headers. For most requests, we want to add a set of standard headers:

# headers.conf:

add_header Cache-Control $cache_control;
add_header X-Robots-Tag $robots_tag always;
add_header X-Frame-Options $frame_options;

add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options nosniff;
# several more...

Many of the headers are the same for all requests, but the first three are
tweaked for specific resources or target servers.

The first approach I took was to define two files:

# header-vars.conf:

# Values for the $cache_control header. By default, we use $one_day.
set $no_cache "max-age=0, no-store, no-cache, must-revalidate";
set $one_day "public, max-age=86400";
set $one_year "public, max-age=31536000";
set $cache_control $one_day;

# To allow robots, override this variable using `set $robots_tag all;`.
set $robots_tag "noindex, nofollow, nosnippet, noarchive";
set $frame_options "SAMEORIGIN";


...and the headers.conf above. Then, in the appropriate context (either a
server or a location block), different servers would include the files as
follows:

include header-vars.conf;
include headers.conf;

That would give them all of our defaults. If the specific application or
context needs to tweak the caching and robots, it might do something like
this:

include header-vars.conf;
set $cache_control $no_cache;
set $robots_tag all;
include headers.conf;


This was fine, but I recently came across an interesting use of map
https://serverfault.com/a/598106/405305 that I thought I could generalize
to simplify this pattern. My idea was to do something like:

# header-vars.conf:

map $robots $robots_tag {

    # Disallowed
    default "noindex, nofollow, nosnippet, noarchive";
    off     "noindex, nofollow, nosnippet, noarchive";

    # Allowed
    on      all;
}

map $frames $frame_options {

    # Allow framing only from the same origin (URL).
    default "SAMEORIGIN";

    # This isn't a real value, but it will cause the header to be ignored.
    allow   "ALLOW";
}

map $cache $cache_control {

    # no caching
    off     "max-age=0, no-store, no-cache, must-revalidate";

    # one day
    default "public, max-age=86400";
    1d      "public, max-age=86400";

    # one year
    1y      "public, max-age=31536000";
}


I thought this would allow me to include both header-vars.conf and
headers.conf in the http block. Then, within the server or location blocks,
I wouldn't have to do anything to get the defaults. Or, to tweak robots and
caching:

set $cache off;
set $robots on;

Since the variables wouldn't be evaluated till the headers were actually
added, I thought this would work well and simplify things a lot.
Unfortunately, I was mistaken that I would be able to use an undefined
variable in the first position of a map directive (I thought it would just
be empty):

unknown "robots" variable

Of course, I can't set a default value for that variable since I'm
including header-vars.conf at the http level. I'd rather not need to
include defaults in every server (there are many).

Does anyone have any suggestions for how I can better solve this problem?

Thanks!
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Migrating from Varnish

Would it be possible to use the Redis module to track cache? For example, I
would like to log each "new" cache hit, and include the URL, cache
expiration time, and possibly the file it's stored in?

On Nov 23, 2017 23:51, "itpp2012" <nginx-forum@forum.nginx.org> wrote:

> Andrei Wrote:
> -------------------------------------------------------
> > Thanks for the tip. Have you ran into any issues as Maxim mentioned?
> >
>
> Not yet.
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,277462,277487#msg-277487
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

Re: Set Expires Header only if upstream has not already set an Expires

Hello Francis,

> > Howto set expires only if upstream does not have set an expires?

> * Francis Daly <francis@daoine.org> [2017-11-23 00:26]:

> You can set a value based on $upstream_http_expires --

> { default off; "" 7d; }

> in the appropriate "map" should set your Expires time to 7 days from
> now if there is not an Expires: header from the upstream.

thanks a lot. That solved my problem. I used the same:

map $upstream_http_expires $expires {
    default off;
    ""      7d;
}

server {
    ....
    expires $expires;
}

Works like a charm. Thank you again for solving my problem. I thought about
using a map but missed the 'off' possibility and its behaviour.

Cheers,
Thomas

Conditional Logging on more than one constraint

Hi,

So the conditional logging example on the nginx website is this, which I've tried, and it works as advertised.

-----------
map $status $loggable {
    ~^[23]  0;
    default 1;
}

access_log /path/to/access.log combined if=$loggable;
-----------

What happens if I also want to stop logging for clients with the User Agent "Zabbix"? If I try this ...

-----------------
map $status $loggable {
    ~^[23]  0;
    default 1;
}

map $http_user_agent $loggable {
    Zabbix  0;
    default 1;
}

access_log /path/to/access.log combined if=$loggable;
---------------

... then the first map is superseded by the second, i.e. $status is ignored and only $http_user_agent is used to decide what is logged.

What's the trick for setting $loggable=0 in both cases?

Re: Conditional Logging on more than one constraint

Use a third map that combines the other map variables into a single new variable (an OR or an AND). Note that the left-hand side of a map entry is a literal pattern, not a variable, so the variables have to go into the map's source string instead:

map $status $logv1 {
    ~^[23]  0;
    default 1;
}

map $http_user_agent $logv2 {
    Zabbix  0;
    default 1;
}

# AND: log only when both flags are 1; use different patterns for OR.
map "$logv1$logv2" $loggable {
    "11"    1;
    default 0;
}

access_log /path/to/access.log combined if=$loggable;

Several independent nginx instances

On 29.11.2017 20:47, Maxim Dounin wrote:

> for example, when several independent nginx
> instances are run on one machine. Say, in
> FreeBSD ports this is supported out of the box by the stock
> rc-scripts.

Incidentally, on Linux this is also supported out of the box.

But perhaps hardly anyone on Linux needs this capability,
given that it still has not appeared in the official nginx builds for Linux?

/etc/systemd/system/nginx@.service

[Unit]
Description=nginx %I
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/nginx-%i.pid
ExecStart=/usr/sbin/nginx -c /etc/nginx/%i.conf -g 'pid /var/run/nginx-%i.pid;'
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target

====================================================================

/etc/nginx/static.conf

events {
    worker_connections 1024;
}

http {
    server {
        listen 8001;
        return 200 "static\n";
    }
}

====================================================================

/etc/nginx/dynamic.conf

events {
    worker_connections 1024;
}

http {
    server {
        listen 8002;
        return 200 "dynamic\n";
    }
}

====================================================================

# systemctl daemon-reload
# systemctl start nginx
# systemctl start nginx@static
# systemctl start nginx@dynamic

# curl localhost:8001
static

# curl localhost:8002
dynamic

# ls -1 /var/run/nginx*
/var/run/nginx-dynamic.pid
/var/run/nginx-static.pid
/var/run/nginx.pid

--
Best regards,
Gena

_______________________________________________
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

How to control the total requests in Nginx

Hi guys,

I want to use Nginx to protect my system, allowing at most 2000 requests to be sent to my service (an http location).
The configs below only limit each client IP; they do not control the total number of requests.
##########method 1##########

limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
    location /mylocation/ {
        limit_conn addr 2;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}

##########method 2##########

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
server {
    location /mylocation/ {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}



How can I do it?




Tong

Re: How to control the total requests in Nginx

Additionally: the total requests will come from different client IPs.



Tong

From: tongshushan@migu.cn
Sent: 2017-11-30 17:12
To: nginx
Subject: How to control the total requests in Nginx
Hi guys,

I want to use Nginx to protect my system, allowing at most 2000 requests to be sent to my service (an http location).
The configs below only limit each client IP; they do not control the total number of requests.
##########method 1##########

limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
    location /mylocation/ {
        limit_conn addr 2;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}

##########method 2##########

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
server {
    location /mylocation/ {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}



How can I do it?




Tong

Re: How to control the total requests in Nginx


Re: How to control the total requests in Nginx

On Thu, Nov 30, 2017 at 05:12:18PM +0800, tongshushan@migu.cn wrote:

Hi there,

> I want to use Nginx to protect my system, allowing at most 2000 requests to be sent to my service (an http location).
> The configs below only limit each client IP; they do not control the total number of requests.

> ##########method 1##########
>
> limit_conn_zone $binary_remote_addr zone=addr:10m;

http://nginx.org/r/limit_conn_zone

If "key" is "$binary_remote_addr", it will be the same for the same
client ip, and different for different client ips; the limits apply to
each individual value of client ip (strictly: to each individual value of
"key").

If "key" is (for example) "fixed", it will be the same for every
connection, and so the limits will apply for all connections.

Note: that limits concurrent connections, not requests.

> ##########method 2##########
>
> limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

http://nginx.org/r/limit_req_zone

Again, set "key" to something that is the same for all requests, and
the limit will apply to all requests.
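As a concrete illustration (a sketch, not part of the original mail; the zone name and burst value are made up), a constant key that caps the aggregate request rate for all clients might look like:

```nginx
# "fixed" is a literal string, so every request shares one bucket and
# the 2000r/s limit applies to the total traffic, not to each client IP.
limit_req_zone fixed zone=total:10m rate=2000r/s;

server {
    location /mylocation/ {
        limit_req zone=total burst=100;
        proxy_pass http://my_server/mylocation/;
    }
}
```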

f
--
Francis Daly francis@daoine.org

Re: Moving SSL termination to the edge increased the instance of 502 errors

Since the upstream has now changed TCP ports, do also check whether it is a
firewall or network-buffer issue on the new port.

On Wed, Nov 29, 2017 at 11:42 PM, Peter Booth <peter_booth@me.com> wrote:

> There are many things that *could* cause what you’re seeing - say at least
> eight. You might be lucky and guess the right one- but probably smarter to
> see exactly what the issue is.
>
> Presumably you changed your upstream webservers to do this work, replacing
> ssl with unencrypted connections? Do you have sar data showing #tcp
> connections before and after the change? Perhaps every request is
> negotiating SSL now?
> What if you add another nginx instance that doesn’t use ssl at all (just
> as a test) - does that also have 502s?. You probably have data you need to
> isolate
>
> Sent from my iPhone
>
> > On Nov 29, 2017, at 8:05 AM, Michael Ottoson <michael.ottoson@cri.com>
> wrote:
> >
> > Thanks, Maxim.
> >
> > That makes a lot of sense. However, the problem started at exactly the
> same time we moved SSL termination. There were no changes to the
> application. It is unlikely to be a mere coincidence - but it could be.
> >
> > We were previously using HAPROXY for load balancing (well, the company
> we inherited this from did) and the same happened when they tried moving
> SSL termination.
> >
> > There is a reply to my question on serverfault, suggesting increasing
> > keepalives (https://www.nginx.com/blog/load-balancing-with-nginx-plus-part2/#keepalive).
> > This is because moving SSL increases the number
> > of TCP connects. I'll give that a try and report back.
> >
> > -----Original Message-----
> > From: nginx [mailto:nginx-bounces@nginx.org] On Behalf Of Maxim Dounin
> > Sent: Wednesday, November 29, 2017 7:43 AM
> > To: nginx@nginx.org
> > Subject: Re: Moving SSL termination to the edge increased the instance
> of 502 errors
> >
> > Hello!
> >
> >> On Wed, Nov 29, 2017 at 04:27:37AM +0000, Michael Ottoson wrote:
> >>
> >> Hi All,
> >>
> >> We installed nginx as load balancer/failover in front of two upstream
> web servers.
> >>
> >> At first SSL terminated at the web servers and nginx was configured as
> TCP passthrough on 443.
> >>
> >> We rarely experienced 502s, and when we did it was likely due to
> >> tuning/tweaking.
> >>
> >> About a week ago we moved SSL termination to the edge. Since then
> we've been getting daily 502s. A small percentage - never reaching 1%.
> But with ½ million requests per day, we are starting to get complaints.
> >>
> >> Stranger: the percentage seems to be rising.
> >>
> >> I have more details and a pretty picture here:
> >>
> >> https://serverfault.com/questions/885638/moving-ssl-termination-to-the-edge-increased-the-instance-of-502-errors
> >>
> >>
> >> Any advice how to squash those 502s? Should I be worried nginx is
> leaking?
> >
> > First of all, you have to find the reason for these 502 errors.
> > Looking into the error log is a good start.
> >
> > As per provided serverfault question, you see "no live upstreams"
> > errors in logs. These errors mean that all configured upstream servers
> were disabled due to previous errors (see
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails),
> > that is, these errors are just a result of previous errors. You have to
> find out real errors, they should be in the error log too.
> >
> > --
> > Maxim Dounin
> > http://mdounin.ru/
> > _______________________________________________
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> > _______________________________________________
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>



--
*Anoop P Alias*

Re: Documentation, the pid directive

On 29.11.2017 20:47, Maxim Dounin wrote:

>> -<default>nginx.pid</default>
>> +<default>Depends on the nginx build parameters</default>
>> <context>main</context>

> That is extraordinarily informative, and probably all one can do
> here is realize that some ideas are simply bad.

From my point of view,
the information in the documentation should first of all be accurate.

The "default value" is the value a directive takes
if it is absent from the config or commented out.

If the documentation explicitly states one particular default value,
while the directive actually ends up with a completely different one,
then, in my view, that is a fairly serious error in the documentation
which should be fixed one way or another.

The ideal option, in my view, would be this:

<default><link doc="default_values.xml">Defined at compile
time</link>.</default>

As HTML it would look like this:

Default: Defined at compile time.

where the text "Defined at compile time" would be a hyperlink.

And then, in the default_values.html document, describe for all 11 such
directives how to find out their default value with the nginx -V command.

But xmllint complains that
Element default was declared #PCDATA but contains non text nodes

>> +The default value is set when nginx is configured,
>> via the <literal>configure --pid-path</literal> parameter.
>> +The default value of the <literal>pid</literal> directive can be found
>> by running <literal>nginx -V</literal>
>> +and looking at the value of the <literal>configure --pid-path</literal> parameter.

> That bears little resemblance to what one would want to see in the
> description of the directive.

Then propose a better way to fix these errors in the documentation.

>> +<para>
>> +It is not recommended to set the <literal>pid</literal>
>> directive explicitly in the configuration file.
>> </para>
>
> That does not match reality. A PID file can and should be
> set in many situations, for example when several independent
> nginx instances are run on one machine. Say, in
> FreeBSD ports this is supported out of the box by the stock
> rc-scripts.

FreeBSD ports use the command-line argument
nginx -g \"pid ${pidfile};\":

https://github.com/freebsd/freebsd-ports/blob/master/www/nginx/files/nginx.in#L64

If the pid directive is specified both on the command line and in the config,
nginx will complain about such a config at startup:

nginx: [emerg] "pid" directive is duplicate in /etc/nginx/nginx.conf:2

Could you say more about the "many situations"
in which it is necessary to set the pid directive in the nginx config?

From my point of view, users explicitly defining the pid directive
in the nginx configuration file will do more harm than good.

Some of the errors
systemd: PID file /var/run/nginx.pid not readable (yet?) after start.
happen precisely because the unit file has one PIDFile= value
while the configuration file specifies a different value for the pid directive.

Another portion of the errors
systemd: PID file /var/run/nginx.pid not readable (yet?) after start.
happens because nginx is, for some reason, not compatible with systemd.

You know these messages are "harmless", and even that is true only
when PIDFile= and the pid directive are set to the same value.

But nginx users do not know this! And they keep searching, in vain,
for a way to get these error messages out of their log files.

Even though the problem could easily be solved on the nginx side with a one-line patch.
http://mailman.nginx.org/pipermail/nginx-devel/2017-November/010658.html

--
Best regards,
Gena


Re: How to control the total requests in Nginx

A limit of two connections per address was just an example.

> What does 2000 requests mean? Is that per second?

Yes, it's QPS.



Tong Shushan
MIGU Video Technology Co., Ltd., R&D Department
Mobile: 13818663262
Telephone: 021-51856688 (81275)
Email: tongshushan@migu.cn

From: Gary
Sent: 2017-11-30 17:44
To: nginx
Subject: Re: How to control the total requests in Nginx
I think a limit of two connections per address is too low. I know that tip pages suggest a low limit in so-called anti-DDOS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPV4 address space for their millions of users. This is based on my logs. I used to have a limit of 10 and it was reached quite often just from corporate users.

The 10 per second rate is fine, and probably about as low as you should go.

What does 2000 requests mean? Is that per second?


From: tongshushan@migu.cn
Sent: November 30, 2017 1:14 AM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: How to control the total requests in Nginx

Additionally: the total requests will come from different client IPs.



Tong

From: tongshushan@migu.cn
Sent: 2017-11-30 17:12
To: nginx
Subject: How to control the total requests in Nginx
Hi guys,

I want to use Nginx to protect my system, allowing at most 2000 requests to be sent to my service (an http location).
The configs below only limit each client IP; they do not control the total number of requests.
##########method 1##########

limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
    location /mylocation/ {
        limit_conn addr 2;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}

##########method 2##########

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
server {
    location /mylocation/ {
        limit_req zone=one burst=5 nodelay;
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}



How can I do it?




Tong

Re: How to control the total requests in Nginx

Francis,
what would the same "key" be for all requests from different client IPs, for limit_conn_zone/limit_req_zone? I have no idea what to use here.



Tong

From: Francis Daly
Date: 2017-11-30 18:17
To: nginx
Subject: Re: How to control the total requests in Ngnix
On Thu, Nov 30, 2017 at 05:12:18PM +0800, tongshushan@migu.cn wrote:

Hi there,

> I want to use Nginx to protect my system, allowing at most 2000 requests to be sent to my service (an http location).
> The configs below only limit each client IP; they do not control the total number of requests.

> ##########method 1##########
>
> limit_conn_zone $binary_remote_addr zone=addr:10m;

http://nginx.org/r/limit_conn_zone

If "key" is "$binary_remote_addr", it will be the same for the same
client ip, and different for different client ips; the limits apply to
each individual value of client ip (strictly: to each individual value of
"key").

If "key" is (for example) "fixed", it will be the same for every
connection, and so the limits will apply for all connections.

Note: that limits concurrent connections, not requests.

> ##########method 2##########
>
> limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

http://nginx.org/r/limit_req_zone

Again, set "key" to something that is the same for all requests, and
the limit will apply to all requests.

f
--
Francis Daly francis@daoine.org

Unix socket and FastCGI

Hello!

Does it make sense to enable keepalive for connections to php-fpm over a unix socket?

I have just tried enabling it, using:

upstream e_php {
    server unix:/run/php-fpm-e.socket;
    keepalive 200;
}

location ~* \.php$ {
    fastcgi_pass e_php;
    fastcgi_keep_conn on;
}

Looking at the php-fpm pool status, the rate of change of the "accepted conn"
counter (about 500 cps on average) does not change compared with this configuration:

location ~* \.php$ {
    fastcgi_pass unix:/run/php-fpm-e.socket;
}

Am I doing or understanding something wrong?

Regards, Ivan.

Re: Overridable header values (with map?)

Hello!

On Thu, Nov 30, 2017 at 12:28:24AM -0500, Brandon Mintern wrote:

[...]

> This was fine, but I recently came across an interesting use of map
> https://serverfault.com/a/598106/405305 that I thought I could generalize
> to simplify this pattern. My idea was to do something like:
>
> # header-vars.conf:
>
> map $robots $robots_tag {
>
> # Disallowed
> default "noindex, nofollow, nosnippet, noarchive";
> off "noindex, nofollow, nosnippet, noarchive";
>
> # Allowed
> on all;
> }

[...]

> Unfortunately, I was mistaken that I would be able to use an undefined
> variable in the first position of a map directive (I thought it would just
> be empty):
>
> unknown "robots" variable
>
> Of course, I can't set a default value for that variable since I'm
> including header-vars.conf at the http level. I'd rather not need to
> include defaults in every server (there are many).
>
> Does anyone have any suggestions for how I can better solve this problem?

The error in question will only appear if you don't have the
variable defined at all, that is, it is not used anywhere in your
configuration. Using it at least somewhere will resolve the
error. That is, just add something like

set $robots off;

anywhere in your configuration as appropriate (for example, in the
default server{} block).

Once you are able to start nginx, you'll start getting
warnings when the variable is used uninitialized, e.g.:

.... [warn] ... using uninitialized "robots" variable ...

These warnings can be switched off using the
uninitialized_variable_warn directive, see
http://nginx.org/r/uninitialized_variable_warn.
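A minimal sketch of that suggestion (the catch-all server block is an illustration, not from the mail):

```nginx
# Hypothetical default server: the "set" exists only so the "$robots"
# variable is defined somewhere, which lets the map in header-vars.conf
# compile; individual servers can still override it with their own set.
server {
    listen 80 default_server;
    server_name _;
    set $robots off;
    return 444;
}
```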

--
Maxim Dounin
http://mdounin.ru/

Re: Unix socket and FastCGI

On Thursday 30 November 2017 17:05:34 Ivan wrote:
> Hello!
>
> Does it make sense to enable keepalive for connections to php-fpm over a unix socket?
>
[..]

Not much point, no.

--
Valentin Bartenev

Re: Documentation, the pid directive

Hello!

On Thu, Nov 30, 2017 at 01:40:57PM +0200, Gena Makhomed wrote:

> On 29.11.2017 20:47, Maxim Dounin wrote:
>
> >> -<default>nginx.pid</default>
> >> +<default>Depends on the nginx build parameters</default>
> >> <context>main</context>
>
> > That is extraordinarily informative, and probably all one can do
> > here is realize that some ideas are simply bad.
>
> From my point of view,
> the information in the documentation should first of all be accurate.
>
> The "default value" is the value a directive takes
> if it is absent from the config or commented out.

The default value is also the value the directive takes
if the corresponding build parameter is not given.

There is, incidentally, a flaw here that is quite easy to fix:
instead of "nginx.pid" it should say "logs/nginx.pid". Patch:

# HG changeset patch
# User Maxim Dounin <mdounin@mdounin.ru>
# Date 1512060425 -10800
# Thu Nov 30 19:47:05 2017 +0300
# Node ID 8f885a69374ddf67ff9400c7892f020b88f41839
# Parent 05f5bfdaffa3b71299ae378a37c2902c0e6825f1
Fixed the "pid" directive default value.

diff --git a/xml/en/docs/ngx_core_module.xml b/xml/en/docs/ngx_core_module.xml
--- a/xml/en/docs/ngx_core_module.xml
+++ b/xml/en/docs/ngx_core_module.xml
@@ -10,7 +10,7 @@
<module name="Core functionality"
link="/en/docs/ngx_core_module.html"
lang="en"
- rev="24">
+ rev="25">

<section id="example" name="Example Configuration">

@@ -404,7 +404,7 @@ the JIT support is enabled via the

<directive name="pid">
<syntax><value>file</value></syntax>
-<default>nginx.pid</default>
+<default>logs/nginx.pid</default>
<context>main</context>

<para>
diff --git a/xml/ru/docs/ngx_core_module.xml b/xml/ru/docs/ngx_core_module.xml
--- a/xml/ru/docs/ngx_core_module.xml
+++ b/xml/ru/docs/ngx_core_module.xml
@@ -10,7 +10,7 @@
<module name="Основная функциональность"
link="/ru/docs/ngx_core_module.html"
lang="ru"
- rev="24">
+ rev="25">

<section id="example" name="Пример конфигурации">

@@ -402,7 +402,7 @@ load_module modules/ngx_mail_module.so;

<directive name="pid">
<syntax><value>файл</value></syntax>
-<default>nginx.pid</default>
+<default>logs/nginx.pid</default>
<context>main</context>

<para>

[...]

> >> +The default value is set when nginx is configured,
> >> via the <literal>configure --pid-path</literal> parameter.
> >> +The default value of the <literal>pid</literal> directive can be found
> >> by running <literal>nginx -V</literal>
> >> +and looking at the value of the <literal>configure --pid-path</literal> parameter.
>
> > That bears little resemblance to what one would want to see in the
> > description of the directive.
>
> Then propose a better way to fix these errors in the documentation.

I proposed one back in the first message of this thread: add a note,
and say in it that the default value can be overridden
with the corresponding configure parameter. In fact, I also
asked Yaroslav, who maintains our documentation,
to look into this at the time, and to tidy up configure.html as well.

(Let me note in passing that the Apache documentation, say, does
roughly this for ServerRoot,
https://httpd.apache.org/docs/2.4/mod/core.html#serverroot. The
PidFile description, however, does not mention that its default value
can be overridden with "-D DEFAULT_PIDLOG=...". Enough of tolerating this!)

> >> +<para>
> >> +It is not recommended to set the <literal>pid</literal>
> >> directive explicitly in the configuration file.
> >> </para>
> >
> > That does not match reality. A PID file can and should be
> > set in many situations, for example when several independent
> > nginx instances are run on one machine. Say, in
> > FreeBSD ports this is supported out of the box by the stock
> > rc-scripts.
>
> FreeBSD ports use the command-line argument
> nginx -g \"pid ${pidfile};\":
>
> https://github.com/freebsd/freebsd-ports/blob/master/www/nginx/files/nginx.in#L64
>
> If the pid directive is specified both on the command line and in the config,
> nginx will complain about such a config at startup:
>
> nginx: [emerg] "pid" directive is duplicate in /etc/nginx/nginx.conf:2
>
> Could you say more about the "many situations"
> in which it is necessary to set the pid directive in the nginx config?
>
> From my point of view, users explicitly defining the pid directive
> in the nginx configuration file will do more harm than good.

If you want to go into the semantic differences between "-g" and the
configuration file proper: pid in the configuration file is usually
simpler and more convenient to set when you want to run several
nginx instances without scripts that can do this via "-g". Or when
you want to run the system-installed nginx as an unprivileged user -
for tests, for example.

[...]

> Another portion of the errors
> systemd: PID file /var/run/nginx.pid not readable (yet?) after start.
> happens because nginx is, for some reason, not compatible with systemd.
>
> You know these messages are "harmless", and even that is true only
> when PIDFile= and the pid directive are set to the same value.
>
> But nginx users do not know this! And they keep searching, in vain,
> for a way to get these error messages out of their log files.
>
> Even though the problem could easily be solved on the nginx side with a one-line patch.
> http://mailman.nginx.org/pipermail/nginx-devel/2017-November/010658.html

On this question I have already said everything I consider necessary,
and I do not plan to return to it. Thank you.

--
Maxim Dounin
http://mdounin.ru/

Multiple Cache Manager Processes or Threads

Hello,

I have an issue with the cache manager and the way I use it.
When I configure two different cache zones, one very large and one very fast, the cache manager cannot delete files quickly enough, which leads to a full partition.

For example:

proxy_cache_path /mnt/hdd/cache levels=1:2:2 keys_zone=cache_hdd:40g max_size=40000g inactive=5d;
proxy_cache_path /mnt/ram/cache levels=1:2 keys_zone=cache_ram:300m max_size=300g inactive=1h;

At the beginning, the RAM cache is correctly purged at around 40GB (+/- input bandwidth * 10 sec), but when the HDD cache begins to fill up, the RAM cache grows past 50GB. I think the cache manager is held up by the slowness of the filesystem/hardware.

I can work around this by running two nginx instances on the same machine, one configured as the RAM cache and the other as the HDD cache; but I wonder whether it would be possible to have one cache manager process per proxy_cache_path directive.

Thanks in advance.