Channel: Nginx Forum

Nginx reverse proxy

Hello,
I am trying to configure nginx installed on Ubuntu 16.10.
I have the following infrastructure:

wan
|
nginx reverse proxy with domain ssl.example.com
|
web server http.example.com

http.example.com points to the IP address of ssl.example.com.

When a client opens http://http.example.com, it should be redirected to https://http.example.com directly.
The client will establish SSL with my reverse proxy ssl.example.com (I have already installed a Let's Encrypt certificate).
My reverse proxy should then request http.example.com without SSL.

It's like Cloudflare.

So what configuration should I use?
Also, how can I load balance across two web servers behind the reverse proxy?
Thanks.
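For reference, a minimal sketch of such a setup (hostnames, certificate paths, and backend addresses are placeholders, not the poster's actual values):

# Redirect all plain-HTTP requests to HTTPS.
server {
    listen 80;
    server_name http.example.com;
    return 301 https://$host$request_uri;
}

# Two plain-HTTP backends behind the proxy; requests are balanced
# round-robin by default.
upstream backend {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

# TLS terminates here; the backends are reached over plain HTTP.
server {
    listen 443 ssl;
    server_name http.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}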

Multiple https website with IPv6

I am using nginx with multiple https sites on a single IPv4 address and a dedicated IPv6 address for each domain.

The problem I'm having is that I'm unable to redirect non-www to www without the vhosts conflicting.

Here is my setup:

[b]Default[/b]

[code]
server {
listen 80 default_server;
listen [::2]:80 default_server;
server_name localhost;
}
[/code]

[b]domain[/b]

[code]
server {
listen 80;
listen [::2]:80;
server_name domain.com www.domain.com;
return 301 https://www.domain.com$request_uri;
}

server {
listen 443 ssl http2;
listen [::2]:443 ssl http2;
server_name domain.com;
return 301 https://www.$server_name$request_uri;
}

server {
listen 443 default_server ssl http2;
listen [::2]:443 default_server ssl http2;
server_name www.domain.com;
}
[/code]

[b]domain 2[/b]

[code]
server {
listen 80;
listen [::3]:80;
server_name domain2.com www.domain2.com;
return 301 https://www.domain2.com$request_uri;
}

server {
listen 443 ssl http2;
listen [::3]:443 ssl http2;
server_name domain2.com;
return 301 https://www.$server_name$request_uri;
}

server {
listen 443 ssl http2;
listen [::3]:443 default_server ssl http2;
server_name www.domain2.com;
}
[/code]

So here's the problem

IPv4

https://www.domain.com ✔
https://domain.com ✔

http://www.domain.com ✔
http://domain.com ✔

https://www.domain2.com ✔
https://domain2.com ✗(NET::ERR_CERT_COMMON_NAME_INVALID - domain.com)

http://www.domain2.com ✔
http://domain2.com ✔

IPv6

https://www.domain.com ✔
https://domain.com ✔

http://www.domain.com ✔
http://domain.com ✔

https://www.domain2.com ✔
https://domain2.com ✔

http://www.domain2.com ✔
http://domain2.com ✔

Over IPv4 (https://domain2.com) the certificate of domain.com is served.

What's wrong with my config? If it works over IPv6, why not over IPv4, when it is the same config block?

Re: Multiple https website with IPv6

On Tue, Jan 02, 2018 at 01:40:20AM -0500, Kurogane wrote:

Hi there,

> I am using nginx with multiples https with a single IPv4 and dedicated IPv6
> for each domain.

Looking at your (edited) config...

> server {
> listen 443 ssl http2;
> server_name domain.com;
> return 301 https://www.$server_name$request_uri;
> }
>
> server {
> listen 443 default_server ssl http2;
> server_name www.domain.com;
> }

> server {
> listen 443 ssl http2;
> server_name domain2.com;
> return 301 https://www.$server_name$request_uri;
> }
>
> server {
> listen 443 ssl http2;
> server_name www.domain2.com;
> }

It looks to me like your question is "how do I run multiple https web
sites on a single IP address?".

If that is the case, then the modern answer is "use SNI".

http://nginx.org/en/docs/http/configuring_https_servers.html
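As a minimal sketch of that approach (certificate paths are placeholders), each name-based https server{} block carries its own certificate while sharing the same listen address:

server {
    listen 443 ssl;
    server_name www.domain.com;
    ssl_certificate     /path/to/www.domain.com.crt;
    ssl_certificate_key /path/to/www.domain.com.key;
}

server {
    listen 443 ssl;
    server_name www.domain2.com;
    ssl_certificate     /path/to/www.domain2.com.crt;
    ssl_certificate_key /path/to/www.domain2.com.key;
}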

> What's wrong with my config? If work on IPv6 why not in IPv4 is in same
> config block?

You have a dedicated IPv6 address. You have a shared IPv4 address.

It is not "IPv6 works, IPv4 fails"; it is "dedicated works, shared fails".

f
--
Francis Daly francis@daoine.org

Re: NGINX simply does not accept some percentage of connections

It's pure mystery... I put the fix off for later, but on New Year's Eve everything started working fine on its own, and it has not failed once since then.

Re: how do I run multiple https web sites on a single IP address

>It looks to me like your question is "how do I run multiple https web sites on a single IP address?".

>If that is the case, then the modern answer is "use SNI".

>http://nginx.org/en/docs/http/configuring_https_servers.html

I'm not sure what your point is here. nginx has had SNI built in for a decade, and even CentOS ships an updated nginx version.

If my nginx did not have SNI support enabled, then why does it work with www?

Can you enlighten me as to what I am doing wrong, or what the "special" configuration is to use SNI with a shared IPv4 address?

Re: how do I run multiple https web sites on a single IP address

On Tuesday, 2 January 2018 19:27:07 MSK Kurogane wrote:
> >It looks to me like your question is "how do I run multiple https web sites
> on a single IP address?".
>
> >If that is the case, then the modern answer is "use SNI".
>
> >http://nginx.org/en/docs/http/configuring_https_servers.html
>
> I'm not sure what is your point here? nginx have built SNI a decade ago even
> CentOS have nginx updated version.
>
> If my nginx not have enabled or not SNI support then why works with www?
>
> Can you enlighten me what i do wrong or what is the "special" configuration
> to use SNI with shared IPv4 address.
>
[..]

Are you sure that a tool you're using to check supports SNI?

wbr, Valentin V. Bartenev


Re: Inheritance of header-setting directives

I see, so the decision is a policy one.
Thank you for the explanation.

> You do not need to go through all the nesting levels and sum up the lists of directives in your head.

That necessity, by the way, does not go away.
With ordinary inheritance it is enough to look through 2-3 levels (not counting nested locations) and add them up; with the current selective inheritance you have to look through the same levels anyway, only to check whether directives are already specified there and, if so, copy them.

Re: Inheritance of header-setting directives

02.01.2018, 19:43, "gz" <nginx-forum@forum.nginx.org>:
> I see, so the decision is a policy one.
> Thank you for the explanation.
>
>> You do not need to go through all the nesting levels and sum up the
>> lists of directives in your head.
>
> That necessity, by the way, does not go away.
> With ordinary inheritance it is enough to look through 2-3 levels (not counting
> nested locations) and add them up; with the current selective inheritance you
> have to look through the same levels anyway, only to check whether directives
> are already specified there and, if so, copy them.

Instead of copying, use include


--
Regards,
Konstantin

Too many worker connections on udp stream

Hi guys,

I have an nginx instance proxying UDP streams to another proxy, which handles the connection to the backend.

Proxying the UDP streams to one proxy works fine, but when it proxies to the other one it fills up with worker connection errors. I turned on debugging, and what I see is that nginx isn't releasing the UDP connections.
I could use a hand, as I can't get it to work.

In the first proxy I have:

server {
listen *:8330 udp;
proxy_responses 1;
proxy_pass second-proxy:8330;
error_log /var/log/nginx/8330udp.log debug;
}



In the second, which is the main one receiving from various proxies:

server {
listen *:8330 udp;
proxy_responses 1;
proxy_pass server:8302;
error_log /var/log/nginx/udp8330.log debug;
}


This same config in another "third" proxy, for a different set of backends, works fine.



The main proxy's log for the working requests looks like this; it is ending the connections:

2018/01/02 17:08:01 [debug] 6158#6158: *13 recv: fd:70 183 of 16384
2018/01/02 17:08:01 [debug] 6158#6158: *13 write new buf t:1 f:0
00000000012A6D00, pos 00000000012A7270, size: 183 file: 0, size: 0
2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter: l:1 f:1
s:183
2018/01/02 17:08:01 [debug] 6158#6158: *13 sendmsg: 183 of 183
2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter
0000000000000000
2018/01/02 17:08:01 [info] 6158#6158: *13 udp upstream disconnected, bytes
from/to client:122/183, bytes from/to upstream:183/122
2018/01/02 17:08:01 [debug] 6158#6158: *13 finalize stream proxy: 200
2018/01/02 17:08:01 [debug] 6158#6158: *13 free rr peer 1 0
2018/01/02 17:08:01 [debug] 6158#6158: *13 close stream proxy upstream
connection: 70
2018/01/02 17:08:01 [debug] 6158#6158: *13 reusable connection: 0
2018/01/02 17:08:01 [debug] 6158#6158: *13 finalize stream session: 200
2018/01/02 17:08:01 [debug] 6158#6158: *13 stream log handler
2018/01/02 17:08:01 [debug] 6158#6158: *13 close stream connection: 41
2018/01/02 17:08:01 [debug] 6158#6158: *13 event timer del: 41:
1514913481260
2018/01/02 17:08:01 [debug] 6158#6158: *13 reusable connection: 0
2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A7270
2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A70D0
2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 0000000001199550, unused: 0
2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A6C90, unused: 0
2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A6DA0, unused: 0
2018/01/02 17:08:01 [debug] 6158#6158: *13 free: 00000000012A6EB0, unused:
24



The same proxy's log for the non-working case shows NO finalize and no closing of the connection:
2018/01/02 17:06:30 [debug] 6101#6101: *291 recvmsg: 52.200.231.253:13129
fd:51 n:313
2018/01/02 17:06:30 [info] 6101#6101: *291 udp client 52.200.231.253:13129
connected to 0.0.0.0:8330
2018/01/02 17:06:30 [debug] 6101#6101: *291 posix_memalign:
00000000025DE410:256 @16
2018/01/02 17:06:30 [debug] 6101#6101: *291 posix_memalign:
00000000025DE520:256 @16
2018/01/02 17:06:30 [debug] 6101#6101: *291 generic phase: 0
2018/01/02 17:06:30 [debug] 6101#6101: *291 generic phase: 1
2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF
9405CE34
2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF
12282534
2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 0000FFFF
00000D0A
2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF
2E952734
2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF
368A1934
2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF
AEEB2C34
2018/01/02 17:06:30 [debug] 6101#6101: *291 access: FDE7C834 FFFFFFFF
FDE7C834
2018/01/02 17:06:30 [debug] 6101#6101: *291 generic phase: 2
2018/01/02 17:06:30 [debug] 6101#6101: *291 proxy connection handler
2018/01/02 17:06:30 [debug] 6101#6101: *291 malloc: 00000000025DE630:400
2018/01/02 17:06:30 [debug] 6101#6101: *291 get rr peer, try: 1
2018/01/02 17:06:30 [debug] 6101#6101: *291 dgram socket 87
2018/01/02 17:06:30 [debug] 6101#6101: *291 epoll add connection: fd:87
ev:80002005
2018/01/02 17:06:30 [debug] 6101#6101: *291 connect to 52.44.235.174:8330,
fd:87 #292
2018/01/02 17:06:30 [debug] 6101#6101: *291 connected
2018/01/02 17:06:30 [debug] 6101#6101: *291 proxy connect: 0
2018/01/02 17:06:30 [info] 6101#6101: *291 udp proxy 10.13.11.74:48173
connected to 52.44.235.174:8330
2018/01/02 17:06:30 [debug] 6101#6101: *291 malloc: 00000000025DE7D0:16384
2018/01/02 17:06:30 [debug] 6101#6101: *291 stream proxy add preread
buffer: 313
2018/01/02 17:06:30 [debug] 6101#6101: *291 posix_memalign:
00000000025E27E0:256 @16
2018/01/02 17:06:30 [debug] 6101#6101: *291 write new buf t:1 f:0
00000000025DE2C0, pos 00000000025DE2C0, size: 313 file: 0, size: 0
2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter: l:1 f:1
s:313
2018/01/02 17:06:30 [debug] 6101#6101: *291 sendmsg: 313 of 313
2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter
0000000000000000
2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer add: 51:
600000:1514913390811
2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old:
1514913390811, new: 1514913390811
2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old:
1514913390811, new: 1514913390811
2018/01/02 17:06:31 [debug] 6101#6101: recvmsg on 0.0.0.0:8330, ready: 0
2018/01/02 17:06:31 [debug] 6101#6101: posix_memalign: 00000000025EF740:256
@16
2018/01/02 17:06:31 [debug] 6101#6101: posix_memalign: 00000000025EF850:256
@16
2018/01/02 17:06:31 [debug] 6101#6101: malloc: 00000000025EF960:313
2018/01/02 17:06:31 [debug] 6101#6101: *297 recvmsg: 52.20.21.23:13129
fd:51 n:313
2018/01/02 17:06:31 [info] 6101#6101: *297 udp client 52.20.21.23:13129
connected to 0.0.0.0:8330


Both run the same nginx version.

Thanks!

Re: how do I run multiple https web sites on a single IP address

On Tue, Jan 02, 2018 at 11:27:07AM -0500, Kurogane wrote:

Hi there,

> >http://nginx.org/en/docs/http/configuring_https_servers.html
>
> I'm not sure what is your point here? nginx have built SNI a decade ago even
> CentOS have nginx updated version.
>
> If my nginx not have enabled or not SNI support then why works with www?

Ah, sorry - I had missed that https://www.domain.com, https://domain.com,
and https://www.domain2.com all worked ok on IPv4. It is only
https://domain2.com that presents an unwanted certificate.

(And it presents the certificate for domain.com, even though
www.domain.com is configured as the default_server.)

Do you have four separate ssl certificate files, each of which is valid
for a single server name?

Or do you have one ssl certificate file which is valid for multiple
server names?

> Can you enlighten me what i do wrong or what is the "special" configuration
> to use SNI with shared IPv4 address.

One guess - is there any chance that the contents of the ssl_certificate
file that applies in the domain2.com server{} block is actually the
domain.com certificate? (Probably not, because the IPv6 connection should
be using the same ssl_certificate, and no error was reported there.)

Other than that, I don't know. Can you provide a complete config and
test commands that someone else can use to recreate the problem?

Or, to rule out any strange IPv4/IPv6 interaction -- do you see the same
behaviour when you remove all of the IPv6 config?

Good luck with it,

f
--
Francis Daly francis@daoine.org

Re: Inheritance of header-setting directives

> Instead of copying, use include

For 3-5 common headers and a couple more one level down, include is overkill.

Proxy Protocol for IMAP and POP3

Hello all,

Currently we do load balancing with HAProxy in front of the NGINX server that is included in Zimbra as the proxy service.

However, as we see in nginx's access log, every incoming source IP is logged as HAProxy's IP.

The question is how we can configure nginx to show the client's original IP instead of HAProxy's for IMAP and POP3 (mail)?

Note: there is a solution using the PROXY protocol (https://www.nginx.com/resources/admin-guide/proxy-protocol/), but it is available for http and stream only.

Re: Proxy Protocol for IMAP and POP3

Hello,

you could use "set_real_ip_from 'IP from LB';"

http://nginx.org/en/docs/http/ngx_http_realip_module.html
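A minimal sketch of that suggestion for an http listener, assuming HAProxy is configured to send the PROXY protocol header (the load-balancer address is a placeholder; as the original post notes, the mail module is a separate question):

server {
    listen 80 proxy_protocol;          # accept the PROXY protocol header from HAProxy
    set_real_ip_from 10.0.0.5;         # trust the load balancer's address (placeholder)
    real_ip_header   proxy_protocol;   # take the client address from the PROXY header
}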

--
Alexander Naumann

----- Original Message -----
From: "idfariz" <nginx-forum@forum.nginx.org>
To: nginx@nginx.org
Sent: Wednesday, 3 January 2018 05:58:35
Subject: Proxy Protocol for IMAP and POP3


Re: Regular expression length syntax not working?

For those who are looking for the answer:

> A regular expression containing the characters “{” and “}” should be quoted.

So, his location directive:
location ~ ^/event/[0-9,A-Z]{16}/info$ {
proxy_pass http://localhost:7777;
}

Should look like this in order to work:
location ~ "^/event/[0-9,A-Z]{16}/info$" {
proxy_pass http://localhost:7777;
}

Static files slooooooow

Hi,
I have recently moved my site from shared hosting to a VDS with Nginx. The performance gain on every page that does not contain heavy elements is very obvious; pages load much faster. However, there is something wrong with the static files. Starting with the ~170k font file: it takes a few seconds for the font to "apply" when I visit the site in a fresh anonymous tab. And it is far worse with bigger files: pdf files take ages to load.

This Pingdom report ( https://tools.pingdom.com/#!/dWuIkE/https://www.bykasov.com/2016/oda-sobakam-severa ) shows that there are several attempts to access the pdf file – why?

While on shared hosting the average text page loaded more slowly, these static files would take far less time to load (even on pages with several pdf's at once, like category pages).

Apparently there is something wrong with my configuration, and I would appreciate any help.

My nginx.conf:

# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
worker_connections 1024;
}

http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

include /etc/nginx/mime.types;
default_type application/octet-stream;

server_names_hash_bucket_size 64;

# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
charset utf-8;

server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
root /usr/share/nginx/html;

# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
include /etc/nginx/hhvm.conf;

location / {
}

error_page 404 /404.html;
location = /40x.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2 default_server;
# listen [::]:443 ssl http2 default_server;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate "/etc/pki/nginx/server.crt";
# ssl_certificate_key "/etc/pki/nginx/private/server.key";
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# location / {
# }
#
# error_page 404 /404.html;
# location = /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types image/svg+xml text/plain text/xml text/css text/javascript application/xml application/xhtml+xml application/rss+xml application/javascript application/x-javascript application/x-font-ttf application/vnd.ms-fontobject font/opentype font/ttf font/eot font/otf;

}


My site conf file:

server {
listen 80;
server_name bykasov.com www.bykasov.com;
return 301 https://www.bykasov.com$request_uri;
}

server {
listen 443 ssl http2;
server_name bykasov.com www.bykasov.com;

ssl_certificate /etc/letsencrypt/live/bykasov.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bykasov.com/privkey.pem;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;

ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
access_log (.....removed....);

# The rest of your server block
root (....removed....);
index index.php index.html index.htm;

directio 300k;
#output_buffers 2 1M;

#sendfile on;
#sendfile_max_chunk 256k;

location ^~ /.well-known/acme-challenge/ {
}

location / {
try_files $uri $uri/ /index.php?$args;
}

error_page 404 /404.html;
location = /50x.html {
root /(...removed....);
}

location ~* /wp-includes/.*.php$ {
deny all;
access_log off;
log_not_found off;
}

location ~* /wp-content/.*.php$ {
deny all;
access_log off;
log_not_found off;
}

location ~ ^/(wp-config\.php) {
deny all;
access_log off;
log_not_found off;
}

location ~ ^/(wp-login\.php) {
# allow (.....removed.....);
deny all;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_pass unix:/var/run/hhvm/hhvm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}


location ~ \.(js|css|png|jpg|jpeg|gif|ico|html|woff|woff2|ttf|svg|eot|otf)$ {
add_header "Access-Control-Allow-Origin" "*";
expires 1M;
access_log off;
add_header Cache-Control "public";
}

}



The directio / output_buffers / sendfile part is something that I've tried, but I could not see it making any difference.

[error] access forbidden by rule

Hello, All!

Happy New Year!

In the config:

server {
server_name debug.example.com;
allow 11.11.11.11;
deny all;
}

In access.log there is a record that client 22.22.22.22
was returned a 403 status for its request to the server debug.example.com.

But error.log is, for some reason, also full of lines like this:

... [error] ... access forbidden by rule, client: 22.22.22.22

That access is denied to everyone except 11.11.11.11 is correct,
but why does nginx consider this behaviour an error and write it to the logs?

What should I do so that error.log does not contain messages about such "errors"?

Perhaps it would make sense for nginx to lower the priority of
"access forbidden by rule" messages to debug, or at least to info?

For limit_conn there is the limit_conn_log_level directive,
for limit_req there is the limit_req_log_level directive,
but for the deny directive there is nothing of the kind.

Or perhaps it would make sense to add a deny_log_level directive
with a default value of error, for backward compatibility?

--
Best regards,
Gena


[PATCH] Chunked filter: check if ctx is null

There exists a path which brings you to the body filter in the chunked
filter module while the module ctx is NULL, which results in a segfault.

If, while piping a chunked response from upstream to downstream, both an
upstream and a downstream error happen, an internal redirect to a named
location is performed (according to the error_page directive) and the
module's contexts are cleared. If you have a lua handler in that location,
it starts sending a body, because the headers were already sent. A crash in
the chunked filter module follows, because ctx is NULL.

Maybe there is also a problem in the lua module and it should call the
header filters first. Also, maybe nginx should not perform the internal
redirect if part of the body was already sent.

But better safe than sorry :) I found that the same checks are in the body
filters of other core modules too.

---
nginx/src/http/modules/ngx_http_chunked_filter_module.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/nginx/src/http/modules/ngx_http_chunked_filter_module.c
b/nginx/src/http/modules/ngx_http_chunked_filter_module.c
index 4d6fd3eed..c3d173b20 100644
--- a/nginx/src/http/modules/ngx_http_chunked_filter_module.c
+++ b/nginx/src/http/modules/ngx_http_chunked_filter_module.c
@@ -116,6 +116,9 @@ ngx_http_chunked_body_filter(ngx_http_request_t *r,
ngx_chain_t *in)
}

ctx = ngx_http_get_module_ctx(r, ngx_http_chunked_filter_module);
+ if (ctx == NULL) {
+ return ngx_http_next_body_filter(r, in);
+ }

out = NULL;
ll = &out;

Re: how do I run multiple https web sites on a single IP address

>Are you sure that a tool you're using to check supports SNI?

>wbr, Valentin V. Bartenev

What tool are you talking about? This error shows in the browser.

>Do you have four separate ssl certificate files, each of which is valid
>for a single server name?

>Or do you have one ssl certificate file which is valid for multiple
server names?

I'm not sure what you mean, but I have two cert files. Each cert has a valid common name covering both the non-www and www names.

>One guess - is there any chance that the contents of the ssl_certificate
>file that applies in the domain2.com server{} block is actually the
>domain.com certificate? (Probably not, because the IPv6 connection should
>be using the same ssl_certificate, and no error was reported there.)

domain2.com is just a block that only does a redirect, that's all. It is what I put in the initial thread.

server {
listen 443 ssl http2;
listen [::3]:443 ssl http2;
server_name domain2.com;
return 301 https://www.$server_name$request_uri;
}

This is the full config of this block.

>Or, to rule out any strange IPv4/IPv6 interaction -- do you see the same
>behaviour when you remove all of the IPv6 config?

Same problem with or without IPv6.

I just noticed that when I disable IPv6 and only access via IPv4, something weird happens.

When I visit https://domain2.com I get the same error (the domain.com certificate), and Chrome or whatever browser asks if I want to continue; when I click to continue, it redirects me to www.domain2.com (which is what I want, and which works for domain.com, and for domain2.com over IPv6). I'm not sure why it first checks domain.com and then uses the domain2.com server block.

Re: how do I run multiple https web sites on a single IP address

On Wed, Jan 03, 2018 at 02:23:32PM -0500, Kurogane wrote:

Hi there,

> >Are you sure that a tool you're using to check supports SNI?
>
> What tool you're talking about? this error show in browser.

In this case, the tool is "the browser". Which browser, which version?

The aim here is to allow someone who is not you to see the problem that
you are seeing.

Often, it is useful to use a low-level tool which hides nothing. So,
for example, you might be able to test with

openssl s_client -servername domain.com -connect 127.0.0.1:443

to see what certificate is returned; then repeat the test with
"domain2.com" and "www.domain2.com".

(You could also probably use something like

curl -k -v --resolve domain.com:443:127.0.0.1 https://domain.com

to see the same information, along with the http request and response.)

> >Do you have four separate ssl certificate files, each of which is valid
> >for a single server name?
>
> >Or do you have one ssl certificate file which is valid for multiple
> server names?
>
> I'm not sure why you mean but i have two cert files. Each cert have a valid
> common name to use non www and www

What does that mean, specifically?

If you do something like

openssl x509 -noout -text < your-domain.com-cert

do you see

Subject: CN=www.domain.com

and

X509v3 Subject Alternative Name: DNS:domain.com

or do you see something else? Same question, for your-domain2.com-cert.



In your nginx config, what "ssl_certificate" lines do you have?

You did not show any inside the server{} blocks; perhaps you have them
inside the http{} block?

The aim here is to allow someone to create an nginx instance which
resembles yours, and which shows the problem, or which does not show
the problem.

The problem that you report should not be happening.

If someone else can re-create it, perhaps there is a bug in nginx (that
has not been reported previously) that can be fixed. If no-one else can
re-create it, perhaps there is something unusual about your configuration
and set-up.

Only you know what your configuration is.

If you provide enough information to allow someone else get a similar
configuration, then maybe they will be able to see the cause of the
problem.

Can you show a complete, but minimum, configuration that still shows
the problem?

> server {
> listen 443 ssl http2;
> listen [::3]:443 ssl http2;
> server_name domain2.com;
> return 301 https://www.$server_name$request_uri;
> }
>
> This is the full config of this block.

Which ssl_certificate file do you want nginx to use when a request for
this server_name comes in?

How does nginx know that you want nginx to use that ssl_certificate?
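As a sketch of what that usually looks like (certificate paths are placeholders), the certificate is named inside each server{} block:

server {
    listen 443 ssl;
    server_name domain2.com;
    ssl_certificate     /path/to/www.domain2.com.fullchain.pem;
    ssl_certificate_key /path/to/www.domain2.com.key;
    return 301 https://www.domain2.com$request_uri;
}

If the directive is set only at http{} level, all server blocks inherit that one certificate.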

> Same problem with or without IPv6.

Ok, that's good to know.

Your example config can now remove all of the IPv6 lines.

Perhaps it can also remove the "http2" parts, to make it even easier
for someone else to build a similar configuration.

> I just notice when i disable IPv6 and only access via IPv4 do something
> wierd.
>
> When i visit https://domain2.com i got the same error (domain.com
> certificate) and chrome or whatever browser say me if i want to continue and
> when i click to continue redirect me to www.domain2.com (is what i want to
> do and work with domain.com and domain2.com with IPv6). I'm not sure why
> first check domain.com and then use domain2.com server block.

That sounds to me like it is exactly the same as what happened when IPv6
was enabled.

Is it different?

If so, that is interesting information. Maybe there is some IPv4/IPv6
interaction involved.

Good luck with it,

f
--
Francis Daly francis@daoine.org

Re: Too many worker connections on udp stream

Hello!

On Tue, Jan 02, 2018 at 02:42:53PM -0300, Agus wrote:

> Hi guys,
>
> I have an nginx proxying udp/streams to another proxy which handles the
> connection to the backend.
>
> The same proxy proxying the udp streams to another proxy is working ok.
> But when it proxies it to the other one, it fills with the worker error. I
> turned on debugging and what i see, is that nginx aint releasing the udp
> connections...
> I could use a hand as I cant get it to work.
>
> in the first proxy i have:
>
> server {
> listen *:8330 udp;
> proxy_responses 1;
> proxy_pass second-proxy:8330;
> error_log /var/log/nginx/8330udp.log debug;
> }
>
>
>
> in the second that is the main which receives from various proxies:
>
> server {
> listen *:8330 udp;
> proxy_responses 1;
> proxy_pass server:8302;
> error_log /var/log/nginx/udp8330.log debug;
> }
>
>
> This same config in another "third" proxy for a differnet set of backends
> works ok.
>
>
>
> The main proxy for the working requests logs is like this, it is ending the
> connections:
>
> 2018/01/02 17:08:01 [debug] 6158#6158: *13 recv: fd:70 183 of 16384
> 2018/01/02 17:08:01 [debug] 6158#6158: *13 write new buf t:1 f:0
> 00000000012A6D00, pos 00000000012A7270, size: 183 file: 0, size: 0
> 2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter: l:1 f:1
> s:183
> 2018/01/02 17:08:01 [debug] 6158#6158: *13 sendmsg: 183 of 183
> 2018/01/02 17:08:01 [debug] 6158#6158: *13 stream write filter
> 0000000000000000
> 2018/01/02 17:08:01 [info] 6158#6158: *13 udp upstream disconnected, bytes
> from/to client:122/183, bytes from/to upstream:183/122

Here nginx got a UDP response and, based on "proxy_responses 1"
in your configuration, closes the session.

[...]

> The same proxy for the other non working one is: NO finalize, nor closing
> connection
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 recvmsg: 52.200.231.253:13129
> fd:51 n:313
> 2018/01/02 17:06:30 [info] 6101#6101: *291 udp client 52.200.231.253:13129
> connected to 0.0.0.0:8330

[...]

> 2018/01/02 17:06:30 [info] 6101#6101: *291 udp proxy 10.13.11.74:48173
> connected to 52.44.235.174:8330
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 malloc: 00000000025DE7D0:16384
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream proxy add preread
> buffer: 313
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 posix_memalign:
> 00000000025E27E0:256 @16
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 write new buf t:1 f:0
> 00000000025DE2C0, pos 00000000025DE2C0, size: 313 file: 0, size: 0
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter: l:1 f:1
> s:313
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 sendmsg: 313 of 313
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 stream write filter
> 0000000000000000
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer add: 51:
> 600000:1514913390811
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old:
> 1514913390811, new: 1514913390811
> 2018/01/02 17:06:30 [debug] 6101#6101: *291 event timer: 51, old:
> 1514913390811, new: 1514913390811

Here nginx got a new UDP client, sent the packet to the upstream
server and started to wait for a response. Once the response is
received, nginx will close the session much like in the above
case.

How long nginx will wait for a response can be controlled using
the "proxy_timeout" directive:

http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout

In your configuration it seems to be set to 600 seconds, which is
10 times longer than the default. If you want nginx to better
tolerate non-responding UDP backends, you may want to configure
shorter timeouts instead. Alternatively, consider configuring
more worker connections.
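A minimal sketch of those two adjustments, with illustrative values only:

events {
    worker_connections 4096;   # allow more simultaneous UDP sessions per worker
}

stream {
    server {
        listen *:8330 udp;
        proxy_responses 1;
        proxy_timeout 30s;     # time out unanswered UDP sessions much sooner
        proxy_pass second-proxy:8330;
    }
}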

--
Maxim Dounin
http://mdounin.ru/