Channel: Nginx Forum

Re: Nginx 1.6.2 - Redirect users based on 4-digit number provided

On Monday, October 06, 2014 12:24:35 PM mottycruz wrote:
> Thanks for your help Styopa,
>
> I was able to find the modules installed on our current proxy with the following
> command, because we have a custom module.
>
> :~# /usr/local/nginx/sbin/nginx -V
> nginx version: nginx/0.7.67
> built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
> TLS SNI support enabled
> configure arguments: --prefix=/usr/local/nginx --with-http_ssl_module
> --add-module=/home/ngx_http_cust_app_version_routing
>
> I tried to redirect based on the URL
>
> for instance I tried:
> Redirect ^/app2$ http://app2.server2.com;
>
> but it does not seem to be working, and I can't find much in the logs. Do you have
> any suggestions?
>
> Thanks,
> -Motty
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,253708,253792#msg-253792
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Hello Motty,

I'm a little bit confused by your question. If your goal is to serve different URLs from different backends, the config will look like this:

location = /app2 {
    # this will strip "/app2" from the request to the backend,
    # e.g. a user request of: /app2/index.do?foo=bar
    # will be routed to the app2 backend as: /index.do?foo=bar
    proxy_pass http://app2.server2.com/;
}

If your goal is to return an HTTP 301 permanent redirect, it will be:

location = /app2 {
    return 301 $scheme://app2.server2.com/;
}

Please be sure to read the following info (it's pretty short actually):
http://nginx.org/r/location
http://nginx.org/r/proxy_pass

Unfortunately, I'm not familiar with 3rd-party modules, so I cannot advise on them.
--
Sincerely yours,
Styopa Semenukha.



Nginx with RTMP Server

Hi

I'm a newbie here. I installed nginx on my Windows (64-bit) machine. How do I set up my own private RTMP server?
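For reference, a minimal RTMP setup typically looks like the sketch below. This assumes the third-party nginx-rtmp-module is compiled into your build (stock nginx has no built-in RTMP support), so it is an illustration rather than something guaranteed to work with a plain Windows binary:

```nginx
rtmp {
    server {
        # standard RTMP port
        listen 1935;
        chunk_size 4096;

        # publish to rtmp://<your-host>/live/<stream-key>
        # and play back the same URL
        application live {
            live on;
            # keep nothing on disk
            record off;
        }
    }
}
```

Note that the rtmp block sits at the top level of nginx.conf, alongside the http block, not inside it.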

Re: [calling all patch XPerts !] [PATCH] RSA+DSA+ECC bundles

Updating the patch for the latest nginx isn't a problem. First we need to hear from Maxim what the problem with the old patch was (it wasn't applied back then, so why would a new one be?) in order to fix it.

On Mon, Oct 6, 2014 at 10:25 PM, shmick@riseup.net <shmick@riseup.net>
wrote:

> calling all patch XPerts !
> calling all patch XPerts !
> is anybody out there able to update patch support for the latest nginx ?
>
> shmick@riseup.net wrote:
> > unfortunately this was as far as i got with version git
> >
> > $ patch -p0 < nginx_multiple_certs_and_stapling_V2.patch
> > patching file a/src/event/ngx_event_openssl.c
> > Hunk #1 succeeded at 96 with fuzz 2 (offset 12 lines).
> > Hunk #2 succeeded at 162 (offset 14 lines).
> > Hunk #3 FAILED at 191.
> > Hunk #4 FAILED at 236.
> > 2 out of 4 hunks FAILED -- saving rejects to file
> > a/src/event/ngx_event_openssl.c.rej
> > patching file a/src/event/ngx_event_openssl.h
> > Hunk #1 FAILED at 104.
> > Hunk #2 succeeded at 203 (offset 22 lines).
> > 1 out of 2 hunks FAILED -- saving rejects to file
> > a/src/event/ngx_event_openssl.h.rej
> > patching file a/src/event/ngx_event_openssl_stapling.c
> > Hunk #1 FAILED at 11.
> > Hunk #12 succeeded at 1793 (offset 13 lines).
> > 1 out of 12 hunks FAILED -- saving rejects to file
> > a/src/event/ngx_event_openssl_stapling.c.rej
> > patching file a/src/http/modules/ngx_http_ssl_module.c
> > Hunk #1 FAILED at 66.
> > Hunk #2 succeeded at 209 (offset 31 lines).
> > Hunk #3 FAILED at 404.
> > Hunk #4 FAILED at 463.
> > Hunk #5 FAILED at 550.
> > Hunk #6 succeeded at 702 (offset 110 lines).
> > Hunk #7 succeeded at 762 (offset 118 lines).
> > 4 out of 7 hunks FAILED -- saving rejects to file
> > a/src/http/modules/ngx_http_ssl_module.c.rej
> > patching file a/src/http/modules/ngx_http_ssl_module.h
> > Hunk #1 FAILED at 25.
> > 1 out of 1 hunk FAILED -- saving rejects to file
> > a/src/http/modules/ngx_http_ssl_module.h.rej
> > patching file a/src/mail/ngx_mail_ssl_module.c
> > Hunk #1 FAILED at 57.
> > Hunk #2 FAILED at 173.
> > Hunk #3 FAILED at 215.
> > Hunk #4 FAILED at 243.
> > 4 out of 4 hunks FAILED -- saving rejects to file
> > a/src/mail/ngx_mail_ssl_module.c.rej
> > patching file a/src/mail/ngx_mail_ssl_module.h
> > Hunk #1 FAILED at 27.
> > 1 out of 1 hunk FAILED -- saving rejects to file
> > a/src/mail/ngx_mail_ssl_module.h.rej
> >
> >
> > and this was as far as i got with version 1.6.2 just renaming dirs
> >
> > beyond that its all greek to me ...
> >
> >
> > $ patch -p0 < nginx_multiple_certs_and_stapling_V2.patch
> > patching file nginx-1.6.2/src/event/ngx_event_openssl.c
> > Hunk #1 succeeded at 86 with fuzz 2 (offset 2 lines).
> > Hunk #2 succeeded at 150 (offset 2 lines).
> > Hunk #3 FAILED at 191.
> > Hunk #4 succeeded at 240 (offset 4 lines).
> > 1 out of 4 hunks FAILED -- saving rejects to file
> > nginx-1.6.2/src/event/ngx_event_openssl.c.rej
> > patching file nginx-1.6.2/src/event/ngx_event_openssl.h
> > Hunk #1 succeeded at 108 (offset 4 lines).
> > Hunk #2 succeeded at 191 (offset 6 lines).
> > patching file nginx-1.6.2/src/event/ngx_event_openssl_stapling.c
> > Hunk #1 FAILED at 11.
> > Hunk #12 succeeded at 1791 (offset 11 lines).
> > 1 out of 12 hunks FAILED -- saving rejects to file
> > nginx-1.6.2/src/event/ngx_event_openssl_stapling.c.rej
> > patching file nginx-1.6.2/src/http/modules/ngx_http_ssl_module.c
> > Hunk #1 succeeded at 74 (offset 8 lines).
> > Hunk #2 succeeded at 200 (offset 22 lines).
> > Hunk #3 FAILED at 404.
> > Hunk #4 FAILED at 463.
> > Hunk #5 succeeded at 640 (offset 90 lines).
> > Hunk #6 succeeded at 677 (offset 92 lines).
> > Hunk #7 succeeded at 737 (offset 100 lines).
> > 2 out of 7 hunks FAILED -- saving rejects to file
> > nginx-1.6.2/src/http/modules/ngx_http_ssl_module.c.rej
> > patching file nginx-1.6.2/src/http/modules/ngx_http_ssl_module.h
> > Hunk #1 FAILED at 25.
> > 1 out of 1 hunk FAILED -- saving rejects to file
> > nginx-1.6.2/src/http/modules/ngx_http_ssl_module.h.rej
> > patching file nginx-1.6.2/src/mail/ngx_mail_ssl_module.c
> > Hunk #2 FAILED at 173.
> > Hunk #3 succeeded at 223 (offset 8 lines).
> > Hunk #4 succeeded at 253 (offset 8 lines).
> > 1 out of 4 hunks FAILED -- saving rejects to file
> > nginx-1.6.2/src/mail/ngx_mail_ssl_module.c.rej
> > patching file nginx-1.6.2/src/mail/ngx_mail_ssl_module.h
> > Hunk #1 succeeded at 27 with fuzz 1.
> >
> >
>

PHP fastcgi snippet for Debian jessie

Hello all,

We are thinking about shipping a php-fastcgi snippet with the upcoming Debian jessie
stable release. I wanted to bring that to your attention, to avoid shipping a
broken config file that would be difficult to revert.

the snippet:
http://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/snippets/fastcgi-php.conf?h=php-fastcgi&id=87f23062

the default site config that references it:
http://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/sites-available/default?h=php-fastcgi&id=87f23062

Any comments are welcome!

Thank you,
chris


Re: PHP fastcgi snippet for Debian jessie

On Tuesday 07 October 2014 13:22:56 Christos Trochalakis wrote:
> Hello all,
>
> We are thinking about shipping a php-fastcgi snippet with the upcoming jessie
> debian stable release. I wanted to bring that to your attention to avoid
> shipping a broken config file that will be difficult to revert.
>
> the snippet:
> http://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/snippets/fastcgi-php.conf?h=php-fastcgi&id=87f23062
>
> the default site config that references it:
> http://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/sites-available/default?h=php-fastcgi&id=87f23062
>
> Any comments are welcome!

It looks functional (though the PATH_INFO config looks like a workaround for a
bug, couldn't that bug/feature be fixed instead?).

Why do you have a separate fastcgi.conf while fastcgi_params already exists?
Actually, it seems that Igor Sysoev added this file in December 2009. Igor, why
is the file duplicated if the only difference is in SCRIPT_FILENAME?

Christos, the configuration example allows for execution of xyz.php files in the
document root. Another case is the use of frameworks which have a single
controller, such as FuelPHP and Laravel. In such cases, this nginx configuration
is sufficient (using the old config):

location ~ ^/($|api/|user/|...) {
    try_files $uri @php_router;
}
location @php_router {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /path/to/php/controller.php;
    fastcgi_pass unix:/var/run/php5-fpm.app.sock;
}

If you do not set fastcgi_param in the location block, then you can also set it
in the server block (or in the http block, if no location sets fastcgi_param
either). (fastcgi_param is *not* inherited additively: new directives in a lower
block override the whole inherited set.)
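To make that inheritance rule concrete, a sketch (the SERVER_ADMIN parameter and paths are only examples, not part of the snippet under discussion):

```nginx
http {
    # fastcgi_param values set here are inherited by a location
    # only if that location defines no fastcgi_param of its own
    fastcgi_param SERVER_ADMIN admin@example.com;

    server {
        listen 80;

        location @php_router {
            # defining even one fastcgi_param here replaces the entire
            # inherited set, so SERVER_ADMIN from above is lost unless
            # fastcgi_params (or the individual params) are repeated here
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /path/to/php/controller.php;
            fastcgi_pass unix:/var/run/php5-fpm.app.sock;
        }
    }
}
```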
--
Kind regards,
Peter
https://lekensteyn.nl


Re: [calling all patch XPerts !] [PATCH] RSA+DSA+ECC bundles

Hello!

On Tue, Oct 07, 2014 at 11:31:56AM +0400, kyprizel wrote:

> Updating patch for the last nginx isn't a problem - we need to hear from
> Maxim what was the problem with old patch (it wasn't applied that time -
> why should by applied a new one?) to fix it.

http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004475.html

--
Maxim Dounin
http://nginx.org/


Re: nginx proxy being slow

Hello!

On Mon, Oct 06, 2014 at 09:06:19PM -0400, imran_k wrote:

> We are trying to act as a proxy for a site within the same DMZ. Things seem
> to work fine, except when there is quite a heavy load. There are many CSS
> assets that just hang upon retrieval. Sometimes the full page comes through;
> sometimes just spins forever.
>
> Server: nginx 1.6.1 running on Linux.
> Memory: 18Gb
>
> proxy_buffering on;
> proxy_buffers 256 8k;
> proxy_busy_buffers_size 64;

Just a side note: using 64 bytes for proxy_busy_buffers_size looks
like a bad idea. Additionally, it will be rejected by nginx as
long as you use 8k proxy buffers.

> proxy_temp_file_write_size 64;

Same here. 64 bytes is way too low.
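For comparison, values in this ballpark would at least be internally consistent with 8k buffers. This is a sketch, not a tuned recommendation; nginx requires proxy_busy_buffers_size to be at least one buffer and less than the total buffer space:

```nginx
proxy_buffering             on;
proxy_buffers               256 8k;
# at least one 8k buffer, and well below the 2 MB total buffer space
proxy_busy_buffers_size     16k;
proxy_temp_file_write_size  16k;
```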

> Under heavy loads, about 1500 requests a second, a page is not completely
> sent back to the browser as some of the CSS resources taking anywhere from 2
> - 10 seconds to return. It will just spin until eventually it gets sent
> back. CPU and memory usage is not dramatically high. Smaller sites return
> without any issue at all.
>
> Do I have the buffering wrong or is there something else at play?

First of all, you may want to find out what causes the problems you
observe. From your description I suspect you are actually
debugging listen queue overflows. When using Linux with
net.ipv4.tcp_abort_on_overflow set to 0 (which is the default), this
is not trivial to debug unless you are looking closely into
tcpdump and/or network stats (try looking at the queue sizes in "ss
-nlt").

--
Maxim Dounin
http://nginx.org/


Re: PHP fastcgi snippet for Debian jessie

On Tue, Oct 07, 2014 at 12:00:00PM +0000, nginx-devel-request@nginx.org wrote:

Hi there,

> We are thinking about shipping a php-fastcgi snippet with the upcoming jessie
> debian stable release. I wanted to bring that to your attention to avoid
> shipping a broken config file that will be difficult to revert.

I think that any snippet will (potentially) be "right" for certain
circumstances, and will (almost certainly) be "wrong" for other
circumstances.

The hard part is (usually) defining what the one use case that you are
designing for is.

> the snippet:
> http://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/snippets/fastcgi-php.conf?h=php-fastcgi&id=87f23062

fastcgi_split_path_info ^(.+\.php)(/.+)$;

is probably pointless if you have already matched "location ~ \.php$",
as your example config indicates.

try_files $fastcgi_script_name =404;

seems unnecessary to me, unless your fastcgi server is configured to keep
guessing what you might have meant, instead of just doing what you said
and failing when you said something wrong. (I do not know how the
jessie-default php fastcgi server is configured by default.)

And it is almost certainly broken if your fastcgi server is on a separate
machine.

With either of those lines removed, then the

set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;

dance is unnecessary.

I think that

fastcgi_index index.php;

is probably also pointless within "location ~ \.php$".

include fastcgi.conf;

is good; but the user should be aware of how their fastcgi server handles
repeated fastcgi_param values, if they are going to add their own that
might clash with anything in that file -- if the fastcgi server only
processes the first value, then they must add their own before this
include; if it processes the last value, then they must add their own
after this include.
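Putting the points above together, a stripped-down variant of the snippet could be quite small. This is a sketch only; the php-fpm UNIX socket path is illustrative, not the Debian default:

```nginx
location ~ \.php$ {
    include fastcgi.conf;
    # site-specific fastcgi_param overrides go before or after the
    # include, depending on which duplicate value the fastcgi server uses
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```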

> the default site config that references it:
> http://anonscm.debian.org/cgit/collab-maint/nginx.git/tree/debian/conf/sites-available/default?h=php-fastcgi&id=87f23062

listen 80 default_server;
listen [::]:80 default_server ipv6only=on;

Using both lines may or may not be necessary depending on the
configuration of the kernel involved. And they are possibly the
no-configuration default.

The "root" and "index" lines are probably also the compile-time defaults
-- useful as an example "change this if you need to" indication, if the
user does not want to refer to the documentation directly.

The "server_name" directive does not do what the preceding comment says
it does. (Unless I'm missing something.)

Everything within the "location /" block is the default action anyway
(except that this version, I think, hides things from log files). Probably
simpler to erase them. (And maybe to remove the block entirely, but
there are likely reasons not to do that.)

The rest is commented fragments. Does /usr/share/nginx/html/50x.html
exist on the default jessie? It is /usr/share/nginx/www/50x.html on a
wheezy machine I have here.

> Any comments are welcome!

I hope this helps.

You can go a long way with a mostly-empty nginx.conf.

Note, though, that while I read this list, I'm not a nginx developer.

Cheers,

f
--
Francis Daly francis@daoine.org


Re: nginx proxy being slow

Thank you very much for pointing this out. What are some good starting points for these figures? Some posts I read even say to disable buffering...

The value for tcp_abort_on_overflow is set to 0 (in /proc/).

Thank you

Re: [calling all patch XPerts !] [PATCH] RSA+DSA+ECC bundles

Maxim Dounin wrote:
> Hello!
>
> On Tue, Oct 07, 2014 at 11:31:56AM +0400, kyprizel wrote:
>
>> Updating patch for the last nginx isn't a problem - we need to hear from
>> Maxim what was the problem with old patch (it wasn't applied that time -
>> why should by applied a new one?) to fix it.
>
> http://mailman.nginx.org/pipermail/nginx-devel/2013-November/004475.html

OK, so what is the plan for progression and inclusion?
Do you believe there is enough interest, and is the idea supported?
Do you think Rob's patch isn't feasible?
Is there anybody who can take over, and has anyone done so?


Caching based on Content Size

Hi List,

Is there any way to restrict object caching based on object size?

For example:

> 3 MB: pass through
< 3 MB: cache in nginx

Any help/hint will be really appreciated.

Thanks,
tRM

Re: Caching based on Content Size

Hi,
what about $http_content_length? You could map this variable and feed the result to the *_cache_bypass and *_no_cache directives.
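A sketch of that idea for the 3 MB threshold from the question. Since the response size is only known once the upstream has answered, $upstream_http_content_length with proxy_no_cache is the closer fit ($http_content_length is the *request* header). The digit-count regexes only approximate the threshold, and "my_cache" / "backend" are placeholders for an existing proxy_cache_path zone and upstream:

```nginx
# skip saving to the cache when the upstream response declares a
# large Content-Length; the patterns approximate "bigger than ~3 MB"
map $upstream_http_content_length $skip_cache {
    default             0;
    "~^[0-9]{8,}$"      1;   # 8+ digits: 10,000,000 bytes or more
    "~^[4-9][0-9]{6}$"  1;   # 4,000,000 .. 9,999,999 bytes
}

server {
    listen 80;

    location / {
        proxy_cache my_cache;
        # proxy_no_cache is evaluated when the response is about to be
        # saved, so upstream response variables are available here
        proxy_no_cache $skip_cache;
        proxy_pass http://backend;
    }
}
```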


Cheers,
w
--- Original message ---
From: "trm asn" <trm.nagios@gmail.com>
Date: 7 October 2014, 17:35:41


> Hi List : 
>
> Is there any way to restrict object caching bases on their sizes .
>
>
> For example: 
>
>
> > 3mb pass through
> <3 mb cache in Nginx
>
>
> Any help/hint will be really appreciable .
>
>
> Thanks,
> tRM 
>
>
>
>

500 internal server error nginx

Our website is having some major issues and we need help urgently. I appreciate any one's insight into this!

We use the Headway theme with WordPress. Our site is very image-heavy, and while I don't remember setting up nginx while we were building it, it seems we are using it!

Now, when I try to log into the back end of WordPress, it gives me this error:

500 Internal Server Error
----
NGINX

And now the whole website is down!

So I am assuming it has something to do with nginx? I have no idea how to fix this! We have rebooted our server, we have restored a backup, etc.

HELP! DESPERATE!

Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE

Hi,

There is an issue in nginx when it returns NGX_HTTP_REQUEST_HEADER_TOO_LARGE
in ngx_http_process_request_headers:
when the large header buffer is full and the size of the last header is 1 or 2
bytes more than NGX_MAX_ERROR_STR - 300, nginx will write 1 or 2 '.' symbols
past the end of the large header buffer, causing unpredictable behavior.

To reproduce it, send a request with a total header size of 'large client buffers size' and
a last header of 1749 bytes. Valgrind will catch this issue:

==10776== Invalid write of size 1
==10776== at 0x426E84: ngx_http_process_request_headers (ngx_http_request.c:1230)
....

The following patch fixes this issue:

# HG changeset patch
# User Daniil Bondarev <bondarev@amazon.com>
# Date 1412401143 25200
# Node ID 2bbb5284ca7ff658ad50254fe0c5bec14247ba75
# Parent 6bbad2e732458bf53771e80c63a654b3d7f61963
Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE

When large header buffer is full and the last header size is 1 or 2 bytes more
than NGX_MAX_ERROR_STR - 300, nginx will write 1 or 2 '.' symbols out of large
header buffer, causing unpredictable behavior.

The fix is, instead of modifying a buffer, just cut the header and print '...'
in log line if header is too large.

diff -r 6bbad2e73245 -r 2bbb5284ca7f src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c Wed Aug 27 20:51:01 2014 +0400
+++ b/src/http/ngx_http_request.c Fri Oct 03 22:39:03 2014 -0700
@@ -1171,6 +1171,7 @@
{
u_char *p;
size_t len;
+ size_t print_len;
ssize_t n;
ngx_int_t rc, rv;
ngx_table_elt_t *h;
@@ -1225,14 +1226,13 @@

len = r->header_in->end - p;

- if (len > NGX_MAX_ERROR_STR - 300) {
- len = NGX_MAX_ERROR_STR - 300;
- p[len++] = '.'; p[len++] = '.'; p[len++] = '.';
- }
+ /* Since log line size is limited to NGX_MAX_ERROR_STR,
+ * nginx has to limit header size it will print into log. */
+ print_len = ngx_min(len, NGX_MAX_ERROR_STR - 300);

ngx_log_error(NGX_LOG_INFO, c->log, 0,
- "client sent too long header line: \"%*s\"",
- len, r->header_name_start);
+ "client sent too long header line: \"%*s%s\"",
+ print_len, p, len != print_len ? "..." : "");


Re: Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE

Hello!

On Tue, Oct 07, 2014 at 05:23:57PM +0000, Bondarev, Daniil wrote:

> Hi,
>
> There is an issue in nginx, when it's returning NGX_HTTP_REQUEST_HEADER_TOO_LARGE
> in ngx_http_process_request_headers:
> When large header buffer is full and the last header size is 1 or 2 bytes more
> than NGX_MAX_ERROR_STR - 300, nginx will write 1 or 2 '.' symbols out of large
> header buffer, causing unpredictable behavior.
>
> To reproduce it you can send request with total headers size of 'large client buffers size' and
> the last header size of 1749. Valgrind will catch this issue:
>
> ==10776== Invalid write of size 1
> ==10776== at 0x426E84: ngx_http_process_request_headers (ngx_http_request.c:1230)
> ...
>
> The following patch fixes this issue:
>
> # HG changeset patch
> # User Daniil Bondarev <bondarev@amazon.com>
> # Date 1412401143 25200
> # Node ID 2bbb5284ca7ff658ad50254fe0c5bec14247ba75
> # Parent 6bbad2e732458bf53771e80c63a654b3d7f61963
> Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE
>
> When large header buffer is full and the last header size is 1 or 2 bytes more
> than NGX_MAX_ERROR_STR - 300, nginx will write 1 or 2 '.' symbols out of large
> header buffer, causing unpredictable behavior.
>
> The fix is, instead of modifying a buffer, just cut the header and print '...'
> in log line if header is too large.
>
> diff -r 6bbad2e73245 -r 2bbb5284ca7f src/http/ngx_http_request.c
> --- a/src/http/ngx_http_request.c Wed Aug 27 20:51:01 2014 +0400
> +++ b/src/http/ngx_http_request.c Fri Oct 03 22:39:03 2014 -0700
> @@ -1171,6 +1171,7 @@
> {
> u_char *p;
> size_t len;
> + size_t print_len;
> ssize_t n;
> ngx_int_t rc, rv;
> ngx_table_elt_t *h;
> @@ -1225,14 +1226,13 @@
>
> len = r->header_in->end - p;
>
> - if (len > NGX_MAX_ERROR_STR - 300) {
> - len = NGX_MAX_ERROR_STR - 300;
> - p[len++] = '.'; p[len++] = '.'; p[len++] = '.';
> - }
> + /* Since log line size is limited to NGX_MAX_ERROR_STR,
> + * nginx has to limit header size it will print into log. */
> + print_len = ngx_min(len, NGX_MAX_ERROR_STR - 300);
>
> ngx_log_error(NGX_LOG_INFO, c->log, 0,
> - "client sent too long header line: \"%*s\"",
> - len, r->header_name_start);
> + "client sent too long header line: \"%*s%s\"",
> + print_len, p, len != print_len ? "..." : "");

Thanks for the report.
What do you think about something as simple as:

--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1226,7 +1226,7 @@ ngx_http_process_request_headers(ngx_eve
len = r->header_in->end - p;

if (len > NGX_MAX_ERROR_STR - 300) {
- len = NGX_MAX_ERROR_STR - 300;
+ len = NGX_MAX_ERROR_STR - 300 - 3;
p[len++] = '.'; p[len++] = '.'; p[len++] = '.';
}


?

--
Maxim Dounin
http://nginx.org/


RE: Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE

Yep, I've thought about this, but I prefer not to modify the buffer at all, since that feels more error-prone.
For example, somebody might decide to change the number of dots, or reuse the last header from this buffer, etc.

Do you feel strongly against printing "..." only in the log line?
________________________________________
From: nginx-devel-bounces@nginx.org [nginx-devel-bounces@nginx.org] on behalf of Maxim Dounin [mdounin@mdounin.ru]
Sent: Tuesday, October 07, 2014 10:54 AM
To: nginx-devel@nginx.org
Subject: Re: Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE

[...]


Re: Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE

Hello!

On Tue, Oct 07, 2014 at 06:10:04PM +0000, Bondarev, Daniil wrote:

> Yep, I've thought about this, but prefer not to modify buffer at all, since it feels more error-prone.
> F.E: SB might decide to change number of dots, or reuse last header from this buffer, etc.

Changing the number of dots is highly unlikely, and it would be hard
to do incorrectly, as the "3" in the patch directly corresponds
to that number. Reuse of the header is highly unlikely too, as this
is a fatal error and the header is known to be incomplete.

> Do you feel strongly against printing "..." just at log line?

The resulting code is way longer than it should be, so I would
prefer the simpler variant.

On the other hand, looking into this more closely, I tend to think
that the ellipsis should always be added, to make it clear that the
logged header is incomplete.

Here is a patch:

--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1227,12 +1227,11 @@ ngx_http_process_request_headers(ngx_eve

if (len > NGX_MAX_ERROR_STR - 300) {
len = NGX_MAX_ERROR_STR - 300;
- p[len++] = '.'; p[len++] = '.'; p[len++] = '.';
}

ngx_log_error(NGX_LOG_INFO, c->log, 0,
- "client sent too long header line: \"%*s\"",
- len, r->header_name_start);
+ "client sent too long header line: \"%*s...\"",
+ len, r->header_name_start);

ngx_http_finalize_request(r,
NGX_HTTP_REQUEST_HEADER_TOO_LARGE);

--
Maxim Dounin
http://nginx.org/


RE: Prevent buffer overrun on NGX_HTTP_REQUEST_HEADER_TOO_LARGE

Hey Maxim,

> On the other hand, looking into this more closely, I tend to think
> that ellipsis should be always added to make it clear that the
> header logged is incomplete.

Agree, good point!

The patch looks good to me. One note: you can reduce the number of lines with ngx_min,
if you wish:

diff -r 6bbad2e73245 src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c Wed Aug 27 20:51:01 2014 +0400
+++ b/src/http/ngx_http_request.c Tue Oct 07 12:06:36 2014 -0700
@@ -1223,15 +1223,11 @@
return;
}

- len = r->header_in->end - p;
-
- if (len > NGX_MAX_ERROR_STR - 300) {
- len = NGX_MAX_ERROR_STR - 300;
- p[len++] = '.'; p[len++] = '.'; p[len++] = '.';
- }
+ len = ngx_min(r->header_in->end - p,
+ NGX_MAX_ERROR_STR - 300);

ngx_log_error(NGX_LOG_INFO, c->log, 0,
- "client sent too long header line: \"%*s\"",
+ "client sent too long header line: \"%*s...\"",
len, r->header_name_start);

ngx_http_finalize_request(r,


Thanks!

Daniil

RE: [Patch] SO_REUSEPORT support from master process

Dear All,

It has been quiet for a while on this patch. I am checking to see if there are any questions, feedback, or concerns we need to address.

Please let me know. Thanks very much for your help!

Yingqi

-----Original Message-----
From: nginx-devel-bounces@nginx.org [mailto:nginx-devel-bounces@nginx.org] On Behalf Of Lu, Yingqi
Sent: Wednesday, August 27, 2014 10:33 AM
To: nginx-devel@nginx.org
Subject: RE: [Patch] SO_REUSEPORT support from master process

Dear All,

I am resending this patch in plain text instead of HTML format. I will also post the patch at the end of this email; hopefully this will be easier for all of you to review. Please let me know if you have trouble viewing the message or the patch itself. This is our first time submitting a patch here. Your feedback and suggestions are highly appreciated.

The "SO_REUSEPORT support for listen sockets" patches submitted by Sepherosa Ziehau were posted and discussed in [1] and [2]. The last update on those threads was 09/05/2013, and the patch is not included in the current nginx code. Reading the discussion, my understanding is that his patch makes a dedicated listen socket for each child process. In order to make sure that at any given time there is always a listen socket available, the patch makes the first worker process different/special from the rest.

Here, I am proposing a simpler way to enable SO_REUSEPORT support: create and configure a certain number of listen sockets in the master process with SO_REUSEPORT enabled, which all the child processes can then inherit. In this case, we do not need to worry about ensuring one available listen socket at run time. The number of listen sockets to create is calculated from the number of active CPU threads. On a big system with more CPU threads (where we have the scalability issue), more duplicated listen sockets are created to improve throughput and scalability. On a system with only 8 or fewer CPU threads, there will be only 1 listen socket. This makes sure duplicated listen sockets are only created when necessary. In case SO_REUSEPORT is not supported by the OS, it falls back to the default/original behavior (this was tested on Linux kernel 3.8.8, where SO_REUSEPORT is not supported).

This prototype patch has been tested on a modern Intel dual-socket platform with a three-tier open source web server workload (PHP+Nginx/memcached/MySQL). The web server has 2 IP network interfaces configured for testing. The Linux kernel used for testing is 3.13.9. Data show:

Case 1: with a single listen statement (listen 80) specified in the configuration file, there is a 46.3% throughput increase.
Case 2: with dual listen statements (for example, listen 192.168.1.1:80 and listen 192.168.1.2:80), there is a 10% throughput increase.

Both test cases keep everything else the same except the patch itself.

Case 1 shows the bigger performance gain because it has only 1 listen socket by default, while Case 2 already has 2.

Please review it and let me know your questions and comments. Thanks very much for your time reviewing the patch.

Thanks,
Yingqi Lu

[1] http://forum.nginx.org/read.php?29,241283,241283
[2] http://forum.nginx.org/read.php?29,241470,241470

# HG changeset patch
# User Yingqi Lu <Yingqi.Lu@intel.com>
# Date 1408145210 25200
# Fri Aug 15 16:26:50 2014 -0700
# Node ID d9c7259d275dbcae8a0d001ee9703b13312b3263
# Parent 6edcb183e62d610808addebbd18249abb7224a0a
These are the patch files for SO_REUSEPORT support.

diff -r 6edcb183e62d -r d9c7259d275d ngx_connection.c
--- a/ngx_connection.c Fri Aug 15 16:25:32 2014 -0700
+++ b/ngx_connection.c Fri Aug 15 16:26:50 2014 -0700
@@ -304,7 +304,7 @@
ngx_int_t
ngx_open_listening_sockets(ngx_cycle_t *cycle) {
- int reuseaddr;
+ int reuseaddr, reuseport;
ngx_uint_t i, tries, failed;
ngx_err_t err;
ngx_log_t *log;
@@ -312,6 +312,7 @@
ngx_listening_t *ls;

reuseaddr = 1;
+ reuseport = 1;
#if (NGX_SUPPRESS_WARN)
failed = 0;
#endif
@@ -370,6 +371,24 @@
return NGX_ERROR;
}

+ if (so_reuseport_enabled)
+ {
+ if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT,
+ (const void *) &reuseport, sizeof(int))
+ == -1) {
+ ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
+ "setsockopt(SO_REUSEPORT) %V failed",
+ &ls[i].addr_text);
+ if (ngx_close_socket(s) == -1) {
+ ngx_log_error(NGX_LOG_EMERG, log, ngx_socket_errno,
+ ngx_close_socket_n " %V failed",
+ &ls[i].addr_text);
+ }
+
+ return NGX_ERROR;
+ }
+ }
+
#if (NGX_HAVE_INET6 && defined IPV6_V6ONLY)

if (ls[i].sockaddr->sa_family == AF_INET6) {

diff -r 6edcb183e62d -r d9c7259d275d ngx_cycle.c
--- a/ngx_cycle.c Fri Aug 15 16:25:32 2014 -0700
+++ b/ngx_cycle.c Fri Aug 15 16:26:50 2014 -0700
@@ -25,7 +25,7 @@

ngx_uint_t ngx_test_config;
ngx_uint_t ngx_quiet_mode;
-
+ngx_uint_t so_reuseport_enabled;
#if (NGX_THREADS)
ngx_tls_key_t ngx_core_tls_key;
#endif
@@ -55,6 +55,34 @@
ngx_core_module_t *module;
char hostname[NGX_MAXHOSTNAMELEN];

+ ngx_uint_t j, num_cores, num_dup_sockets, orig_nelts;
+ ngx_socket_t temp_s;
+ int one = 1;
+ so_reuseport_enabled = 0;
+ temp_s = ngx_socket(AF_INET, SOCK_STREAM, 0);
+#ifndef SO_REUSEPORT
+#define SO_REUSEPORT 15
+#endif
+ if (setsockopt(temp_s, SOL_SOCKET, SO_REUSEPORT,
+ (const void *) &one, sizeof(int)) == 0) {
+ so_reuseport_enabled = 1;
+ }
+ ngx_close_socket(temp_s);
+
+ if (so_reuseport_enabled) {
+#ifdef _SC_NPROCESSORS_ONLN
+ num_cores = sysconf(_SC_NPROCESSORS_ONLN);
+#else
+ num_cores = 1;
+#endif
+ if (num_cores > 8) {
+ num_dup_sockets = num_cores/8;
+ } else {
+ num_dup_sockets = 1;
+ }
+ } else {
+ num_dup_sockets = 1;
+ }
ngx_timezone_update();

/* force localtime update with a new timezone */

@@ -114,7 +142,7 @@
}


- n = old_cycle->paths.nelts ? old_cycle->paths.nelts : 10;
+ n = old_cycle->paths.nelts ? old_cycle->paths.nelts : 10 * num_dup_sockets;

cycle->paths.elts = ngx_pcalloc(pool, n * sizeof(ngx_path_t *));
if (cycle->paths.elts == NULL) {
@@ -164,7 +192,7 @@
return NULL;
}

- n = old_cycle->listening.nelts ? old_cycle->listening.nelts : 10;
+ n = old_cycle->listening.nelts ? old_cycle->listening.nelts : 10 * num_dup_sockets;

cycle->listening.elts = ngx_pcalloc(pool, n * sizeof(ngx_listening_t));
if (cycle->listening.elts == NULL) {

@@ -231,7 +259,7 @@

ngx_memzero(&conf, sizeof(ngx_conf_t));
/* STUB: init array ? */
- conf.args = ngx_array_create(pool, 10, sizeof(ngx_str_t));
+ conf.args = ngx_array_create(pool, (10 * num_dup_sockets), sizeof(ngx_str_t));
if (conf.args == NULL) {
ngx_destroy_pool(pool);
return NULL;
@@ -575,7 +603,15 @@
#endif
}
}
+ orig_nelts = cycle->listening.nelts;
+ cycle->listening.nelts = cycle->listening.nelts * num_dup_sockets;

+ ls = cycle->listening.elts;
+ for (i = 0; i < num_dup_sockets; i++) {
+ for(j = 0; j < orig_nelts; j++) {
+ ls[j + i * orig_nelts] = ls[j];
+ }
+ }
if (ngx_open_listening_sockets(cycle) != NGX_OK) {
goto failed;
}
@@ -747,7 +783,7 @@
exit(1);
}

- n = 10;
+ n = 10 * num_dup_sockets;
ngx_old_cycles.elts = ngx_pcalloc(ngx_temp_pool,
n * sizeof(ngx_cycle_t *));
if (ngx_old_cycles.elts == NULL) {

diff -r 6edcb183e62d -r d9c7259d275d ngx_cycle.h
--- a/ngx_cycle.h Fri Aug 15 16:25:32 2014 -0700
+++ b/ngx_cycle.h Fri Aug 15 16:26:50 2014 -0700
@@ -136,6 +136,7 @@
extern ngx_module_t ngx_core_module;
extern ngx_uint_t ngx_test_config;
extern ngx_uint_t ngx_quiet_mode;
+extern ngx_uint_t so_reuseport_enabled;
#if (NGX_THREADS)
extern ngx_tls_key_t ngx_core_tls_key;
#endif

1. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
