Channel: Nginx Forum

Re: newbie: nginx rtmp module

Grrr that swift keyboard. There is no space before the capital V.

nginx -V

I'd be surprised if that command doesn't work now. Any reason you haven't upgraded to CentOS 7?



  Original Message  
From: nginx-forum@forum.nginx.org
Sent: March 7, 2018 1:53 AM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: newbie: nginx rtmp module

Thank you for your feedback, gariac.

# nginx - V
nginx: invalid option: "V"

I think this may be because I have the 'yum install' version of nginx and
not the tarball. TIA for any further ideas.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278950,278952#msg-278952

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Routing based on ALPN

> below is the initial version of the patch that creates the
> "$ssl_preread_alpn_protocols" variable; the content is a comma-separated
> list of the protocols sent by the client in the ALPN extension, if present.
>
> Any feedback is appreciated.
>

I have just tested this patch and can confirm it's working perfectly fine.

The patch was applied against this commit: https://github.com/nginx/nginx/commit/83dceda8688fcba6da9fd12f6480606563d7b7a3
And I was using LibreSSL.

I've set up three upstream servers for tests, two using node.js (HTTPS) and one Prosody (XMPP server):

map $ssl_preread_alpn_protocols $upstream {
    default       node1;
    "h2,http/1.1" node2;
    "xmpp-client" prosody;
}
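
For context, a minimal stream configuration in which such a map would live might look like the sketch below. The upstream definitions and ports are my assumptions for illustration, not part of Wiktor's message; only the map itself is from the test setup:

```
stream {
    upstream node1   { server 127.0.0.1:8441; }
    upstream node2   { server 127.0.0.1:8442; }
    upstream prosody { server 127.0.0.1:5223; }

    map $ssl_preread_alpn_protocols $upstream {
        default       node1;
        "h2,http/1.1" node2;
        "xmpp-client" prosody;
    }

    server {
        listen 443;
        ssl_preread on;        # required so the $ssl_preread_* variables are filled in
        proxy_pass $upstream;  # route by the ALPN protocol list sent by the client
    }
}
```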

Curling with no ALPN correctly returns answer from node1:

> curl -k -i --no-alpn https://docker.local
HTTP/1.1 200 OK
Date: Wed, 07 Mar 2018 11:24:26 GMT
Connection: keep-alive
Content-Length: 23

Everything works: node1

Curling with default configuration (ALPN: h2,http/1.1) also works:

> curl -k -i https://docker.local
HTTP/1.1 200 OK
Date: Wed, 07 Mar 2018 11:24:43 GMT
Connection: keep-alive
Content-Length: 23

Everything works: node2

Then I tested XMPP by adding an SRV record:

> dig _xmpps-client._tcp.testing.metacode.biz SRV
;; ANSWER SECTION:
_xmpps-client._tcp.testing.metacode.biz. 119 IN SRV 1 1 443 docker.local.

And using Gajim to connect to testing.metacode.biz. It worked.

Nginx (web_1) logs correctly show all connection attempts with ALPN values:

prosody_1 | c2s2564890 info Client connected
web_1 | 192.168.99.1 xmpp-client [07/Mar/2018:11:21:58 +0000] TCP 200 2335 871 1.566
web_1 | 192.168.99.1 [07/Mar/2018:11:24:26 +0000] TCP 200 1546 327 0.298
web_1 | 192.168.99.1 h2,http/1.1 [07/Mar/2018:11:24:35 +0000] TCP 200 1539 262 0.324
web_1 | 192.168.99.1 h2,http/1.1 [07/Mar/2018:11:24:43 +0000] TCP 200 1539 262 0.293
prosody_1 | c2s2564890 info Authenticated as wiktor@testing.metacode.biz

I've used this log_format:

log_format basic '$remote_addr $ssl_preread_alpn_protocols [$time_local] '
                 '$protocol $status $bytes_sent $bytes_received '
                 '$session_time';

This looks *very good*, thanks for your time!

Kind regards,
Wiktor

--
*/metacode/*

Re: Routing based on ALPN

On 07/03/2018 14:38, Wiktor Kwapisiewicz via nginx wrote:
[...]
> This looks *very good*, thanks for your time!

Thanks for your testing, Wiktor.

--
Maxim Konovalov

[nginx] Improved code readablity.

details: http://hg.nginx.org/nginx/rev/0b1eb40de6da
branches:
changeset: 7226:0b1eb40de6da
user: Ruslan Ermilov <ru@nginx.com>
date: Wed Mar 07 18:28:12 2018 +0300
description:
Improved code readablity.

No functional changes.

diffstat:

 src/http/ngx_http_variables.c     |  8 ++++++--
 src/stream/ngx_stream_variables.c |  8 ++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diffs (50 lines):

diff -r e80930e5e422 -r 0b1eb40de6da src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c	Mon Mar 05 21:35:13 2018 +0300
+++ b/src/http/ngx_http_variables.c	Wed Mar 07 18:28:12 2018 +0300
@@ -429,7 +429,9 @@ ngx_http_add_variable(ngx_conf_t *cf, ng
         return NULL;
     }
 
-    v->flags &= flags | ~NGX_HTTP_VAR_WEAK;
+    if (!(flags & NGX_HTTP_VAR_WEAK)) {
+        v->flags &= ~NGX_HTTP_VAR_WEAK;
+    }
 
     return v;
 }
@@ -494,7 +496,9 @@ ngx_http_add_prefix_variable(ngx_conf_t
         return NULL;
     }
 
-    v->flags &= flags | ~NGX_HTTP_VAR_WEAK;
+    if (!(flags & NGX_HTTP_VAR_WEAK)) {
+        v->flags &= ~NGX_HTTP_VAR_WEAK;
+    }
 
     return v;
 }
diff -r e80930e5e422 -r 0b1eb40de6da src/stream/ngx_stream_variables.c
--- a/src/stream/ngx_stream_variables.c	Mon Mar 05 21:35:13 2018 +0300
+++ b/src/stream/ngx_stream_variables.c	Wed Mar 07 18:28:12 2018 +0300
@@ -161,7 +161,9 @@ ngx_stream_add_variable(ngx_conf_t *cf,
         return NULL;
     }
 
-    v->flags &= flags | ~NGX_STREAM_VAR_WEAK;
+    if (!(flags & NGX_STREAM_VAR_WEAK)) {
+        v->flags &= ~NGX_STREAM_VAR_WEAK;
+    }
 
     return v;
 }
@@ -227,7 +229,9 @@ ngx_stream_add_prefix_variable(ngx_conf_
         return NULL;
     }
 
-    v->flags &= flags | ~NGX_STREAM_VAR_WEAK;
+    if (!(flags & NGX_STREAM_VAR_WEAK)) {
+        v->flags &= ~NGX_STREAM_VAR_WEAK;
+    }
 
     return v;
 }
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

location blocks, and if conditions in server context

Hi guys,

I have a few hundred nginx zones, where I try to remove as much duplicate configuration as possible and inherit as much as possible, both to keep nginx's memory usage down and to keep things clean.

However, I came across something today that I don't know how to get around without duplicating code, even within a single server context.

I have a set of distributed nginx servers that all require SSL certificates, which I obtain with Let's Encrypt.
The Let's Encrypt validation uses a path such as /.well-known/acme-challenge/<hash>

For this, I made a location block such as:

location ~* /.well-known {
    proxy_pass http://letsencrypt.validation.backend.com$request_uri;
}

Basically, I proxy_pass to the backend where I actually run the acme client – works great.

However, I have an option to force a redirect from http to https, and I’ve implemented that by doing an if condition on the server block level (so not within a location):

if ($sslproxy_protocol = "http") {
    return 301 https://$host$request_uri;
}

This means I have something like:

1: location ~* /.well-known
2: if condition doing redirect if protocol is http
3: location /
4: location /api
5: location /test

All my templates include 1 to 3, and *might* have additional locations.
I've decided not to nest e.g. location /api inside location /, because there are things I don't want to inherit, so I keep them at the same "level" instead of nesting one location context inside another.
The things I don't want inherited are headers, the max_ranges directive, and so on.

My issue is that because this if condition sits at server level, it also applies to my location ~* /.well-known, causing a redirect there too. I want to prevent this, since it breaks the Let's Encrypt validation (they do not accept 301 redirects).

A solution would be to move the if condition into each location block that should redirect, but then I start repeating myself 1, 2 or even 10 times, which I'd rather not do.

Is there a smart way to do this without adding too much complexity, that is still fast (I know "if is evil")?

A config example is seen below:

server {
    listen 80;
    listen 443 ssl http2;

    server_name secure.domain.com;

    access_log /var/log/nginx/secure.domain.com main;

    location ~* /.well-known {
        proxy_pass http://letsencrypt.validation.backend.com$request_uri;
    }

    if ($sslproxy_protocol = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        expires 10m;
        etag off;

        proxy_ignore_client_abort on;
        proxy_intercept_errors on;
        proxy_next_upstream error timeout invalid_header;
        proxy_ignore_headers Set-Cookie Vary X-Accel-Expires Expires Cache-Control;
        more_clear_headers Set-Cookie Cookie Upgrade;

        proxy_cache one;
        proxy_cache_min_uses 1;
        proxy_cache_lock off;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

        proxy_cache_valid 200 10m;
        proxy_cache_valid any 1m;

        proxy_cache_revalidate on;
        proxy_ssl_server_name on;

        include /etc/nginx/server.conf;

        proxy_set_header Host backend-host.com;

        proxy_cache_key "http://backend-host.com-1-$request_uri";
        proxy_pass http://backend-host.com$request_uri;

        proxy_redirect off;
    }
}

Thank you in advance!

Best Regards,
Lucas Rolff

Re: Proxying SSL traffic for clients with certificate-based authentication

I decided to keep it simple: certificate-based authentication on the first nginx server, and from there on no SSL, but over a private VPN channel. That setup works.

Re: location blocks, and if conditions in server context

I agree that avoiding if is a good thing. But avoiding duplication isn’t always good.

Have you considered a model where your configuration file is generated with a templating engine? The input file that you modify to add/remove/change configurations could be free of duplication, while the conf file that nginx reads could be concrete and verbose.

Sent from my iPhone

> On Mar 7, 2018, at 11:55, Lucas Rolff <lucas@lucasrolff.com> wrote:
>
> [...]

Re: location blocks, and if conditions in server context

Hi Peter,

I already generate the configs using a template engine (more specifically, Laravel Blade), so adding this to the templates would be easy. However, I generally don't like server blocks that grow to hundreds of lines because of repetition.

I don't know nginx's internals fully, or how it stores configuration in memory, but I would assume that inheritance is better than duplication in terms of memory usage.

I’m just wondering if there’s a way I can avoid the if condition within the location blocks.

- lucas
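
[A commonly suggested restructuring, added here as a sketch only and not taken from this thread: give plain HTTP its own server block, so the redirect never competes with the ACME location and no if is needed. This assumes TLS is terminated by this nginx, which may not hold for Lucas's $sslproxy_protocol setup; hostnames are the placeholders from the earlier config.]

server {
    listen 80;
    server_name secure.domain.com;

    # ACME challenges are answered over plain HTTP...
    location ^~ /.well-known/acme-challenge/ {
        proxy_pass http://letsencrypt.validation.backend.com$request_uri;
    }

    # ...everything else on port 80 is redirected.
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name secure.domain.com;

    # certificates, caching and proxy_pass as in the original config
}

With this layout, only the small port-80 block is duplicated across templates, and no location needs its own redirect condition.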

________________________________
From: nginx <nginx-bounces@nginx.org> on behalf of Peter Booth <peter_booth@me.com>
Sent: Wednesday, March 7, 2018 11:08:40 PM
To: nginx@nginx.org
Subject: Re: location blocks, and if conditions in server context

[...]

Re: [PATCH] HTTP/2: make http2 server support http1

Here is a simpler patch; the temporary buffer is now also freed.

Please share your comments. Thanks.


diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
index 89cfe77a..d97952bc 100644
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -17,6 +17,10 @@ static ssize_t ngx_http_read_request_header(ngx_http_request_t *r);
static ngx_int_t ngx_http_alloc_large_header_buffer(ngx_http_request_t *r,
ngx_uint_t request_line);

+#if (NGX_HTTP_V2)
+static void ngx_http_wait_v2_preface_handler(ngx_event_t *rev);
+#endif
+
static ngx_int_t ngx_http_process_header_line(ngx_http_request_t *r,
ngx_table_elt_t *h, ngx_uint_t offset);
static ngx_int_t ngx_http_process_unique_header_line(ngx_http_request_t *r,
@@ -321,7 +325,7 @@ ngx_http_init_connection(ngx_connection_t *c)

#if (NGX_HTTP_V2)
if (hc->addr_conf->http2) {
- rev->handler = ngx_http_v2_init;
+ rev->handler = ngx_http_wait_v2_preface_handler;
}
#endif

@@ -377,6 +381,110 @@ ngx_http_init_connection(ngx_connection_t *c)
}


+#if (NGX_HTTP_V2)
+static void
+ngx_http_wait_v2_preface_handler(ngx_event_t *rev)
+{
+ size_t size;
+ ssize_t n;
+ ngx_buf_t *b;
+ ngx_connection_t *c;
+ static const u_char preface[] = "PRI";
+
+ c = rev->data;
+ size = sizeof(preface) - 1;
+
+ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
+ "http wait h2 preface handler");
+
+ if (rev->timedout) {
+ ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ if (c->close) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ b = c->buffer;
+
+ if (b == NULL) {
+ b = ngx_create_temp_buf(c->pool, size);
+ if (b == NULL) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ c->buffer = b;
+
+ } else if (b->start == NULL) {
+
+ b->start = ngx_palloc(c->pool, size);
+ if (b->start == NULL) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ b->pos = b->start;
+ b->last = b->start;
+ b->end = b->last + size;
+ }
+
+ n = c->recv(c, b->last, size);
+
+ if (n == NGX_AGAIN) {
+
+ if (!rev->timer_set) {
+ ngx_add_timer(rev, c->listening->post_accept_timeout);
+ ngx_reusable_connection(c, 1);
+ }
+
+ if (ngx_handle_read_event(rev, 0) != NGX_OK) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ /*
+ * We are trying to not hold c->buffer's memory for an idle connection.
+ */
+
+ if (ngx_pfree(c->pool, b->start) == NGX_OK) {
+ b->start = NULL;
+ }
+
+ return;
+ }
+
+ if (n == NGX_ERROR) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ if (n == 0) {
+ ngx_log_error(NGX_LOG_INFO, c->log, 0,
+ "client closed connection");
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ b->last += n;
+
+ if (b->last == b->end) {
+ /* b will be freed in ngx_http_v2_init/ngx_http_wait_request_handler */
+
+ if (ngx_strncmp(b->start, preface, size) == 0) {
+ ngx_http_v2_init(rev);
+ } else {
+ rev->handler = ngx_http_wait_request_handler;
+ ngx_http_wait_request_handler(rev);
+ }
+ }
+}
+#endif
+
+
static void
ngx_http_wait_request_handler(ngx_event_t *rev)
{
@@ -430,6 +538,22 @@ ngx_http_wait_request_handler(ngx_event_t *rev)
b->pos = b->start;
b->last = b->start;
b->end = b->last + size;
+ } else {
+
+ p = ngx_palloc(c->pool, size);
+ if (p == NULL) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ n = b->last - b->start;
+ ngx_memcpy(p, b->start, n);
+ ngx_pfree(c->pool, b->start);
+
+ b->start = p;
+ b->pos = b->start;
+ b->last = b->start + n;
+ b->end = b->last + size;
}

n = c->recv(c, b->last, size);
diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c
index d9df0f90..e36bf382 100644
--- a/src/http/v2/ngx_http_v2.c
+++ b/src/http/v2/ngx_http_v2.c
@@ -231,6 +231,8 @@ static ngx_http_v2_parse_header_t ngx_http_v2_parse_headers[] = {
void
ngx_http_v2_init(ngx_event_t *rev)
{
+ size_t size;
+ ngx_buf_t *b;
ngx_connection_t *c;
ngx_pool_cleanup_t *cln;
ngx_http_connection_t *hc;
@@ -262,6 +264,23 @@ ngx_http_v2_init(ngx_event_t *rev)
return;
}

+ b = c->buffer;
+
+ if (b != NULL) {
+ size = b->last - b->start;
+
+ if (size > h2mcf->recv_buffer_size) {
+ size = h2mcf->recv_buffer_size;
+ }
+
+ ngx_memcpy(h2mcf->recv_buffer, b->start, size);
+ h2c->state.buffer_used = size;
+
+ ngx_pfree(c->pool, b->start);
+ ngx_pfree(c->pool, b);
+ c->buffer = NULL;
+ }
+
h2c->connection = c;
h2c->http_connection = hc;

@@ -381,13 +400,15 @@ ngx_http_v2_read_handler(ngx_event_t *rev)
h2mcf = ngx_http_get_module_main_conf(h2c->http_connection->conf_ctx,
ngx_http_v2_module);

- available = h2mcf->recv_buffer_size - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
+ available = h2mcf->recv_buffer_size - h2c->state.buffer_used - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;

do {
p = h2mcf->recv_buffer;

- ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
end = p + h2c->state.buffer_used;
+ if (h2c->state.buffer_used == 0) {
+ ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
+ }

n = c->recv(c, end, available);



> On Mar 6, 2018, at 10:19, Haitao Lv <i@lvht.net> wrote:
>
> Hello, here is another (simpler) patch, following Maxim Dounin's advice.
>
> Please offer your comments. Thanks.
>
> diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
> index 89cfe77a..71bc7b59 100644
> --- a/src/http/ngx_http_request.c
> +++ b/src/http/ngx_http_request.c
> @@ -17,6 +17,10 @@ static ssize_t ngx_http_read_request_header(ngx_http_request_t *r);
> static ngx_int_t ngx_http_alloc_large_header_buffer(ngx_http_request_t *r,
> ngx_uint_t request_line);
>
> +#if (NGX_HTTP_V2)
> +static void ngx_http_wait_v2_preface_handler(ngx_event_t *rev);
> +#endif
> +
> static ngx_int_t ngx_http_process_header_line(ngx_http_request_t *r,
> ngx_table_elt_t *h, ngx_uint_t offset);
> static ngx_int_t ngx_http_process_unique_header_line(ngx_http_request_t *r,
> @@ -321,7 +325,7 @@ ngx_http_init_connection(ngx_connection_t *c)
>
> #if (NGX_HTTP_V2)
> if (hc->addr_conf->http2) {
> - rev->handler = ngx_http_v2_init;
> + rev->handler = ngx_http_wait_v2_preface_handler;
> }
> #endif
>
> @@ -377,6 +381,108 @@ ngx_http_init_connection(ngx_connection_t *c)
> }
>
>
> +#if (NGX_HTTP_V2)
> +static void
> +ngx_http_wait_v2_preface_handler(ngx_event_t *rev)
> +{
> + size_t size;
> + ssize_t n;
> + ngx_buf_t *b;
> + ngx_connection_t *c;
> +
> + c = rev->data;
> +
> + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
> + "http wait h2 preface handler");
> +
> + if (rev->timedout) {
> + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + if (c->close) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + size = 5 /* strlen("PRI *") */;
> +
> + b = c->buffer;
> +
> + if (b == NULL) {
> + b = ngx_create_temp_buf(c->pool, size);
> + if (b == NULL) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + c->buffer = b;
> +
> + } else if (b->start == NULL) {
> +
> + b->start = ngx_palloc(c->pool, size);
> + if (b->start == NULL) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + b->pos = b->start;
> + b->last = b->start;
> + b->end = b->last + size;
> + }
> +
> + n = c->recv(c, b->last, size);
> +
> + if (n == NGX_AGAIN) {
> +
> + if (!rev->timer_set) {
> + ngx_add_timer(rev, c->listening->post_accept_timeout);
> + ngx_reusable_connection(c, 1);
> + }
> +
> + if (ngx_handle_read_event(rev, 0) != NGX_OK) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + /*
> + * We are trying to not hold c->buffer's memory for an idle connection.
> + */
> +
> + if (ngx_pfree(c->pool, b->start) == NGX_OK) {
> + b->start = NULL;
> + }
> +
> + return;
> + }
> +
> + if (n == NGX_ERROR) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + if (n == 0) {
> + ngx_log_error(NGX_LOG_INFO, c->log, 0,
> + "client closed connection");
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + b->last += n;
> +
> + if (b->last == b->end) {
> + if (ngx_strncmp(b->start, "PRI *", 5) == 0) {
> + ngx_http_v2_init_with_buf(rev, b);
> + } else {
> + rev->handler = ngx_http_wait_request_handler;
> + ngx_http_wait_request_handler(rev);
> + }
> + }
> +}
> +#endif
> +
> +
> static void
> ngx_http_wait_request_handler(ngx_event_t *rev)
> {
> @@ -430,6 +536,21 @@ ngx_http_wait_request_handler(ngx_event_t *rev)
> b->pos = b->start;
> b->last = b->start;
> b->end = b->last + size;
> + } else {
> +
> + p = ngx_palloc(c->pool, size);
> + if (p == NULL) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + n = b->last - b->start;
> + ngx_memcpy(p, b->start, n);
> +
> + b->start = p;
> + b->pos = b->start;
> + b->last = b->start + n;
> + b->end = b->last + size;
> }
>
> n = c->recv(c, b->last, size);
> diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c
> index d9df0f90..a990c96f 100644
> --- a/src/http/v2/ngx_http_v2.c
> +++ b/src/http/v2/ngx_http_v2.c
> @@ -231,6 +231,14 @@ static ngx_http_v2_parse_header_t ngx_http_v2_parse_headers[] = {
> void
> ngx_http_v2_init(ngx_event_t *rev)
> {
> + ngx_http_v2_init_with_buf(rev, NULL);
> +}
> +
> +
> +void
> +ngx_http_v2_init_with_buf(ngx_event_t *rev, ngx_buf_t *buf)
> +{
> + size_t size;
> ngx_connection_t *c;
> ngx_pool_cleanup_t *cln;
> ngx_http_connection_t *hc;
> @@ -262,6 +270,17 @@ ngx_http_v2_init(ngx_event_t *rev)
> return;
> }
>
> + if (buf != NULL) {
> + size = buf->last - buf->start;
> +
> + if (size > h2mcf->recv_buffer_size) {
> + size = h2mcf->recv_buffer_size;
> + }
> +
> + ngx_memcpy(h2mcf->recv_buffer, buf->start, size);
> + h2c->state.buffer_used = size;
> + }
> +
> h2c->connection = c;
> h2c->http_connection = hc;
>
> @@ -381,13 +400,16 @@ ngx_http_v2_read_handler(ngx_event_t *rev)
> h2mcf = ngx_http_get_module_main_conf(h2c->http_connection->conf_ctx,
> ngx_http_v2_module);
>
> - available = h2mcf->recv_buffer_size - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
> + available = h2mcf->recv_buffer_size - h2c->state.buffer_used - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
>
> do {
> p = h2mcf->recv_buffer;
>
> - ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
> end = p + h2c->state.buffer_used;
> + if (h2c->state.buffer_used == 0) {
> + ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
> + }
> +
>
> n = c->recv(c, end, available);
>
> diff --git a/src/http/v2/ngx_http_v2.h b/src/http/v2/ngx_http_v2.h
> index d89e8fef..7b223b29 100644
> --- a/src/http/v2/ngx_http_v2.h
> +++ b/src/http/v2/ngx_http_v2.h
> @@ -279,6 +279,7 @@ ngx_http_v2_queue_ordered_frame(ngx_http_v2_connection_t *h2c,
>
>
> void ngx_http_v2_init(ngx_event_t *rev);
> +void ngx_http_v2_init_with_buf(ngx_event_t *rev, ngx_buf_t *buf);
>
> ngx_int_t ngx_http_v2_read_request_body(ngx_http_request_t *r);
> ngx_int_t ngx_http_v2_read_unbuffered_request_body(ngx_http_request_t *r);
>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel




Re: [nginx] Improved code readablity.

I'm a little bit confused about why the code before and after the change is equivalent;
AFAICS, the changed code removes the effect of 'flags'?

Can you explain a little, since I have just begun to read the code base? :)

2018-03-07 23:29 GMT+08:00 Ruslan Ermilov <ru@nginx.com>:
> details: http://hg.nginx.org/nginx/rev/0b1eb40de6da
> [...]

Re: [PATCH] HTTP/2: make http2 server support http1

Sorry for the noise, but I had to fix a buffer overflow bug.
Here is the latest patch.

Please share your comments. Thank you.


diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
index 89cfe77a..c51d8ace 100644
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -17,6 +17,10 @@ static ssize_t ngx_http_read_request_header(ngx_http_request_t *r);
static ngx_int_t ngx_http_alloc_large_header_buffer(ngx_http_request_t *r,
ngx_uint_t request_line);

+#if (NGX_HTTP_V2)
+static void ngx_http_wait_v2_preface_handler(ngx_event_t *rev);
+#endif
+
static ngx_int_t ngx_http_process_header_line(ngx_http_request_t *r,
ngx_table_elt_t *h, ngx_uint_t offset);
static ngx_int_t ngx_http_process_unique_header_line(ngx_http_request_t *r,
@@ -321,7 +325,7 @@ ngx_http_init_connection(ngx_connection_t *c)

#if (NGX_HTTP_V2)
if (hc->addr_conf->http2) {
- rev->handler = ngx_http_v2_init;
+ rev->handler = ngx_http_wait_v2_preface_handler;
}
#endif

@@ -377,6 +381,110 @@ ngx_http_init_connection(ngx_connection_t *c)
}


+#if (NGX_HTTP_V2)
+static void
+ngx_http_wait_v2_preface_handler(ngx_event_t *rev)
+{
+ size_t size;
+ ssize_t n;
+ ngx_buf_t *b;
+ ngx_connection_t *c;
+ static const u_char preface[] = "PRI";
+
+ c = rev->data;
+ size = sizeof(preface) - 1;
+
+ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
+ "http wait h2 preface handler");
+
+ if (rev->timedout) {
+ ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ if (c->close) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ b = c->buffer;
+
+ if (b == NULL) {
+ b = ngx_create_temp_buf(c->pool, size);
+ if (b == NULL) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ c->buffer = b;
+
+ } else if (b->start == NULL) {
+
+ b->start = ngx_palloc(c->pool, size);
+ if (b->start == NULL) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ b->pos = b->start;
+ b->last = b->start;
+ b->end = b->last + size;
+ }
+
+ n = c->recv(c, b->last, b->end - b->last);
+
+ if (n == NGX_AGAIN) {
+
+ if (!rev->timer_set) {
+ ngx_add_timer(rev, c->listening->post_accept_timeout);
+ ngx_reusable_connection(c, 1);
+ }
+
+ if (ngx_handle_read_event(rev, 0) != NGX_OK) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ /*
+ * We are trying to not hold c->buffer's memory for an idle connection.
+ */
+
+ if (ngx_pfree(c->pool, b->start) == NGX_OK) {
+ b->start = NULL;
+ }
+
+ return;
+ }
+
+ if (n == NGX_ERROR) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ if (n == 0) {
+ ngx_log_error(NGX_LOG_INFO, c->log, 0,
+ "client closed connection");
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ b->last += n;
+
+ if (b->last == b->end) {
+ /* b will be freed in ngx_http_v2_init/ngx_http_wait_request_handler */
+
+ if (ngx_strncmp(b->start, preface, size) == 0) {
+ ngx_http_v2_init(rev);
+ } else {
+ rev->handler = ngx_http_wait_request_handler;
+ ngx_http_wait_request_handler(rev);
+ }
+ }
+}
+#endif
+
+
static void
ngx_http_wait_request_handler(ngx_event_t *rev)
{
@@ -430,6 +538,22 @@ ngx_http_wait_request_handler(ngx_event_t *rev)
b->pos = b->start;
b->last = b->start;
b->end = b->last + size;
+ } else {
+
+ p = ngx_palloc(c->pool, size);
+ if (p == NULL) {
+ ngx_http_close_connection(c);
+ return;
+ }
+
+ n = b->last - b->start;
+ ngx_memcpy(p, b->start, n);
+ ngx_pfree(c->pool, b->start);
+
+ b->start = p;
+ b->pos = b->start;
+ b->last = b->start + n;
+ b->end = b->last + size;
}

n = c->recv(c, b->last, size);
diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c
index d9df0f90..e36bf382 100644
--- a/src/http/v2/ngx_http_v2.c
+++ b/src/http/v2/ngx_http_v2.c
@@ -231,6 +231,8 @@ static ngx_http_v2_parse_header_t ngx_http_v2_parse_headers[] = {
void
ngx_http_v2_init(ngx_event_t *rev)
{
+ size_t size;
+ ngx_buf_t *b;
ngx_connection_t *c;
ngx_pool_cleanup_t *cln;
ngx_http_connection_t *hc;
@@ -262,6 +264,23 @@ ngx_http_v2_init(ngx_event_t *rev)
return;
}

+ b = c->buffer;
+
+ if (b != NULL) {
+ size = b->last - b->start;
+
+ if (size > h2mcf->recv_buffer_size) {
+ size = h2mcf->recv_buffer_size;
+ }
+
+ ngx_memcpy(h2mcf->recv_buffer, b->start, size);
+ h2c->state.buffer_used = size;
+
+ ngx_pfree(c->pool, b->start);
+ ngx_pfree(c->pool, b);
+ c->buffer = NULL;
+ }
+
h2c->connection = c;
h2c->http_connection = hc;

@@ -381,13 +400,15 @@ ngx_http_v2_read_handler(ngx_event_t *rev)
h2mcf = ngx_http_get_module_main_conf(h2c->http_connection->conf_ctx,
ngx_http_v2_module);

- available = h2mcf->recv_buffer_size - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
+ available = h2mcf->recv_buffer_size - h2c->state.buffer_used - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;

do {
p = h2mcf->recv_buffer;

- ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
end = p + h2c->state.buffer_used;
+ if (h2c->state.buffer_used == 0) {
+ ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
+ }

n = c->recv(c, end, available);




> On Mar 6, 2018, at 03:14, Maxim Dounin <mdounin@mdounin.ru> wrote:
>
> Hello!
>
> On Mon, Mar 05, 2018 at 11:52:57PM +0800, Haitao Lv wrote:
>
> [...]
>
>>> Overall, the patch looks like a hack and introduces too much
>>> complexity for this feature. While I understand the reasoning,
>>> the proposed implementation cannot be accepted.
>>
>> Could you clarify that whether is this feature not accepted or this patch?
>>
>> If this feature is not needed, I will terminate this thread.
>>
>> If this patch only looks like a hack, would you like offer any advice to write
>> code with good smell?
>
> We've previously discussed this with Valentin, and our position is
> as follows:
>
> - The feature itself (autodetection between HTTP/2 and HTTP/1.x
> protocols) might be usable, and we can consider adding it if
> there will be a good and simple enough patch. (Moreover, we
> think that this probably should be the default if "listen ...
> http2" is configured - that is, no "http1" option.)
>
> - The patch suggested certainly doesn't meet the above criteria,
> and it does not look like it can be fixed.
>
> We don't know if a good and simple enough implementation is at all
> possible though. One of the possible approaches was already
> proposed by Valentin (detect HTTP/2 or HTTP/1.x before starting
> processing, may be similar to how we handle http-to-https
> requests), but it's now immediately clear if it will work or not.
> Sorry, but please don't expect any of us to provide further
> guidance.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel



_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [nginx] Improved code readablity.

On Thu, Mar 08, 2018 at 08:42:06AM +0800, Junwang Zhao wrote:
> I'm a little bit confused why the diffs are equal to each other,
> ASAICS, the changed code removed the effect of 'flags'?
>
> Can you explain a little bit since I just began to read the code base :)

"flags | ~NGX_HTTP_VAR_WEAK" would either evaluate to "all bits set"
or "all bits set except NGX_HTTP_VAR_WEAK" if this bit is not set in
"flags".

> 2018-03-07 23:29 GMT+08:00 Ruslan Ermilov <ru@nginx.com>:
> > details: http://hg.nginx.org/nginx/rev/0b1eb40de6da
> > branches:
> > changeset: 7226:0b1eb40de6da
> > user: Ruslan Ermilov <ru@nginx.com>
> > date: Wed Mar 07 18:28:12 2018 +0300
> > description:
> > Improved code readablity.
> >
> > No functional changes.
> >
> > diffstat:
> >
> > src/http/ngx_http_variables.c | 8 ++++++--
> > src/stream/ngx_stream_variables.c | 8 ++++++--
> > 2 files changed, 12 insertions(+), 4 deletions(-)
> >
> > diffs (50 lines):
> >
> > diff -r e80930e5e422 -r 0b1eb40de6da src/http/ngx_http_variables.c
> > --- a/src/http/ngx_http_variables.c Mon Mar 05 21:35:13 2018 +0300
> > +++ b/src/http/ngx_http_variables.c Wed Mar 07 18:28:12 2018 +0300
> > @@ -429,7 +429,9 @@ ngx_http_add_variable(ngx_conf_t *cf, ng
> > return NULL;
> > }
> >
> > - v->flags &= flags | ~NGX_HTTP_VAR_WEAK;
> > + if (!(flags & NGX_HTTP_VAR_WEAK)) {
> > + v->flags &= ~NGX_HTTP_VAR_WEAK;
> > + }
> >
> > return v;
> > }
> > @@ -494,7 +496,9 @@ ngx_http_add_prefix_variable(ngx_conf_t
> > return NULL;
> > }
> >
> > - v->flags &= flags | ~NGX_HTTP_VAR_WEAK;
> > + if (!(flags & NGX_HTTP_VAR_WEAK)) {
> > + v->flags &= ~NGX_HTTP_VAR_WEAK;
> > + }
> >
> > return v;
> > }
> > diff -r e80930e5e422 -r 0b1eb40de6da src/stream/ngx_stream_variables.c
> > --- a/src/stream/ngx_stream_variables.c Mon Mar 05 21:35:13 2018 +0300
> > +++ b/src/stream/ngx_stream_variables.c Wed Mar 07 18:28:12 2018 +0300
> > @@ -161,7 +161,9 @@ ngx_stream_add_variable(ngx_conf_t *cf,
> > return NULL;
> > }
> >
> > - v->flags &= flags | ~NGX_STREAM_VAR_WEAK;
> > + if (!(flags & NGX_STREAM_VAR_WEAK)) {
> > + v->flags &= ~NGX_STREAM_VAR_WEAK;
> > + }
> >
> > return v;
> > }
> > @@ -227,7 +229,9 @@ ngx_stream_add_prefix_variable(ngx_conf_
> > return NULL;
> > }
> >
> > - v->flags &= flags | ~NGX_STREAM_VAR_WEAK;
> > + if (!(flags & NGX_STREAM_VAR_WEAK)) {
> > + v->flags &= ~NGX_STREAM_VAR_WEAK;
> > + }
> >
> > return v;
> > }
> > _______________________________________________
> > nginx-devel mailing list
> > nginx-devel@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx-devel
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>

--
Ruslan Ermilov
Assume stupidity not malice
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [nginx] Improved code readablity.

Got it, thanks for your reply :)

On Thu, Mar 8, 2018 at 1:13 PM, Ruslan Ermilov <ru@nginx.com> wrote:
> On Thu, Mar 08, 2018 at 08:42:06AM +0800, Junwang Zhao wrote:
>> I'm a little bit confused why the diffs are equal to each other,
>> AFAICS, the changed code removed the effect of 'flags'?
>>
>> Can you explain a little bit since I just began to read the code base :)
>
> "flags | ~NGX_HTTP_VAR_WEAK" would either evaluate to "all bits set"
> or "all bits set except NGX_HTTP_VAR_WEAK" if this bit is not set in
> "flags".
>
>> 2018-03-07 23:29 GMT+08:00 Ruslan Ermilov <ru@nginx.com>:
>> > details: http://hg.nginx.org/nginx/rev/0b1eb40de6da
>> > branches:
>> > changeset: 7226:0b1eb40de6da
>> > user: Ruslan Ermilov <ru@nginx.com>
>> > date: Wed Mar 07 18:28:12 2018 +0300
>> > description:
>> > Improved code readablity.
>> >
>> > No functional changes.
>> >
>> > diffstat:
>> >
>> > src/http/ngx_http_variables.c | 8 ++++++--
>> > src/stream/ngx_stream_variables.c | 8 ++++++--
>> > 2 files changed, 12 insertions(+), 4 deletions(-)
>> >
>> > diffs (50 lines):
>> >
>> > diff -r e80930e5e422 -r 0b1eb40de6da src/http/ngx_http_variables.c
>> > --- a/src/http/ngx_http_variables.c Mon Mar 05 21:35:13 2018 +0300
>> > +++ b/src/http/ngx_http_variables.c Wed Mar 07 18:28:12 2018 +0300
>> > @@ -429,7 +429,9 @@ ngx_http_add_variable(ngx_conf_t *cf, ng
>> > return NULL;
>> > }
>> >
>> > - v->flags &= flags | ~NGX_HTTP_VAR_WEAK;
>> > + if (!(flags & NGX_HTTP_VAR_WEAK)) {
>> > + v->flags &= ~NGX_HTTP_VAR_WEAK;
>> > + }
>> >
>> > return v;
>> > }
>> > @@ -494,7 +496,9 @@ ngx_http_add_prefix_variable(ngx_conf_t
>> > return NULL;
>> > }
>> >
>> > - v->flags &= flags | ~NGX_HTTP_VAR_WEAK;
>> > + if (!(flags & NGX_HTTP_VAR_WEAK)) {
>> > + v->flags &= ~NGX_HTTP_VAR_WEAK;
>> > + }
>> >
>> > return v;
>> > }
>> > diff -r e80930e5e422 -r 0b1eb40de6da src/stream/ngx_stream_variables.c
>> > --- a/src/stream/ngx_stream_variables.c Mon Mar 05 21:35:13 2018 +0300
>> > +++ b/src/stream/ngx_stream_variables.c Wed Mar 07 18:28:12 2018 +0300
>> > @@ -161,7 +161,9 @@ ngx_stream_add_variable(ngx_conf_t *cf,
>> > return NULL;
>> > }
>> >
>> > - v->flags &= flags | ~NGX_STREAM_VAR_WEAK;
>> > + if (!(flags & NGX_STREAM_VAR_WEAK)) {
>> > + v->flags &= ~NGX_STREAM_VAR_WEAK;
>> > + }
>> >
>> > return v;
>> > }
>> > @@ -227,7 +229,9 @@ ngx_stream_add_prefix_variable(ngx_conf_
>> > return NULL;
>> > }
>> >
>> > - v->flags &= flags | ~NGX_STREAM_VAR_WEAK;
>> > + if (!(flags & NGX_STREAM_VAR_WEAK)) {
>> > + v->flags &= ~NGX_STREAM_VAR_WEAK;
>> > + }
>> >
>> > return v;
>> > }
>> > _______________________________________________
>> > nginx-devel mailing list
>> > nginx-devel@nginx.org
>> > http://mailman.nginx.org/mailman/listinfo/nginx-devel
>> _______________________________________________
>> nginx-devel mailing list
>> nginx-devel@nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx-devel
>>
>
> --
> Ruslan Ermilov
> Assume stupidity not malice
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Make nginx ignore unresolvable upstream server host names during reload or boot up

Hi,

I have multiple upstream servers configured in an upstream block in my nginx configuration.

upstream example2 {
    server example2.service.example.com:8001;
    server example1.service.example.com:8002;
}

server {
    listen 80;
    server_name example2.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://example2/;
    }
}

When I try to reload nginx while one of my upstream servers (say example2.service.example.com) is not DNS-resolvable, the reload fails with the error "host not found in upstream".

Is there any way to ask nginx to ignore such unresolvable host names, or to configure nginx to resolve these upstream server host names at run time instead of during boot or reload?
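
One commonly used workaround (a sketch under assumptions, not tested against this setup): move the hostname out of the upstream block into a variable and add a resolver, which defers the DNS lookup to request time, so an unresolvable name no longer fails the reload. Note that this bypasses the upstream block and its load balancing, and the resolver address below is an assumption:

```nginx
server {
    listen 80;
    server_name example2.com;

    resolver 127.0.0.1 valid=30s;  # assumed local resolver; adjust to your DNS

    location / {
        set $backend "example2.service.example.com:8001";
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://$backend;  # name is resolved at request time
    }
}
```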

Re: location blocks, and if conditions in server context

On Wed, Mar 07, 2018 at 04:55:15PM +0000, Lucas Rolff wrote:

Hi there,

> This means I have something like:
>
> 1: location ~* /.well-known
> 2: if condition doing redirect if protocol is http
> 3: location /
> 4: location /api
> 5: location /test
>
> All my templates include 1 to 3, and *might* have additional locations.

> My issue is – because of this if condition that does the redirect to https – it also applies to my location ~* /.well-known – thus causing a redirect, and I want to prevent this, since it breaks the Let’s Encrypt validation (they do not accept 301 redirects).

> Is there a smart way without adding too much complexity, which is still super-fast (I know if is evil) ?

As phrased, I think the short answer to your question is "no".

However...

You optionally redirect things from http to https. Is that "you want
to redirect *everything* from http to https, apart from the letsencrypt
thing"? If so, you could potentially have just one

server {
    listen 80;
    location / { return 301 https://$host$uri; }
    location /.well-known/ { proxy_pass http://letsencrypt.validation.backend.com; }
}

and a bunch of

server {
    listen 443;
}

blocks.

Or: you use $sslproxy_protocol. Where does that come from?

If it is a thing that you create to decide whether or not to redirect
to https, then could you include a check for whether the request starts
with /.well-known/, and if so set it to something other than "http"?

f
--
Francis Daly francis@daoine.org
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: location blocks, and if conditions in server context

Hi Francis,

I indeed thought about having a separate server {} block in case there’s the http to https redirect for a specific domain.
Since it depends on the domain, I can’t make a general one to match everything.

> Or: you use $sslproxy_protocol. Where does that come from?

$sslproxy_protocol is a simple map doing:

map $https $sslproxy_protocol {
    default "http";
    SSL     "https";
    on      "https";
}
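
One way to realize Francis's suggestion inside that same map is to fold the URI into the map source, so ACME requests never evaluate to "http" (a sketch; the "acme" marker value, and the assumption that the redirect fires only when the value is exactly "http", are mine):

```nginx
map "$https$uri" $sslproxy_protocol {
    default             "http";
    "~^on"              "https";  # $https is "on" for TLS requests
    "~^/\.well-known/"  "acme";   # $uri starts with "/", so this only matches plain-http ACME requests
}
```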

Best Regards,
Lucas Rolff

On 08/03/2018, 09.44, "nginx on behalf of Francis Daly" <nginx-bounces@nginx.org on behalf of francis@daoine.org> wrote:

On Wed, Mar 07, 2018 at 04:55:15PM +0000, Lucas Rolff wrote:

Hi there,

> This means I have something like:
>
> 1: location ~* /.well-known
> 2: if condition doing redirect if protocol is http
> 3: location /
> 4: location /api
> 5: location /test
>
> All my templates include 1 to 3, and *might* have additional locations.

> My issue is – because of this if condition that does the redirect to https – it also applies to my location ~* /.well-known – thus causing a redirect, and I want to prevent this, since it breaks the Let’s Encrypt validation (they do not accept 301 redirects).

> Is there a smart way without adding too much complexity, which is still super-fast (I know if is evil) ?

As phrased, I think the short answer to your question is "no".

However...

You optionally redirect things from http to https. Is that "you want
to redirect *everything* from http to https, apart from the letsencrypt
thing"? If so, you could potentially have just one

server {
listen 80;
location / { return 301 https://$host$uri; }
location /.well-known/ { proxy_pass http://letsencrypt.validation.backend.com; }
}

and a bunch of

server {
listen 443;
}

blocks.

Or: you use $sslproxy_protocol. Where does that come from?

If it is a thing that you create to decide whether or not to redirect
to https, then could you include a check for whether the request starts
with /.well-known/, and if so set it to something other than "http"?

f
--
Francis Daly francis@daoine.org
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: newbie: nginx rtmp module

Thank you for that.

------------------------
# nginx -V
nginx version: nginx/1.10.2
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E'
----------------------------

Hmmm... I can't see rtmp in there anywhere.
I'm running nginx on a CentOS 6 VPS and it's taken me ages to get the system up and running properly, so it's a case of 'if it ain't broke, don't fix it'. Although you've got me thinking now...
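
A quick way to check the configure arguments for the module without eyeballing them (a sketch: the sample string below stands in for the live output of `nginx -V`, which is written to stderr, hence the `2>&1` in the comment):

```shell
# On a real host, pipe the live output instead:
#     nginx -V 2>&1 | grep rtmp
# Here a captured configure-arguments string stands in for it.
configure_args="--prefix=/usr/share/nginx --with-http_ssl_module"

if printf '%s\n' "$configure_args" | grep -q 'rtmp'; then
    echo "rtmp module present"
else
    echo "rtmp module not built in"
fi
```

The rtmp module is third-party, so when it is compiled in it normally shows up as an `--add-module=...` (or `--add-dynamic-module=...`) configure argument; the yum-packaged build above has no such argument, which matches the output shown.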

ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session

Using NGINX 1.12.2 on MIPS (haven't tested on x86), if I set:

ssl_session_cache shared:SSL:1m; # it also fails with 10m


then when the client reestablishes the connection, it gets net::ERR_SSL_BAD_RECORD_MAC_ALERT while trying to reuse the SSL session.

Has anyone seen anything like this?


More detail:

This was tested on 1.12.2, on a MIPS CPU, using OpenSSL 1.0.2j, and built
by gcc 4.8.3 (OpenWrt/Linaro GCC 4.8-2014.04 r47070).

Interesting portion of my configuration file:

server {
    listen 443 ssl;

    ssl_certificate     /etc/ssl/certs/bridge.cert.pem;
    ssl_certificate_key /etc/ssl/private/bridge.key.pem;

    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256;
    ssl_ecdh_curve prime256v1;

    ssl_session_timeout 24h;
    ssl_session_tickets on;
    ssl_session_cache shared:SSL:1m; # set to 10m, still fails; remove it and the problem seems to disappear

    keepalive_timeout 1s;  # reduced during troubleshooting to make it trigger easily
    keepalive_requests 1;  # reduced during troubleshooting to make it trigger easily

    include apiv1.conf; # where all the location rules are
}
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

nginx + php-fpm: REQUEST_URI disappears for files that end with .php

Hello,

I have the following nginx + php-fpm configuration, but for some reason
files that end with .php are missing REQUEST_URI when they arrive at php-fpm.

For instance:

https://n.example.com/audio/radio/ -> array(1)
{ ["REQUEST_URI"]=> string(15) "/audio/radio/" }
https://n.example.com/rus_example.html -> array(1)
{ ["REQUEST_URI"]=> string(15) "rus_example.html" }
https://n.example.com/rus_example.php -> array(0) { }


What is wrong?
Thank you!

Here is my configuration:

location / {
    try_files $uri $uri/ @netcat-rewrite;
}

location @netcat-rewrite {
    rewrite ^/(.*)$ /netcat/require/e404.php?REQUEST_URI=$1 last;
}

error_page 404 = /netcat/require/e404.php;

location ~ \.php$ {
    if ($args ~ "netcat_files/") {
        expires 7d;
        add_header Cache-Control "public";
    }

    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    try_files $uri =404;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param DOCUMENT_URI $document_uri;
    include fastcgi_params;
}


PHP-FPM log:

no .php file:

08/Мар/2018:13:44:13 +0200
"GET /netcat/require/e404.php?REQUEST_URI=audio/radio/" 200

.php file:

08/Мар/2018:13:44:14 +0200 "GET /netcat/require/e404.php" 404
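
A plausible explanation (an assumption drawn from the configuration above, not verified against this setup): for missing .html files the request falls through `location /` into `@netcat-rewrite`, which appends the `REQUEST_URI=` query argument, but for missing .php files the `try_files $uri =404;` inside the PHP location returns a bare 404, so `error_page` redirects to e404.php with no query arguments at all. One sketch of a fix is to route missing .php files through the same named location:

```nginx
location ~ \.php$ {
    # Instead of "try_files $uri =404;", fall back to the named location
    # that appends the REQUEST_URI query argument (other directives unchanged):
    try_files $uri @netcat-rewrite;
}
```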

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx