Channel: Nginx Forum

Re: [PATCH 1 of 3] HTTP: add support for trailers in HTTP responses

Hey Maxim,

> I see two problems here:
>
> a. There may be use cases when forcing chunked encoding is not
> desired, but emitting trailers if it is used still makes sense.

Like what, exactly?

Also, the gzip module forces chunked encoding and it works just fine. I
don't see why you're making such a big deal out of this.

> b. Nothing stops modules from changing r->expect_trailers when the
> response header was already sent and it is already too late to
> switch to chunked transfer encoding. Moreover, this will
> naturally happen with any module which is simply following the
> requirement to set r->expect_trailers to 1 as in your commit log.

The same is true for the majority of ngx_http_request_t fields, i.e. bad
things can happen if some module misuses them.

> So (a) makes (2) excessively limiting, and (b) makes it useless.

I disagree. Removing this check results in less consistent behavior
and doesn't solve any real problems.

Having said that, I'm going to remove it, since I don't want to spend
another few months arguing about this...

Best regards,
Piotr Sikora
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [PATCH 1 of 3] Added support for trailers in HTTP responses

Hey Maxim,

> Note: the "TE: trailers" requirement is no longer present in the
> code.

Good catch, thanks!

> This code results in using chunked encoding for HTTP/1.0 when
> trailers are expected. Such behaviour is explicitly forbidden by
> the HTTP/1.1 specification, and will very likely result in
> problems (we've seen lots of such problems with broken backends
> when there were no HTTP/1.1 support in the proxy module).

Oops, this regression is a result of the removal of r->accept_trailers,
which previously disallowed trailers in HTTP/1.0 requests.

> Something like this should be a better solution:
>
> if (r->headers_out.content_length_n == -1
> || r->expect_trailers)
> {
> clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
>
> if (r->http_version >= NGX_HTTP_VERSION_11
> && clcf->chunked_transfer_encoding)
> {
> if (r->expect_trailers) {
> ngx_http_clear_content_length(r);
> }
>
> r->chunked = 1;
>
> ctx = ngx_pcalloc(r->pool,
> sizeof(ngx_http_chunked_filter_ctx_t));
> if (ctx == NULL) {
> return NGX_ERROR;
> }
>
> ngx_http_set_ctx(r, ctx, ngx_http_chunked_filter_module);
>
> } else if (r->headers_out.content_length_n == -1) {
> r->keepalive = 0;
> }
> }

Applied with small style changes.

> Instead of providing two separate code paths for "with trailer
> headers" and "without trailer headers", it might be better and
> more readable to generate last-chunk in one function regardless of
> whether trailer headers are present or not.
>
> It will also make error handling better: as of now, an allocation
> error in ngx_http_chunked_create_trailers() will result in "no
> trailers" code path being tried instead of returning an
> unconditional error.

Done.

> There is no need to write sizeof() so many times, just
>
> len += sizeof(CRLF "0" CRLF CRLF) - 1;
>
> would be enough.

Done.

Best regards,
Piotr Sikora
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[PATCH 1 of 3] Added support for trailers in HTTP responses

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1490351854 25200
# Fri Mar 24 03:37:34 2017 -0700
# Node ID 41c09a2fd90410e25ad8515793bd48028001c954
# Parent 716852cce9136d977b81a2d1b8b6f9fbca0dce49
Added support for trailers in HTTP responses.

Example:

ngx_table_elt_t *h;

h = ngx_list_push(&r->headers_out.trailers);
if (h == NULL) {
return NGX_ERROR;
}

ngx_str_set(&h->key, "Fun");
ngx_str_set(&h->value, "with trailers");
h->hash = ngx_hash_key_lc(h->key.data, h->key.len);

The code above adds the "Fun: with trailers" trailer to the response.

Modules that want to emit trailers must set r->expect_trailers = 1 in
their header filter, otherwise the trailers might not be emitted for
HTTP/1.1 responses that aren't already chunked.

This change also adds $sent_trailer_* variables.
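
For illustration only (not part of this patch): a minimal configuration
sketch of how such a variable could be used, assuming the "Fun" trailer
from the example above; like $sent_http_*, the variable name is the
lowercased trailer name with dashes mapped to underscores, and the log
format name here is arbitrary:

log_format with_trailers '$remote_addr "$request" $status $sent_trailer_fun';
access_log /var/log/nginx/access.log with_trailers;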

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 716852cce913 -r 41c09a2fd904 src/http/modules/ngx_http_chunked_filter_module.c
--- a/src/http/modules/ngx_http_chunked_filter_module.c
+++ b/src/http/modules/ngx_http_chunked_filter_module.c
@@ -17,6 +17,7 @@ typedef struct {


static ngx_int_t ngx_http_chunked_filter_init(ngx_conf_t *cf);
+static ngx_chain_t *ngx_http_chunked_create_trailers(ngx_http_request_t *r);


static ngx_http_module_t ngx_http_chunked_filter_module_ctx = {
@@ -69,27 +70,28 @@ ngx_http_chunked_header_filter(ngx_http_
return ngx_http_next_header_filter(r);
}

- if (r->headers_out.content_length_n == -1) {
- if (r->http_version < NGX_HTTP_VERSION_11) {
+ if (r->headers_out.content_length_n == -1 || r->expect_trailers) {
+
+ clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
+
+ if (r->http_version >= NGX_HTTP_VERSION_11
+ && clcf->chunked_transfer_encoding)
+ {
+ if (r->expect_trailers) {
+ ngx_http_clear_content_length(r);
+ }
+
+ r->chunked = 1;
+
+ ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_chunked_filter_ctx_t));
+ if (ctx == NULL) {
+ return NGX_ERROR;
+ }
+
+ ngx_http_set_ctx(r, ctx, ngx_http_chunked_filter_module);
+
+ } else if (r->headers_out.content_length_n == -1) {
r->keepalive = 0;
-
- } else {
- clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
-
- if (clcf->chunked_transfer_encoding) {
- r->chunked = 1;
-
- ctx = ngx_pcalloc(r->pool,
- sizeof(ngx_http_chunked_filter_ctx_t));
- if (ctx == NULL) {
- return NGX_ERROR;
- }
-
- ngx_http_set_ctx(r, ctx, ngx_http_chunked_filter_module);
-
- } else {
- r->keepalive = 0;
- }
}
}

@@ -179,26 +181,17 @@ ngx_http_chunked_body_filter(ngx_http_re
}

if (cl->buf->last_buf) {
- tl = ngx_chain_get_free_buf(r->pool, &ctx->free);
+ tl = ngx_http_chunked_create_trailers(r);
if (tl == NULL) {
return NGX_ERROR;
}

- b = tl->buf;
-
- b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
- b->temporary = 0;
- b->memory = 1;
- b->last_buf = 1;
- b->pos = (u_char *) CRLF "0" CRLF CRLF;
- b->last = b->pos + 7;
-
cl->buf->last_buf = 0;

*ll = tl;

if (size == 0) {
- b->pos += 2;
+ tl->buf->pos += 2;
}

} else if (size > 0) {
@@ -230,6 +223,105 @@ ngx_http_chunked_body_filter(ngx_http_re
}


+static ngx_chain_t *
+ngx_http_chunked_create_trailers(ngx_http_request_t *r)
+{
+ size_t len;
+ ngx_buf_t *b;
+ ngx_uint_t i;
+ ngx_chain_t *cl;
+ ngx_list_part_t *part;
+ ngx_table_elt_t *header;
+ ngx_http_chunked_filter_ctx_t *ctx;
+
+ len = sizeof(CRLF "0" CRLF CRLF) - 1;
+
+ part = &r->headers_out.trailers.part;
+ header = part->elts;
+
+ for (i = 0; /* void */; i++) {
+
+ if (i >= part->nelts) {
+ if (part->next == NULL) {
+ break;
+ }
+
+ part = part->next;
+ header = part->elts;
+ i = 0;
+ }
+
+ if (header[i].hash == 0) {
+ continue;
+ }
+
+ len += header[i].key.len + sizeof(": ") - 1
+ + header[i].value.len + sizeof(CRLF) - 1;
+ }
+
+ ctx = ngx_http_get_module_ctx(r, ngx_http_chunked_filter_module);
+
+ cl = ngx_chain_get_free_buf(r->pool, &ctx->free);
+ if (cl == NULL) {
+ return NULL;
+ }
+
+ b = cl->buf;
+
+ b->tag = (ngx_buf_tag_t) &ngx_http_chunked_filter_module;
+ b->temporary = 0;
+ b->memory = 1;
+ b->last_buf = 1;
+
+ b->start = ngx_palloc(r->pool, len);
+ if (b->start == NULL) {
+ return NULL;
+ }
+
+ b->end = b->last + len;
+ b->pos = b->start;
+ b->last = b->start;
+
+ *b->last++ = CR; *b->last++ = LF;
+ *b->last++ = '0';
+ *b->last++ = CR; *b->last++ = LF;
+
+ part = &r->headers_out.trailers.part;
+ header = part->elts;
+
+ for (i = 0; /* void */; i++) {
+
+ if (i >= part->nelts) {
+ if (part->next == NULL) {
+ break;
+ }
+
+ part = part->next;
+ header = part->elts;
+ i = 0;
+ }
+
+ if (header[i].hash == 0) {
+ continue;
+ }
+
+ ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+ "http trailer: \"%V: %V\"",
+ &header[i].key, &header[i].value);
+
+ b->last = ngx_copy(b->last, header[i].key.data, header[i].key.len);
+ *b->last++ = ':'; *b->last++ = ' ';
+
+ b->last = ngx_copy(b->last, header[i].value.data, header[i].value.len);
+ *b->last++ = CR; *b->last++ = LF;
+ }
+
+ *b->last++ = CR; *b->last++ = LF;
+
+ return cl;
+}
+
+
static ngx_int_t
ngx_http_chunked_filter_init(ngx_conf_t *cf)
{
diff -r 716852cce913 -r 41c09a2fd904 src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c
+++ b/src/http/ngx_http_core_module.c
@@ -2484,6 +2484,13 @@ ngx_http_subrequest(ngx_http_request_t *
return NGX_ERROR;
}

+ if (ngx_list_init(&sr->headers_out.trailers, r->pool, 4,
+ sizeof(ngx_table_elt_t))
+ != NGX_OK)
+ {
+ return NGX_ERROR;
+ }
+
cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module);
sr->main_conf = cscf->ctx->main_conf;
sr->srv_conf = cscf->ctx->srv_conf;
diff -r 716852cce913 -r 41c09a2fd904 src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -562,6 +562,14 @@ ngx_http_create_request(ngx_connection_t
return NULL;
}

+ if (ngx_list_init(&r->headers_out.trailers, r->pool, 4,
+ sizeof(ngx_table_elt_t))
+ != NGX_OK)
+ {
+ ngx_destroy_pool(r->pool);
+ return NULL;
+ }
+
r->ctx = ngx_pcalloc(r->pool, sizeof(void *) * ngx_http_max_module);
if (r->ctx == NULL) {
ngx_destroy_pool(r->pool);
diff -r 716852cce913 -r 41c09a2fd904 src/http/ngx_http_request.h
--- a/src/http/ngx_http_request.h
+++ b/src/http/ngx_http_request.h
@@ -252,6 +252,7 @@ typedef struct {

typedef struct {
ngx_list_t headers;
+ ngx_list_t trailers;

ngx_uint_t status;
ngx_str_t status_line;
@@ -514,6 +515,7 @@ struct ngx_http_request_s {
unsigned pipeline:1;
unsigned chunked:1;
unsigned header_only:1;
+ unsigned expect_trailers:1;
unsigned keepalive:1;
unsigned lingering_close:1;
unsigned discard_body:1;
diff -r 716852cce913 -r 41c09a2fd904 src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c
+++ b/src/http/ngx_http_variables.c
@@ -38,6 +38,8 @@ static ngx_int_t ngx_http_variable_unkno
ngx_http_variable_value_t *v, uintptr_t data);
static ngx_int_t ngx_http_variable_unknown_header_out(ngx_http_request_t *r,
ngx_http_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_http_variable_unknown_trailer_out(ngx_http_request_t *r,
+ ngx_http_variable_value_t *v, uintptr_t data);
static ngx_int_t ngx_http_variable_request_line(ngx_http_request_t *r,
ngx_http_variable_value_t *v, uintptr_t data);
static ngx_int_t ngx_http_variable_cookie(ngx_http_request_t *r,
@@ -365,6 +367,9 @@ static ngx_http_variable_t ngx_http_cor
{ ngx_string("sent_http_"), NULL, ngx_http_variable_unknown_header_out,
0, NGX_HTTP_VAR_PREFIX, 0 },

+ { ngx_string("sent_trailer_"), NULL, ngx_http_variable_unknown_trailer_out,
+ 0, NGX_HTTP_VAR_PREFIX, 0 },
+
{ ngx_string("cookie_"), NULL, ngx_http_variable_cookie,
0, NGX_HTTP_VAR_PREFIX, 0 },

@@ -934,6 +939,16 @@ ngx_http_variable_unknown_header_out(ngx
}


+static ngx_int_t
+ngx_http_variable_unknown_trailer_out(ngx_http_request_t *r,
+ ngx_http_variable_value_t *v, uintptr_t data)
+{
+ return ngx_http_variable_unknown_header(v, (ngx_str_t *) data,
+ &r->headers_out.trailers.part,
+ sizeof("sent_trailer_") - 1);
+}
+
+
ngx_int_t
ngx_http_variable_unknown_header(ngx_http_variable_value_t *v, ngx_str_t *var,
ngx_list_part_t *part, size_t prefix)
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[PATCH 2 of 3] HTTP/2: added support for trailers in HTTP responses

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1493191954 25200
# Wed Apr 26 00:32:34 2017 -0700
# Node ID 8d74ff6c2015180f5c1f399f492214d7d0a52b3f
# Parent 41c09a2fd90410e25ad8515793bd48028001c954
HTTP/2: added support for trailers in HTTP responses.

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 41c09a2fd904 -r 8d74ff6c2015 src/http/v2/ngx_http_v2_filter_module.c
--- a/src/http/v2/ngx_http_v2_filter_module.c
+++ b/src/http/v2/ngx_http_v2_filter_module.c
@@ -50,13 +50,17 @@
#define NGX_HTTP_V2_SERVER_INDEX 54
#define NGX_HTTP_V2_VARY_INDEX 59

+#define NGX_HTTP_V2_FRAME_ERROR (ngx_http_v2_out_frame_t *) -1
+

static u_char *ngx_http_v2_string_encode(u_char *dst, u_char *src, size_t len,
u_char *tmp, ngx_uint_t lower);
static u_char *ngx_http_v2_write_int(u_char *pos, ngx_uint_t prefix,
ngx_uint_t value);
static ngx_http_v2_out_frame_t *ngx_http_v2_create_headers_frame(
- ngx_http_request_t *r, u_char *pos, u_char *end);
+ ngx_http_request_t *r, u_char *pos, u_char *end, ngx_uint_t fin);
+static ngx_http_v2_out_frame_t *ngx_http_v2_create_trailers_frame(
+ ngx_http_request_t *r);

static ngx_chain_t *ngx_http_v2_send_chain(ngx_connection_t *fc,
ngx_chain_t *in, off_t limit);
@@ -612,7 +616,7 @@ ngx_http_v2_header_filter(ngx_http_reque
header[i].value.len, tmp);
}

- frame = ngx_http_v2_create_headers_frame(r, start, pos);
+ frame = ngx_http_v2_create_headers_frame(r, start, pos, r->header_only);
if (frame == NULL) {
return NGX_ERROR;
}
@@ -636,6 +640,126 @@ ngx_http_v2_header_filter(ngx_http_reque
}


+static ngx_http_v2_out_frame_t *
+ngx_http_v2_create_trailers_frame(ngx_http_request_t *r)
+{
+ u_char *pos, *start, *tmp;
+ size_t len, tmp_len;
+ ngx_uint_t i;
+ ngx_list_part_t *part;
+ ngx_table_elt_t *header;
+ ngx_http_v2_out_frame_t *frame;
+
+ len = 0;
+ tmp_len = 0;
+
+ part = &r->headers_out.trailers.part;
+ header = part->elts;
+
+ for (i = 0; /* void */; i++) {
+
+ if (i >= part->nelts) {
+ if (part->next == NULL) {
+ break;
+ }
+
+ part = part->next;
+ header = part->elts;
+ i = 0;
+ }
+
+ if (header[i].hash == 0) {
+ continue;
+ }
+
+ if (header[i].key.len > NGX_HTTP_V2_MAX_FIELD) {
+ ngx_log_error(NGX_LOG_WARN, r->connection->log, 0,
+ "too long response trailer name: \"%V\"",
+ &header[i].key);
+
+ return NGX_HTTP_V2_FRAME_ERROR;
+ }
+
+ if (header[i].value.len > NGX_HTTP_V2_MAX_FIELD) {
+ ngx_log_error(NGX_LOG_WARN, r->connection->log, 0,
+ "too long response trailer value: \"%V: %V\"",
+ &header[i].key, &header[i].value);
+
+ return NGX_HTTP_V2_FRAME_ERROR;
+ }
+
+ len += 1 + NGX_HTTP_V2_INT_OCTETS + header[i].key.len
+ + NGX_HTTP_V2_INT_OCTETS + header[i].value.len;
+
+ if (header[i].key.len > tmp_len) {
+ tmp_len = header[i].key.len;
+ }
+
+ if (header[i].value.len > tmp_len) {
+ tmp_len = header[i].value.len;
+ }
+ }
+
+ if (len == 0) {
+ return NULL;
+ }
+
+ tmp = ngx_palloc(r->pool, tmp_len);
+ pos = ngx_pnalloc(r->pool, len);
+
+ if (pos == NULL || tmp == NULL) {
+ return NGX_HTTP_V2_FRAME_ERROR;
+ }
+
+ start = pos;
+
+ part = &r->headers_out.trailers.part;
+ header = part->elts;
+
+ for (i = 0; /* void */; i++) {
+
+ if (i >= part->nelts) {
+ if (part->next == NULL) {
+ break;
+ }
+
+ part = part->next;
+ header = part->elts;
+ i = 0;
+ }
+
+ if (header[i].hash == 0) {
+ continue;
+ }
+
+#if (NGX_DEBUG)
+ if (r->connection->log->log_level & NGX_LOG_DEBUG_HTTP) {
+ ngx_strlow(tmp, header[i].key.data, header[i].key.len);
+
+ ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+ "http2 output trailer: \"%*s: %V\"",
+ header[i].key.len, tmp, &header[i].value);
+ }
+#endif
+
+ *pos++ = 0;
+
+ pos = ngx_http_v2_write_name(pos, header[i].key.data,
+ header[i].key.len, tmp);
+
+ pos = ngx_http_v2_write_value(pos, header[i].value.data,
+ header[i].value.len, tmp);
+ }
+
+ frame = ngx_http_v2_create_headers_frame(r, start, pos, 1);
+ if (frame == NULL) {
+ return NGX_HTTP_V2_FRAME_ERROR;
+ }
+
+ return frame;
+}
+
+
static u_char *
ngx_http_v2_string_encode(u_char *dst, u_char *src, size_t len, u_char *tmp,
ngx_uint_t lower)
@@ -686,7 +810,7 @@ ngx_http_v2_write_int(u_char *pos, ngx_u

static ngx_http_v2_out_frame_t *
ngx_http_v2_create_headers_frame(ngx_http_request_t *r, u_char *pos,
- u_char *end)
+ u_char *end, ngx_uint_t fin)
{
u_char type, flags;
size_t rest, frame_size;
@@ -707,12 +831,12 @@ ngx_http_v2_create_headers_frame(ngx_htt
frame->stream = stream;
frame->length = rest;
frame->blocked = 1;
- frame->fin = r->header_only;
+ frame->fin = fin;

ll = &frame->first;

type = NGX_HTTP_V2_HEADERS_FRAME;
- flags = r->header_only ? NGX_HTTP_V2_END_STREAM_FLAG : NGX_HTTP_V2_NO_FLAG;
+ flags = fin ? NGX_HTTP_V2_END_STREAM_FLAG : NGX_HTTP_V2_NO_FLAG;
frame_size = stream->connection->frame_size;

for ( ;; ) {
@@ -776,7 +900,7 @@ ngx_http_v2_create_headers_frame(ngx_htt
continue;
}

- b->last_buf = r->header_only;
+ b->last_buf = fin;
cl->next = NULL;
frame->last = cl;

@@ -798,7 +922,7 @@ ngx_http_v2_send_chain(ngx_connection_t
ngx_http_request_t *r;
ngx_http_v2_stream_t *stream;
ngx_http_v2_loc_conf_t *h2lcf;
- ngx_http_v2_out_frame_t *frame;
+ ngx_http_v2_out_frame_t *frame, *trailers;
ngx_http_v2_connection_t *h2c;

r = fc->data;
@@ -872,6 +996,8 @@ ngx_http_v2_send_chain(ngx_connection_t
frame_size = (h2lcf->chunk_size < h2c->frame_size)
? h2lcf->chunk_size : h2c->frame_size;

+ trailers = NULL;
+
#if (NGX_SUPPRESS_WARN)
cl = NULL;
#endif
@@ -934,17 +1060,36 @@ ngx_http_v2_send_chain(ngx_connection_t
size -= rest;
}

- frame = ngx_http_v2_filter_get_data_frame(stream, frame_size, out, cl);
- if (frame == NULL) {
- return NGX_CHAIN_ERROR;
+ if (cl->buf->last_buf) {
+ trailers = ngx_http_v2_create_trailers_frame(r);
+ if (trailers == NGX_HTTP_V2_FRAME_ERROR) {
+ return NGX_CHAIN_ERROR;
+ }
+
+ if (trailers) {
+ cl->buf->last_buf = 0;
+ }
}

- ngx_http_v2_queue_frame(h2c, frame);
+ if (frame_size || cl->buf->last_buf) {
+ frame = ngx_http_v2_filter_get_data_frame(stream, frame_size, out,
+ cl);
+ if (frame == NULL) {
+ return NGX_CHAIN_ERROR;
+ }

- h2c->send_window -= frame_size;
+ ngx_http_v2_queue_frame(h2c, frame);

- stream->send_window -= frame_size;
- stream->queued++;
+ h2c->send_window -= frame_size;
+
+ stream->send_window -= frame_size;
+ stream->queued++;
+ }
+
+ if (trailers) {
+ ngx_http_v2_queue_frame(h2c, trailers);
+ stream->queued++;
+ }

if (in == NULL) {
break;
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[PATCH 3 of 3] Headers filter: added "add_trailer" directive

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1490351854 25200
# Fri Mar 24 03:37:34 2017 -0700
# Node ID acdc80c0d4ef8aa2519e2882ff1a3bd4a316ad81
# Parent 8d74ff6c2015180f5c1f399f492214d7d0a52b3f
Headers filter: added "add_trailer" directive.

Trailers added using this directive are evaluated after the response body
has been processed by the output filters (but before it's written to the
wire), so it's possible to use variables calculated from the response body
as the trailer value.
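
For illustration only (not part of the patch): a minimal configuration
sketch; the backend name is a placeholder, and the variables were picked
simply because their final values are only known after the response has
been processed:

location / {
    proxy_pass http://backend;
    add_trailer X-Response-Time $request_time;
    add_trailer X-Upstream-Status $upstream_status always;
}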

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 8d74ff6c2015 -r acdc80c0d4ef src/http/modules/ngx_http_headers_filter_module.c
--- a/src/http/modules/ngx_http_headers_filter_module.c
+++ b/src/http/modules/ngx_http_headers_filter_module.c
@@ -48,6 +48,7 @@ typedef struct {
time_t expires_time;
ngx_http_complex_value_t *expires_value;
ngx_array_t *headers;
+ ngx_array_t *trailers;
} ngx_http_headers_conf_t;


@@ -72,6 +73,8 @@ static char *ngx_http_headers_expires(ng
void *conf);
static char *ngx_http_headers_add(ngx_conf_t *cf, ngx_command_t *cmd,
void *conf);
+static char *ngx_http_headers_add_trailer(ngx_conf_t *cf, ngx_command_t *cmd,
+ void *conf);


static ngx_http_set_header_t ngx_http_set_headers[] = {
@@ -108,6 +111,14 @@ static ngx_command_t ngx_http_headers_f
0,
NULL },

+ { ngx_string("add_trailer"),
+ NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF
+ |NGX_CONF_TAKE23,
+ ngx_http_headers_add_trailer,
+ NGX_HTTP_LOC_CONF_OFFSET,
+ 0,
+ NULL },
+
ngx_null_command
};

@@ -144,6 +155,7 @@ ngx_module_t ngx_http_headers_filter_mo


static ngx_http_output_header_filter_pt ngx_http_next_header_filter;
+static ngx_http_output_body_filter_pt ngx_http_next_body_filter;


static ngx_int_t
@@ -154,10 +166,15 @@ ngx_http_headers_filter(ngx_http_request
ngx_http_header_val_t *h;
ngx_http_headers_conf_t *conf;

+ if (r != r->main) {
+ return ngx_http_next_header_filter(r);
+ }
+
conf = ngx_http_get_module_loc_conf(r, ngx_http_headers_filter_module);

- if ((conf->expires == NGX_HTTP_EXPIRES_OFF && conf->headers == NULL)
- || r != r->main)
+ if (conf->expires == NGX_HTTP_EXPIRES_OFF
+ && conf->headers == NULL
+ && conf->trailers == NULL)
{
return ngx_http_next_header_filter(r);
}
@@ -206,11 +223,103 @@ ngx_http_headers_filter(ngx_http_request
}
}

+ if (conf->trailers) {
+ h = conf->trailers->elts;
+ for (i = 0; i < conf->trailers->nelts; i++) {
+
+ if (!safe_status && !h[i].always) {
+ continue;
+ }
+
+ if (h[i].value.value.len) {
+ r->expect_trailers = 1;
+ break;
+ }
+ }
+ }
+
return ngx_http_next_header_filter(r);
}


static ngx_int_t
+ngx_http_trailers_filter(ngx_http_request_t *r, ngx_chain_t *in)
+{
+ ngx_str_t value;
+ ngx_uint_t i, safe_status;
+ ngx_chain_t *cl;
+ ngx_table_elt_t *t;
+ ngx_http_header_val_t *h;
+ ngx_http_headers_conf_t *conf;
+
+ conf = ngx_http_get_module_loc_conf(r, ngx_http_headers_filter_module);
+
+ if (in == NULL
+ || conf->trailers == NULL
+ || !r->expect_trailers
+ || r->header_only)
+ {
+ return ngx_http_next_body_filter(r, in);
+ }
+
+ for (cl = in; cl; cl = cl->next) {
+ if (cl->buf->last_buf) {
+ break;
+ }
+ }
+
+ if (cl == NULL) {
+ return ngx_http_next_body_filter(r, in);
+ }
+
+ switch (r->headers_out.status) {
+
+ case NGX_HTTP_OK:
+ case NGX_HTTP_CREATED:
+ case NGX_HTTP_NO_CONTENT:
+ case NGX_HTTP_PARTIAL_CONTENT:
+ case NGX_HTTP_MOVED_PERMANENTLY:
+ case NGX_HTTP_MOVED_TEMPORARILY:
+ case NGX_HTTP_SEE_OTHER:
+ case NGX_HTTP_NOT_MODIFIED:
+ case NGX_HTTP_TEMPORARY_REDIRECT:
+ case NGX_HTTP_PERMANENT_REDIRECT:
+ safe_status = 1;
+ break;
+
+ default:
+ safe_status = 0;
+ break;
+ }
+
+ h = conf->trailers->elts;
+ for (i = 0; i < conf->trailers->nelts; i++) {
+
+ if (!safe_status && !h[i].always) {
+ continue;
+ }
+
+ if (ngx_http_complex_value(r, &h[i].value, &value) != NGX_OK) {
+ return NGX_ERROR;
+ }
+
+ if (value.len) {
+ t = ngx_list_push(&r->headers_out.trailers);
+ if (t == NULL) {
+ return NGX_ERROR;
+ }
+
+ t->key = h[i].key;
+ t->value = value;
+ t->hash = 1;
+ }
+ }
+
+ return ngx_http_next_body_filter(r, in);
+}
+
+
+static ngx_int_t
ngx_http_set_expires(ngx_http_request_t *r, ngx_http_headers_conf_t *conf)
{
char *err;
@@ -557,6 +666,7 @@ ngx_http_headers_create_conf(ngx_conf_t
* set by ngx_pcalloc():
*
* conf->headers = NULL;
+ * conf->trailers = NULL;
* conf->expires_time = 0;
* conf->expires_value = NULL;
*/
@@ -587,6 +697,10 @@ ngx_http_headers_merge_conf(ngx_conf_t *
conf->headers = prev->headers;
}

+ if (conf->trailers == NULL) {
+ conf->trailers = prev->trailers;
+ }
+
return NGX_CONF_OK;
}

@@ -597,6 +711,9 @@ ngx_http_headers_filter_init(ngx_conf_t
ngx_http_next_header_filter = ngx_http_top_header_filter;
ngx_http_top_header_filter = ngx_http_headers_filter;

+ ngx_http_next_body_filter = ngx_http_top_body_filter;
+ ngx_http_top_body_filter = ngx_http_trailers_filter;
+
return NGX_OK;
}

@@ -741,3 +858,63 @@ ngx_http_headers_add(ngx_conf_t *cf, ngx

return NGX_CONF_OK;
}
+
+
+static char *
+ngx_http_headers_add_trailer(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
+{
+ ngx_http_headers_conf_t *hcf = conf;
+
+ ngx_str_t *value;
+ ngx_http_header_val_t *hv;
+ ngx_http_compile_complex_value_t ccv;
+
+ value = cf->args->elts;
+
+ if (hcf->trailers == NULL) {
+ hcf->trailers = ngx_array_create(cf->pool, 1,
+ sizeof(ngx_http_header_val_t));
+ if (hcf->trailers == NULL) {
+ return NGX_CONF_ERROR;
+ }
+ }
+
+ hv = ngx_array_push(hcf->trailers);
+ if (hv == NULL) {
+ return NGX_CONF_ERROR;
+ }
+
+ hv->key = value[1];
+ hv->handler = NULL;
+ hv->offset = 0;
+ hv->always = 0;
+
+ if (value[2].len == 0) {
+ ngx_memzero(&hv->value, sizeof(ngx_http_complex_value_t));
+
+ } else {
+ ngx_memzero(&ccv, sizeof(ngx_http_compile_complex_value_t));
+
+ ccv.cf = cf;
+ ccv.value = &value[2];
+ ccv.complex_value = &hv->value;
+
+ if (ngx_http_compile_complex_value(&ccv) != NGX_OK) {
+ return NGX_CONF_ERROR;
+ }
+ }
+
+ if (cf->args->nelts == 3) {
+ return NGX_CONF_OK;
+ }
+
+ if (ngx_strcmp(value[3].data, "always") != 0) {
+ ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+ "invalid parameter \"%V\"", &value[3]);
+ return NGX_CONF_ERROR;
+ }
+
+ hv->always = 1;
+
+ return NGX_CONF_OK;
+}
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [PATCH 2 of 3] Headers filter: add "add_trailer" directive

Hey Maxim,

> (Just for the record, with the first patch fixed to avoid using
> chunked with HTTP/1.0, the "Trailer" header is expectedly still
> added with HTTP/1.0. This confirms the idea that the approach
> chosen is somewhat fragile.)

It confirms no such thing. The only thing it confirms is that making
major changes during code review to code that was written almost a
year ago is an error-prone process.

> The question is: if we need this indicator to be sent to a
> particular client.
>
> For example, if you are using trailers to pass additional logging
> information to your own frontends, and use something like
>
> geo $mine {
> 127.0.0.1/8 1;
> }
>
> map $mine $x_request_time {
> 1 $request_time;
> }
>
> add_trailer X-Response-Time $x_request_time;
>
> to send the information to your frontends, but not other clients,
> you probably don't want the X-Response-Time trailer to be
> indicated to other clients.

In such setup, clients would probably talk to frontends.

> Actually I don't see how it's a problem, given that "Trailer" is
> not something required. Moreover, it seems to be not needed or
> even harmful in most of the use cases discussed.

I don't think it's harmful, but I'm not aware of any clients that
_require_ the "Trailer" header, so I'm going to skip this whole discussion
and just remove it. You can always re-add it later if needed.

Best regards,
Piotr Sikora
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Use primes for hashtable size

Hello world!

2017-06-02 16:46 GMT+05:00 Maxim Dounin <mdounin@mdounin.ru>:
> Hello!
>
> On Fri, Jun 02, 2017 at 10:56:31AM +1000, Mathew Heard wrote:
>
>> If this actually yields a decrease in start time while not introducing
>> other effects we would use it. Our start time of a couple minutes is
>> annoying at times.
>
> Do you have any details of what contributes to the start time in
> your case?
>
> In general, nginx trades start time to faster operation once
> started, and trying to build minimal hash is an example of this
> practice. If it results in unacceptable start times we certainly
> should optimize it, though I don't think I've seen such cases in
> my practice.
>
> Most time-consuming things during start I've seen so far are:
>
> - multiple DNS names in the configuration and slow system
> resolver;
>
> - multiple SSL certificates;
>
> - very large geo{} maps.

Mathew, could you please do some profiling?
As I see it:
1. Run htop during nginx start; if you see CPU spikes, proceed to 2, otherwise stop here.
2. Run perf top during nginx start; if you see ngx_hash_init, proceed to 3;
otherwise, post the top slow functions here.
3. Measure nginx start time with and without this patch.

Maybe we could shave off some of those two minutes of startup. Thank you for the help.

Best regards, Andrey Borodin, Octonica.
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Sometimes NGINX returns 405 on POST, when 504 GATEWAY TIMEOUT is expected

Hey,

Yesterday I had a situation where NGINX *sometimes, under some
configurations* returned a 405 METHOD NOT ALLOWED when it was supposed
to return a 504 GATEWAY TIMEOUT.
Since troubleshooting this took a while, and the information I found
was fragmented, outdated, or inaccurate, I wrote a blog post about it.
Maybe it helps others, including my future self, understand and fix
this behavior quicker:

http://muratknecht.de/tech/why-nginx-returns-405-post-504-gateway-timeout-gotchas-error-page/

On SO, a related question was asked:
https://stackoverflow.com/questions/42167669/nginx-405-not-allowed-fastcgi-timeout/44330457#44330457

Cheers,
murat


_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

SSL nginx on tomcat Server

Hi,

After configuring Nginx SSL with Tomcat 7:

If I type the URL https://test.rockwell.co.in/testril, the page works fine and is secured.
But once I log in to my application, the URL gets changed to
http://test.rockwell.co.in:5323/testril/ (which is not expected)
and is no longer secured.

Where am I going wrong?
Please guide me.

Nginx config:

# Tomcat we're forwarding to
upstream tomcat_server {
server 127.0.0.1:9090 fail_timeout=0;
}

server {
listen 443 ssl;
server_name rockwell.co.in;

#HTTPS Setup
ssl on;
ssl_certificate rbundle.crt;
ssl_certificate_key testserver.key;

ssl_session_timeout 5m;

ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

location / {
# Forward SSL so that Tomcat knows what to do
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://test.rockwell.co.in:5323;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Real-IP $remote_addr;

proxy_redirect off;
proxy_connect_timeout 240;
proxy_send_timeout 240;
proxy_read_timeout 240;
}

Tomcat Server Conf :

<Service name="Catalina">

<Connector port="5323" protocol="HTTP/1.1"
connectionTimeout="20000"
URIEncoding="UTF-8"
redirectPort="8443"
acceptCount="100"
compressableMimeType="text/html,text/xml,text/javascript,application/x-javascript,application/javascript"
compression="on"
compressionMinSize="2048"
disableUploadTimeout="true"
enableLookups="false"
maxHttpHeaderSize="8192"
Server =" "
usehttponly="true"
/>

<!-- A "Connector" using the shared thread pool-->

<Connector executor="tomcatThreadPool"
port="9090" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />

<Engine name="Catalina" defaultHost="localhost">

<Realm className="org.apache.catalina.realm.LockOutRealm">

<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>

<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">

<!-- Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
remoteIpHeader="x-forwarded-for"
ProxiesHeader="x-forwarded-by"
protocolHeader="x-forwarded-proto"
protocolHeaderHttpsValue="https"/>

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="%h %l %u %t &quot;%r&quot; %s %b" />

</Host>
</Engine>
</Service>

Re: client_body_temp_path - permissions

> In this form it won't be committed, of course. I shamelessly took
> advantage of an internal flag that is in fact used for the webdav module
> and "pulled" it out into the configuration, which is why the patch is so
> simple. To do it properly, one should add a real configuration option
> along the lines of proxy_store_access. But that is a bit more complex,
> and such a patch would probably no longer apply as easily to practically
> any version.

Thanks for the patch, but we can't put applying a patch to Nginx into the requirements of our software; our customers wouldn't accept that.

Right now we run the backend under the Nginx user so that it can access the 0600 files, but that isn't great either.

Re: SSL nginx on tomcat Server

https://serverfault.com/questions/172542/configuring-nginx-for-use-with-tomcat-and-ssl
See the connector section.

"server" directive is not allowed here error

Hello,

I'm hoping someone can help me with an nginx config issue I'm having; I can't seem to figure out what the problem is. With a single "location /" directive it works fine. However, modsecurity breaks one of my applications, so I figured I'd split the nginx config into multiple location directives, disable modsecurity on the location with the broken application, and keep it enabled on the ones that work.

So, let me start off with the config that actually works below:

server {
listen 443 ssl;
server_name server.domain.tld;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
keepalive_timeout 70;

ssl_certificate /etc/nginx/ssl/domain.tld.pem;
ssl_certificate_key /etc/nginx/ssl/domain.tld.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
client_max_body_size 4G;
set_real_ip_from 192.xxx.xxx.xxx;
real_ip_header X-Real-IP;
real_ip_recursive on;
modsecurity on;

location / {
modsecurity_rules_file /usr/local/nginx/conf/modsecurity.conf;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
send_timeout 3600;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://server.domain.tld:9080;
}

}

Unfortunately, in the config above, modsecurity breaks one of my applications under the /web directory, so https://server.domain.tld:9080/web breaks.

So I set up the following config, where I removed "modsecurity_rules_file /usr/local/nginx/conf/modsecurity.conf" from the "location /web" directive:


server {
listen 443 ssl;
server_name server.domain.tld;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
keepalive_timeout 70;

ssl_certificate /etc/nginx/ssl/domain.tld.pem;
ssl_certificate_key /etc/nginx/ssl/domain.tld.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
client_max_body_size 4G;
set_real_ip_from 192.xxx.xxx.xxx;
real_ip_header X-Real-IP;
real_ip_recursive on;
modsecurity on;

location /web {
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
send_timeout 3600;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://server.domain.tld:9080:9080/web;
}

location /admin {
modsecurity_rules_file /usr/local/nginx/conf/modsecurity.conf;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
send_timeout 3600;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://server.domain.tld:9080:9080/admin;
}

location /main {
modsecurity_rules_file /usr/local/nginx/conf/modsecurity.conf;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
send_timeout 3600;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://server.domain.tld:9080:9080/main;
}

location /tasks {
modsecurity_rules_file /usr/local/nginx/conf/modsecurity.conf;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
send_timeout 3600;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://server.domain.tld:9080:9080/tasks;
}


}


However, the configuration above gives me the following error:

[emerg] 19968#0: "server" directive is not allowed here in /usr/local/nginx/conf/sites-enabled/server.domain.tld-ssl:1

Googling the error kept bringing up results about the server directive needing to be inside an http directive, which I obviously don't have here, and didn't think I needed. I would appreciate some help on this.

Thank you



_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Same cached objects, but different body_bytes_sent

Hi, Guilherme!

The HTTP status code 499 means the client closed the connection before
Nginx had sent even one byte. As long as Nginx has sent some bytes, 499
will not appear; Nginx just records the status code it generated
previously. Also, I bet the log_format of your access_log is the default
one provided by Nginx, which doesn't help much when we need to figure out
whether the client closed the connection. Maybe you can modify your
log_format, e.g. by appending "$http_content_length"; then you can analyze
this case by comparing the value of "$http_content_length" with
"$body_bytes_sent". Of course, the "Accept-Encoding" header can never be
passed in that case.
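
A rough sketch of such a log_format (the format name and log path are
placeholders; $request_time is included since you already noticed it being
high for the short responses):

log_format debug_len '$remote_addr - [$time_local] "$request" $status '
                     '$http_content_length $body_bytes_sent $request_time';

access_log /path/to/access.log debug_len;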

On 3 June 2017 at 00:45:09, Guilherme (guilherme.e@gmail.com) wrote:

@itpp2012:

I can't replicate the problem using curl from 2 different locations.

Isn't it supposed to return 206 for range requests?

@zhang_chao:

I'm not sure about this, but isn't it supposed to return 499 in this case?

Tks,

Guilherme

On Fri, Jun 2, 2017 at 3:45 AM, Zhang Chao <zchao1995@gmail.com> wrote:

> Hi!
>
> Are you sure the client didn't close the connection when the body is
> transferring?
>
>
> On 2 June 2017 at 10:00:36, Guilherme (guilherme.e@gmail.com) wrote:
>
> I identified a strange behavior in my nginx/1.11.2. Same cached objects
> are returning different content length. In the logs below, body_bytes_sent
> changes intermittently between 215 and 3782 bytes. The correct length is
> 3782. (these objects are not being updated in this interval)
>
> xxxxxxxxxx - - [02/Jun/2017:01:29:06 +0000] "GET
> /img/app/bt_google_play.png HTTP/2.0" 200 *215* "xxxxxxxxxx" "Mozilla/5.0
> (Linux; Android 6.0.1; SM-G600FY Build/MMB29M) AppleWebKit/537.36 (KHTML,
> like Gecko) Chrome/58.0.3029.83 Mobile Safari/537.36" 42 215 10.571
> "image/png" HIT
> xxxxxxxxxx - - [02/Jun/2017:01:29:50 +0000] "GET
> /img/app/bt_google_play.png HTTP/2.0" 200 *3782* "xxxxxxxxxx"
> "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3_2 like Mac OS X)
> AppleWebKit/603.2.4 (KHTML, like Gecko) Version/10.0 Mobile/14F89
> Safari/602.1" 32 3791 0.344 "image/png" HIT
>
> ** request_time is always high for the shorter requests*
>
> I'm ignoring Vary header in proxy_ignore_headers too.
>
> Any idea about this?
>
> Tks,
>
> Guilherme
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

RE: "server" directive is not allowed here error

> [emerg] 19968#0: "server" directive is not allowed here in /usr/local/nginx/conf/sites-enabled/server.domain.tld-ssl:1
>
> Googling the error kept bringing up results about the server directive needing to be inside an http directive, which I obviously don't have here, and didn't think I needed. I would appreciate some help on this.


You can't have a server {} block outside of http {} ( http://nginx.org/en/docs/http/ngx_http_core_module.html#server )

So it has to be:

http {
    server {
        # whatever goes here
    }
}
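
And if that file lives under sites-enabled, it is normally pulled in by an
include directive that itself sits inside the http {} block; a rough sketch
of nginx.conf (the include path is taken from your error message, adjust it
to your layout):

http {
    # ... other http-level settings ...
    include /usr/local/nginx/conf/sites-enabled/*;
}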


tt

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

[PATCH] Proxy: always emit "Host" header first

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1489618489 25200
# Wed Mar 15 15:54:49 2017 -0700
# Node ID e472b23fdc387943ea90fb2f0ae415d9d104edc7
# Parent 716852cce9136d977b81a2d1b8b6f9fbca0dce49
Proxy: always emit "Host" header first.

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 716852cce913 -r e472b23fdc38 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -3412,7 +3412,7 @@ ngx_http_proxy_init_headers(ngx_conf_t *
uintptr_t *code;
ngx_uint_t i;
ngx_array_t headers_names, headers_merged;
- ngx_keyval_t *src, *s, *h;
+ ngx_keyval_t *host, *src, *s, *h;
ngx_hash_key_t *hk;
ngx_hash_init_t hash;
ngx_http_script_compile_t sc;
@@ -3444,11 +3444,33 @@ ngx_http_proxy_init_headers(ngx_conf_t *
return NGX_ERROR;
}

+ h = default_headers;
+
+ if (h->key.len != sizeof("Host") - 1
+ || ngx_strcasecmp(h->key.data, (u_char *) "Host") != 0)
+ {
+ return NGX_ERROR;
+ }
+
+ host = ngx_array_push(&headers_merged);
+ if (host == NULL) {
+ return NGX_ERROR;
+ }
+
+ *host = *h++;
+
if (conf->headers_source) {

src = conf->headers_source->elts;
for (i = 0; i < conf->headers_source->nelts; i++) {

+ if (src[i].key.len == sizeof("Host") - 1
+ && ngx_strcasecmp(src[i].key.data, (u_char *) "Host") == 0)
+ {
+ *host = src[i];
+ continue;
+ }
+
s = ngx_array_push(&headers_merged);
if (s == NULL) {
return NGX_ERROR;
@@ -3458,8 +3480,6 @@ ngx_http_proxy_init_headers(ngx_conf_t *
}
}

- h = default_headers;
-
while (h->key.len) {

src = headers_merged.elts;
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[PATCH] Proxy: split configured header names and values

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1489618535 25200
# Wed Mar 15 15:55:35 2017 -0700
# Node ID ff79d6887fc92d0344eac3e87339583265241e36
# Parent 716852cce9136d977b81a2d1b8b6f9fbca0dce49
Proxy: split configured header names and values.

Previously, each configured header was represented in one of two ways,
depending on whether or not its value included any variables.

If the value didn't include any variables, then it would be represented
as a single script that contained the complete header line with HTTP/1.1
delimiters, i.e.:

"Header: value\r\n"

But if the value included any variables, then it would be represented
as a series of three scripts: the first contained the header name and
the ":" delimiter, the second evaluated to the header value, and the
third contained only "\r\n", i.e.:

"Header:"
"$value"
"\r\n"

This commit changes that, so that each configured header is represented
as a series of two scripts: the first contains only the header name, and
the second contains (or evaluates to) only the header value, i.e.:

"Header"
"$value"

or

"Header"
"value"

This not only makes things more consistent, but also allows header name
and value to be accessed separately.

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 716852cce913 -r ff79d6887fc9 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -1144,6 +1144,7 @@ static ngx_int_t
ngx_http_proxy_create_request(ngx_http_request_t *r)
{
size_t len, uri_len, loc_len, body_len;
+ size_t key_len, val_len;
uintptr_t escape;
ngx_buf_t *b;
ngx_str_t method;
@@ -1258,10 +1259,17 @@ ngx_http_proxy_create_request(ngx_http_r
le.flushed = 1;

while (*(uintptr_t *) le.ip) {
- while (*(uintptr_t *) le.ip) {
+ lcode = *(ngx_http_script_len_code_pt *) le.ip;
+ key_len = lcode(&le);
+
+ for (val_len = 0; *(uintptr_t *) le.ip; val_len += lcode(&le)) {
lcode = *(ngx_http_script_len_code_pt *) le.ip;
- len += lcode(&le);
}
+
+ if (val_len) {
+ len += key_len + sizeof(": ") - 1 + val_len + sizeof(CRLF) - 1;
+ }
+
le.ip += sizeof(uintptr_t);
}

@@ -1363,28 +1371,32 @@ ngx_http_proxy_create_request(ngx_http_r

while (*(uintptr_t *) le.ip) {
lcode = *(ngx_http_script_len_code_pt *) le.ip;
-
- /* skip the header line name length */
(void) lcode(&le);

- if (*(ngx_http_script_len_code_pt *) le.ip) {
-
- for (len = 0; *(uintptr_t *) le.ip; len += lcode(&le)) {
- lcode = *(ngx_http_script_len_code_pt *) le.ip;
- }
-
- e.skip = (len == sizeof(CRLF) - 1) ? 1 : 0;
-
- } else {
- e.skip = 0;
+ for (val_len = 0; *(uintptr_t *) le.ip; val_len += lcode(&le)) {
+ lcode = *(ngx_http_script_len_code_pt *) le.ip;
}

le.ip += sizeof(uintptr_t);

+ e.skip = (val_len == 0) ? 1 : 0;
+
+ code = *(ngx_http_script_code_pt *) e.ip;
+ code((ngx_http_script_engine_t *) &e);
+
+ if (!e.skip) {
+ *e.pos++ = ':'; *e.pos++ = ' ';
+ }
+
while (*(uintptr_t *) e.ip) {
code = *(ngx_http_script_code_pt *) e.ip;
code((ngx_http_script_engine_t *) &e);
}
+
+ if (!e.skip) {
+ *e.pos++ = CR; *e.pos++ = LF;
+ }
+
e.ip += sizeof(uintptr_t);
}

@@ -3498,6 +3510,30 @@ ngx_http_proxy_init_headers(ngx_conf_t *
continue;
}

+ copy = ngx_array_push_n(headers->lengths,
+ sizeof(ngx_http_script_copy_code_t));
+ if (copy == NULL) {
+ return NGX_ERROR;
+ }
+
+ copy->code = (ngx_http_script_code_pt) ngx_http_script_copy_len_code;
+ copy->len = src[i].key.len;
+
+ size = (sizeof(ngx_http_script_copy_code_t)
+ + src[i].key.len + sizeof(uintptr_t) - 1)
+ & ~(sizeof(uintptr_t) - 1);
+
+ copy = ngx_array_push_n(headers->values, size);
+ if (copy == NULL) {
+ return NGX_ERROR;
+ }
+
+ copy->code = ngx_http_script_copy_code;
+ copy->len = src[i].key.len;
+
+ p = (u_char *) copy + sizeof(ngx_http_script_copy_code_t);
+ ngx_memcpy(p, src[i].key.data, src[i].key.len);
+
if (ngx_http_script_variables_count(&src[i].value) == 0) {
copy = ngx_array_push_n(headers->lengths,
sizeof(ngx_http_script_copy_code_t));
@@ -3507,14 +3543,10 @@ ngx_http_proxy_init_headers(ngx_conf_t *

copy->code = (ngx_http_script_code_pt)
ngx_http_script_copy_len_code;
- copy->len = src[i].key.len + sizeof(": ") - 1
- + src[i].value.len + sizeof(CRLF) - 1;
-
+ copy->len = src[i].value.len;

size = (sizeof(ngx_http_script_copy_code_t)
- + src[i].key.len + sizeof(": ") - 1
- + src[i].value.len + sizeof(CRLF) - 1
- + sizeof(uintptr_t) - 1)
+ + src[i].value.len + sizeof(uintptr_t) - 1)
& ~(sizeof(uintptr_t) - 1);

copy = ngx_array_push_n(headers->values, size);
@@ -3523,45 +3555,12 @@ ngx_http_proxy_init_headers(ngx_conf_t *
}

copy->code = ngx_http_script_copy_code;
- copy->len = src[i].key.len + sizeof(": ") - 1
- + src[i].value.len + sizeof(CRLF) - 1;
+ copy->len = src[i].value.len;

p = (u_char *) copy + sizeof(ngx_http_script_copy_code_t);
-
- p = ngx_cpymem(p, src[i].key.data, src[i].key.len);
- *p++ = ':'; *p++ = ' ';
- p = ngx_cpymem(p, src[i].value.data, src[i].value.len);
- *p++ = CR; *p = LF;
+ ngx_memcpy(p, src[i].value.data, src[i].value.len);

} else {
- copy = ngx_array_push_n(headers->lengths,
- sizeof(ngx_http_script_copy_code_t));
- if (copy == NULL) {
- return NGX_ERROR;
- }
-
- copy->code = (ngx_http_script_code_pt)
- ngx_http_script_copy_len_code;
- copy->len = src[i].key.len + sizeof(": ") - 1;
-
-
- size = (sizeof(ngx_http_script_copy_code_t)
- + src[i].key.len + sizeof(": ") - 1 + sizeof(uintptr_t) - 1)
- & ~(sizeof(uintptr_t) - 1);
-
- copy = ngx_array_push_n(headers->values, size);
- if (copy == NULL) {
- return NGX_ERROR;
- }
-
- copy->code = ngx_http_script_copy_code;
- copy->len = src[i].key.len + sizeof(": ") - 1;
-
- p = (u_char *) copy + sizeof(ngx_http_script_copy_code_t);
- p = ngx_cpymem(p, src[i].key.data, src[i].key.len);
- *p++ = ':'; *p = ' ';
-
-
ngx_memzero(&sc, sizeof(ngx_http_script_compile_t));

sc.cf = cf;
@@ -3573,33 +3572,6 @@ ngx_http_proxy_init_headers(ngx_conf_t *
if (ngx_http_script_compile(&sc) != NGX_OK) {
return NGX_ERROR;
}
-
-
- copy = ngx_array_push_n(headers->lengths,
- sizeof(ngx_http_script_copy_code_t));
- if (copy == NULL) {
- return NGX_ERROR;
- }
-
- copy->code = (ngx_http_script_code_pt)
- ngx_http_script_copy_len_code;
- copy->len = sizeof(CRLF) - 1;
-
-
- size = (sizeof(ngx_http_script_copy_code_t)
- + sizeof(CRLF) - 1 + sizeof(uintptr_t) - 1)
- & ~(sizeof(uintptr_t) - 1);
-
- copy = ngx_array_push_n(headers->values, size);
- if (copy == NULL) {
- return NGX_ERROR;
- }
-
- copy->code = ngx_http_script_copy_code;
- copy->len = sizeof(CRLF) - 1;
-
- p = (u_char *) copy + sizeof(ngx_http_script_copy_code_t);
- *p++ = CR; *p = LF;
}

code = ngx_array_push_n(headers->lengths, sizeof(uintptr_t));
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[PATCH] Proxy: add "proxy_ssl_alpn" directive

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1489621682 25200
# Wed Mar 15 16:48:02 2017 -0700
# Node ID 7733d946e2651a2486a53d912703e2dfaea30421
# Parent 716852cce9136d977b81a2d1b8b6f9fbca0dce49
Proxy: add "proxy_ssl_alpn" directive.

ALPN is used here only to indicate which version of the HTTP protocol
is going to be used; we don't verify that the upstream agreed to it.

Please note that the upstream is allowed to reject the SSL connection
with a fatal "no_application_protocol" alert if it doesn't support ALPN.

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 716852cce913 -r 7733d946e265 src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c
+++ b/src/event/ngx_event_openssl.c
@@ -654,6 +654,29 @@ ngx_ssl_ciphers(ngx_conf_t *cf, ngx_ssl_


ngx_int_t
+ngx_ssl_alpn_protos(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *protos)
+{
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+
+ if (SSL_CTX_set_alpn_protos(ssl->ctx, protos->data, protos->len) != 0) {
+ ngx_ssl_error(NGX_LOG_EMERG, ssl->log, 0,
+ "SSL_CTX_set_alpn_protos() failed");
+ return NGX_ERROR;
+ }
+
+ return NGX_OK;
+
+#else
+
+ ngx_log_error(NGX_LOG_EMERG, cf->log, 0,
+ "nginx was built with OpenSSL that lacks ALPN support");
+ return NGX_ERROR;
+
+#endif
+}
+
+
+ngx_int_t
ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *cert,
ngx_int_t depth)
{
diff -r 716852cce913 -r 7733d946e265 src/event/ngx_event_openssl.h
--- a/src/event/ngx_event_openssl.h
+++ b/src/event/ngx_event_openssl.h
@@ -153,6 +153,8 @@ ngx_int_t ngx_ssl_certificate(ngx_conf_t
ngx_str_t *cert, ngx_str_t *key, ngx_array_t *passwords);
ngx_int_t ngx_ssl_ciphers(ngx_conf_t *cf, ngx_ssl_t *ssl, ngx_str_t *ciphers,
ngx_uint_t prefer_server_ciphers);
+ngx_int_t ngx_ssl_alpn_protos(ngx_conf_t *cf, ngx_ssl_t *ssl,
+ ngx_str_t *protos);
ngx_int_t ngx_ssl_client_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
ngx_str_t *cert, ngx_int_t depth);
ngx_int_t ngx_ssl_trusted_certificate(ngx_conf_t *cf, ngx_ssl_t *ssl,
diff -r 716852cce913 -r 7733d946e265 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -652,6 +652,13 @@ static ngx_command_t ngx_http_proxy_com
offsetof(ngx_http_proxy_loc_conf_t, ssl_ciphers),
NULL },

+ { ngx_string("proxy_ssl_alpn"),
+ NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
+ ngx_conf_set_flag_slot,
+ NGX_HTTP_LOC_CONF_OFFSET,
+ offsetof(ngx_http_proxy_loc_conf_t, upstream.ssl_alpn),
+ NULL },
+
{ ngx_string("proxy_ssl_name"),
NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
ngx_http_set_complex_value_slot,
@@ -2882,6 +2889,7 @@ ngx_http_proxy_create_loc_conf(ngx_conf_
conf->upstream.intercept_errors = NGX_CONF_UNSET;

#if (NGX_HTTP_SSL)
+ conf->upstream.ssl_alpn = NGX_CONF_UNSET;
conf->upstream.ssl_session_reuse = NGX_CONF_UNSET;
conf->upstream.ssl_server_name = NGX_CONF_UNSET;
conf->upstream.ssl_verify = NGX_CONF_UNSET;
@@ -3212,6 +3220,8 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t
conf->upstream.ssl_name = prev->upstream.ssl_name;
}

+ ngx_conf_merge_value(conf->upstream.ssl_alpn,
+ prev->upstream.ssl_alpn, 0);
ngx_conf_merge_value(conf->upstream.ssl_server_name,
prev->upstream.ssl_server_name, 0);
ngx_conf_merge_value(conf->upstream.ssl_verify,
@@ -4320,6 +4330,7 @@ ngx_http_proxy_lowat_check(ngx_conf_t *c
static ngx_int_t
ngx_http_proxy_set_ssl(ngx_conf_t *cf, ngx_http_proxy_loc_conf_t *plcf)
{
+ ngx_str_t alpn;
ngx_pool_cleanup_t *cln;

plcf->upstream.ssl = ngx_pcalloc(cf->pool, sizeof(ngx_ssl_t));
@@ -4366,6 +4377,24 @@ ngx_http_proxy_set_ssl(ngx_conf_t *cf, n
return NGX_ERROR;
}

+ if (plcf->upstream.ssl_alpn) {
+
+ switch (plcf->http_version) {
+
+ case NGX_HTTP_VERSION_10:
+ ngx_str_set(&alpn, NGX_HTTP_10_ALPN_ADVERTISE);
+ break;
+
+ case NGX_HTTP_VERSION_11:
+ ngx_str_set(&alpn, NGX_HTTP_11_ALPN_ADVERTISE);
+ break;
+ }
+
+ if (ngx_ssl_alpn_protos(cf, plcf->upstream.ssl, &alpn) != NGX_OK) {
+ return NGX_ERROR;
+ }
+ }
+
if (plcf->upstream.ssl_verify) {
if (plcf->ssl_trusted_certificate.len == 0) {
ngx_log_error(NGX_LOG_EMERG, cf->log, 0,
diff -r 716852cce913 -r 7733d946e265 src/http/modules/ngx_http_ssl_module.c
--- a/src/http/modules/ngx_http_ssl_module.c
+++ b/src/http/modules/ngx_http_ssl_module.c
@@ -17,8 +17,6 @@ typedef ngx_int_t (*ngx_ssl_variable_han
#define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5"
#define NGX_DEFAULT_ECDH_CURVE "auto"

-#define NGX_HTTP_NPN_ADVERTISE "\x08http/1.1"
-

#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
static int ngx_http_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn,
diff -r 716852cce913 -r 7733d946e265 src/http/ngx_http.h
--- a/src/http/ngx_http.h
+++ b/src/http/ngx_http.h
@@ -13,6 +13,11 @@
#include <ngx_core.h>


+#define NGX_HTTP_10_ALPN_ADVERTISE "\x08http/1.0"
+#define NGX_HTTP_11_ALPN_ADVERTISE "\x08http/1.1"
+#define NGX_HTTP_NPN_ADVERTISE NGX_HTTP_11_ALPN_ADVERTISE
+
+
typedef struct ngx_http_request_s ngx_http_request_t;
typedef struct ngx_http_upstream_s ngx_http_upstream_t;
typedef struct ngx_http_cache_s ngx_http_cache_t;
diff -r 716852cce913 -r 7733d946e265 src/http/ngx_http_upstream.h
--- a/src/http/ngx_http_upstream.h
+++ b/src/http/ngx_http_upstream.h
@@ -224,6 +224,7 @@ typedef struct {

#if (NGX_HTTP_SSL || NGX_COMPAT)
ngx_ssl_t *ssl;
+ ngx_flag_t ssl_alpn;
ngx_flag_t ssl_session_reuse;

ngx_http_complex_value_t *ssl_name;
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[PATCH] Upstream: ignore read-readiness if request wasn't sent

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1491296505 25200
# Tue Apr 04 02:01:45 2017 -0700
# Node ID bff5ac3da350d8d9225d4204d8aded90fb670f3f
# Parent 716852cce9136d977b81a2d1b8b6f9fbca0dce49
Upstream: ignore read-readiness if request wasn't sent.

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 716852cce913 -r bff5ac3da350 src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2179,8 +2179,12 @@ ngx_http_upstream_process_header(ngx_htt
return;
}

- if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {
- ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
+ if (!u->request_sent) {
+ if (ngx_http_upstream_test_connect(c) != NGX_OK) {
+ ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
+ return;
+ }
+
return;
}

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[PATCH] Output chain: propagate flush and last_buf flags to send_chain()

# HG changeset patch
# User Piotr Sikora <piotrsikora@google.com>
# Date 1491708381 25200
# Sat Apr 08 20:26:21 2017 -0700
# Node ID 2a48b9b6e67d91594c1787ebf721daebf5f88c91
# Parent 716852cce9136d977b81a2d1b8b6f9fbca0dce49
Output chain: propagate flush and last_buf flags to send_chain().

Signed-off-by: Piotr Sikora <piotrsikora@google.com>

diff -r 716852cce913 -r 2a48b9b6e67d src/core/ngx_output_chain.c
--- a/src/core/ngx_output_chain.c
+++ b/src/core/ngx_output_chain.c
@@ -658,6 +658,7 @@ ngx_chain_writer(void *data, ngx_chain_t
ngx_chain_writer_ctx_t *ctx = data;

off_t size;
+ ngx_uint_t flush;
ngx_chain_t *cl, *ln, *chain;
ngx_connection_t *c;

@@ -689,9 +690,10 @@ ngx_chain_writer(void *data, ngx_chain_t

size += ngx_buf_size(in->buf);

- ngx_log_debug2(NGX_LOG_DEBUG_CORE, c->log, 0,
- "chain writer buf fl:%d s:%uO",
- in->buf->flush, ngx_buf_size(in->buf));
+ ngx_log_debug3(NGX_LOG_DEBUG_CORE, c->log, 0,
+ "chain writer buf fl:%d l:%d s:%uO",
+ in->buf->flush, in->buf->last_buf,
+ ngx_buf_size(in->buf));

cl = ngx_alloc_chain_link(ctx->pool);
if (cl == NULL) {
@@ -707,6 +709,8 @@ ngx_chain_writer(void *data, ngx_chain_t
ngx_log_debug1(NGX_LOG_DEBUG_CORE, c->log, 0,
"chain writer in: %p", ctx->out);

+ flush = 0;
+
for (cl = ctx->out; cl; cl = cl->next) {

#if 1
@@ -732,9 +736,13 @@ ngx_chain_writer(void *data, ngx_chain_t
#endif

size += ngx_buf_size(cl->buf);
+
+ if (cl->buf->flush || cl->buf->last_buf) {
+ flush = 1;
+ }
}

- if (size == 0 && !c->buffered) {
+ if (size == 0 && !flush && !c->buffered) {
return NGX_OK;
}

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Reverse proxy that forward requests to ALL upstream servers?

Hi,
I have a rather special requirement. I need to set up a reverse proxy with multiple upstream servers, and whenever a POST request comes in, I want NGINX to forward the request to ALL the upstream servers. The response code returned to the client should be the highest (worst) one among all the responses from the upstream servers. Is this doable?
Thanks,
Yongtao
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx