Channel: Nginx Forum

Re: Planned Features for gRPC Proxy

Hello!

On Fri, Mar 09, 2018 at 11:50:03AM -0500, Ian McGraw wrote:

> Hi all,
>
> I am new to the nginx community so my apologies if this is not
> the correct place for this kind of question.
>
> I see gRPC proxy is in progress for 1.13:
> https://trac.nginx.org/nginx/roadmap
>
> Does anyone know if the proxy will support host/path based
> routing for gRPC calls? I have a use case in Kubernetes where I
> am trying to expose many gRPC microservices through a single
> nginx ingress controller. I’m trying to find out if context
> based routing will be supported so I can setup rules to be able
> to proxy to different services.

Yes, it will be possible to proxy to different backend servers
based on normal server and location matching, much like it is
possible with proxy_pass and fastcgi_pass.
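
To illustrate, here is a hypothetical sketch of what such a configuration might look like, assuming a grpc_pass directive analogous to proxy_pass (the directive name, addresses, and service paths are assumptions, not confirmed by this thread):

```nginx
# Hypothetical sketch: path-based routing of gRPC calls to different
# backends. gRPC method paths look like /<package>.<Service>/<Method>,
# so location prefixes can select a backend per service.
server {
    listen 443 ssl http2;
    server_name grpc.example.com;

    location /helloworld.Greeter/ {
        grpc_pass grpc://10.0.0.1:50051;   # assumed backend address
    }

    location /echo.Echo/ {
        grpc_pass grpc://10.0.0.2:50051;   # assumed backend address
    }
}
```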

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session

Hello!

On Thu, Mar 08, 2018 at 12:16:50PM +0100, Abilio Marques wrote:

> Using NGINX 1.12.2 on MIPS (haven't tested on x86), if I set:
>
> ssl_session_cache shared:SSL:1m; # it also fails with 10m
>
>
> And the client reestablishes the connection, it
> gets: net::ERR_SSL_BAD_RECORD_MAC_ALERT when trying to reuse SSL session.
>
> Has anyone seen anything like this?
>
>
> More detail:
>
> This was tested on 1.12.2, on a MIPS CPU, using OpenSSL 1.0.2j, and built
> by gcc 4.8.3 (OpenWrt/Linaro GCC 4.8-2014.04 r47070).

This certainly works on x86, so it must be something
MIPS-specific or something specific to your particular build.

Last time I saw OpenWrt/Linaro nginx builds, they were compiled
using buggy 3rd party crossbuild patches, and didn't work due to
this (see https://trac.nginx.org/nginx/ticket/899). You may want
to check your build before trying to do anything else.

--
Maxim Dounin
http://mdounin.ru/

How can i configure proxy multiple hosts for a domain?

It's not load balancing like round robin, least conn, or ip hash.
I want to know how to proxy simultaneously to all of the registered proxy hosts for one domain.

I searched for this method, but all documents were about load balancing.
Please help me if you are aware of this problem.

Thank you in advance.

[PATCH] Contrib: vim syntax, update core and 3rd party module directives.

# HG changeset patch
# User Gena Makhomed <gmm@csdoc.com>
# Date 1519907674 -7200
# Thu Mar 01 14:34:34 2018 +0200
# Node ID 42536cf64a89641b90bc0db7223fe60703d663e0
# Parent 20f139e9ffa84f1a1db6039bbbb547cd35fc4534
Contrib: vim syntax, update core and 3rd party module directives.

diff -r 20f139e9ffa8 -r 42536cf64a89 contrib/vim/syntax/nginx.vim
--- a/contrib/vim/syntax/nginx.vim Wed Feb 28 16:56:58 2018 +0300
+++ b/contrib/vim/syntax/nginx.vim Thu Mar 01 14:34:34 2018 +0200
@@ -268,11 +268,14 @@
syn keyword ngxDirective contained http2_body_preread_size
syn keyword ngxDirective contained http2_chunk_size
syn keyword ngxDirective contained http2_idle_timeout
+syn keyword ngxDirective contained http2_max_concurrent_pushes
syn keyword ngxDirective contained http2_max_concurrent_streams
syn keyword ngxDirective contained http2_max_field_size
syn keyword ngxDirective contained http2_max_header_size
syn keyword ngxDirective contained http2_max_requests
syn keyword ngxDirective contained http2_pool_size
+syn keyword ngxDirective contained http2_push
+syn keyword ngxDirective contained http2_push_preload
syn keyword ngxDirective contained http2_recv_buffer_size
syn keyword ngxDirective contained http2_recv_timeout
syn keyword ngxDirective contained http2_streams_index_size
@@ -574,6 +577,7 @@
syn keyword ngxDirective contained sub_filter_last_modified
syn keyword ngxDirective contained sub_filter_once
syn keyword ngxDirective contained sub_filter_types
+syn keyword ngxDirective contained subrequest_output_buffer_size
syn keyword ngxDirective contained tcp_nodelay
syn keyword ngxDirective contained tcp_nopush
syn keyword ngxDirective contained thread_pool
@@ -2028,6 +2032,7 @@
syn keyword ngxDirectiveThirdParty contained selective_cache_purge_query
syn keyword ngxDirectiveThirdParty contained selective_cache_purge_redis_database
syn keyword ngxDirectiveThirdParty contained selective_cache_purge_redis_host
+syn keyword ngxDirectiveThirdParty contained selective_cache_purge_redis_password
syn keyword ngxDirectiveThirdParty contained selective_cache_purge_redis_port
syn keyword ngxDirectiveThirdParty contained selective_cache_purge_redis_unix_socket

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: proxy_pass and trailing / decode uri

Hi,

> When you want nginx to replace matching part of the URI with "/",
> it will do so on the decoded/normalized URI, and will re-encode
> special characters in what's left.
>
> If you want nginx to preserve original URI as sent by the client,
> consider using proxy_pass without the URI part. That is,
> instead of
>
> proxy_pass http://127.0.0.1:82/;
>
> use
>
> proxy_pass http://127.0.0.1:82;
>
> Note no trailing "/". This way the original URI as sent by the
> client will be preserved without any modifications.
>


Thank you for your answer, but it is not correct for locations other than
'/'. With your proposal, targeting http://domain1.com/api/foo/bar, the socket
on port 82 receives /api/foo/bar. I guess the only way to remove the /api
part is "rewrite", which involves re-encoding...

Max.

Re: proxy_pass and trailing / decode uri

Sorry for double post:
>
> I guess the only way to remove the /api part is "rewrite" and involves
> re-encoding...

=> I guess the only way to remove the /api part without re-encoding the URI is
"rewrite" ...

Max

2018-03-12 9:55 GMT+01:00 max <maxima078@gmail.com>:

> Hi,
>
>> When you want nginx to replace matching part of the URI with "/",
>> it will do so on the decoded/normalized URI, and will re-encode
>> special characters in what's left.
>>
>> If you want nginx to preserve original URI as sent by the client,
>> consider using proxy_pass without the URI part. That is,
>> instead of
>>
>> proxy_pass http://127.0.0.1:82/;
>>
>> use
>>
>> proxy_pass http://127.0.0.1:82;
>>
>> Note no trailing "/". This way the original URI as sent by the
>> client will be preserved without any modifications.
>>
>
>
> Thank you for your answer but it is not correct for location different
> than '/'. With your proposal, targeting http://domain1.com/api/foo/bar,
> socket on port 82 receives: /api/foo/bar. I guess the only way to remove
> the /api part is "rewrite" and involves re-encoding...
>
> Max.
>

Re: How can i configure proxy multiple hosts for a domain?

Hi,

perhaps you should've explained your intention in a bit more detail, maybe with
some functional schema.

As far as I currently understand, you want something like port mirroring to
duplicate your network traffic.

Anyway, that's out of scope for nginx; search for "port/traffic mirroring".
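
That said, if the goal is to duplicate HTTP requests (rather than raw network traffic), nginx itself has included ngx_http_mirror_module since 1.13.4. A minimal sketch, with hypothetical backend names:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # send a copy of each request to the mirror location
        mirror /mirror;
        proxy_pass http://primary.backend.local;
    }

    location = /mirror {
        # subrequest target; responses from the mirror backend are discarded
        internal;
        proxy_pass http://secondary.backend.local$request_uri;
    }
}
```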


br,
Aziz.





> On 12 Mar 2018, at 05:56, mslee <nginx-forum@forum.nginx.org> wrote:
>
> It's not load balancing like round robin, least conn, or ip hash.
> I want to know how to proxy simultaneously to all of the registered proxy hosts
> for one domain.
>
> I searched for this method, but all documents were about load balancing.
> Please help me if you are aware of this problem.
>
> Thank you in advance.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,278997,278997#msg-278997
>


1.13.9 compile errors

I'm building a custom nginx binary on Debian 9 using the latest zlib and OpenSSL sources.

./configure \
--prefix=/nginx \
--sbin-path=/nginx/sbin/nginx-https \
--conf-path=/nginx/conf/https \
--pid-path=/run/nginx-https.pid \
--error-log-path=/log/nginx-https_error.log \
--http-log-path=/log/nginx-https_access.log \
--http-client-body-temp-path=/nginx/tmp/https_client-body \
--http-proxy-temp-path=/nginx/tmp/https_proxy \
--http-fastcgi-temp-path=/nginx/tmp/http_fastcgi \
--user=www \
--group=www \
--without-select_module \
--without-poll_module \
--without-http_ssi_module \
--without-http_userid_module \
--without-http_geo_module \
--without-http_map_module \
--without-http_split_clients_module \
--without-http_referer_module \
--without-http_uwsgi_module \
--without-http_scgi_module \
--without-http_memcached_module \
--without-http_limit_conn_module \
--without-http_limit_req_module \
--without-http_empty_gif_module \
--without-http_browser_module \
--without-http_upstream_hash_module \
--without-http_upstream_ip_hash_module \
--without-http_upstream_least_conn_module \
--without-http_upstream_keepalive_module \
--without-http_upstream_zone_module \
--with-threads \
--with-file-aio \
--with-zlib=/install/zlib-1.2.11 \
--with-openssl=/install/openssl-1.1.1-pre2 \
--with-http_ssl_module

At the end of "make" I got this:

objs/ngx_modules.o \
-ldl -lpthread -lpthread -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl /install/zlib-1.2.11/libz.a \
-Wl,-E
/install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a(threads_pthread.o): In function `fork_once_func':
threads_pthread.c:(.text+0x16): undefined reference to `pthread_atfork'
collect2: error: ld returned 1 exit status
objs/Makefile:223: recipe for target 'objs/nginx' failed
make[1]: *** [objs/nginx] Error 1
make[1]: Leaving directory '/install/nginx-1.13.9'
Makefile:8: recipe for target 'build' failed
make: *** [build] Error 2

The previous version, 1.13.8, and all versions before it built successfully with the same configure parameters.

Also, I've found this post: https://www.coldawn.com/compile-nginx-on-centos-7-to-enable-tls13/
As suggested there, after "configure" I modified objs/Makefile: I removed the first -lpthread and moved the second -lpthread to the end of the line. In my case it was line #331:

before:
-ldl -lpthread -lpthread -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl /install/zlib-1.2.11/libz.a \

after:
-ldl -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl /install/zlib-1.2.11/libz.a -lpthread \

and then it builds successfully.
It also succeeds when I use openssl-1.0.2n in the configure parameters.

So the problem only occurs with the combination nginx-1.13.9 + openssl-1.1.1-pre2.

My question is: will someone fix this bug in the next 1.13.10, or should we now always edit the Makefile before compiling?
Or is this not a bug and I'm just missing something?
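
The manual edit described above can also be expressed as a sed one-liner. This is a hedged sketch only, demonstrated on a copy of the link line (library paths shortened for readability) rather than on a real objs/Makefile:

```shell
# The link line from the failing build, abbreviated:
line='-ldl -lpthread -lpthread -lcrypt -lpcre libssl.a libcrypto.a -ldl libz.a \'

# Drop the early -lpthread flags and append one after libz.a, so the
# static libcrypto.a's pthread_atfork reference is resolved at the end
# of the link line.
fixed=$(printf '%s\n' "$line" | sed 's/-lpthread //g; s/libz\.a \\/libz.a -lpthread \\/')
printf '%s\n' "$fixed"

# Against a real build tree the same edit would be (run after ./configure,
# assuming -lpthread appears only on the link line):
#   sed -i 's/-lpthread //g; s/libz\.a \\/libz.a -lpthread \\/' objs/Makefile
```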

Re: proxy_pass and trailing / decode uri

Hello!

On Mon, Mar 12, 2018 at 09:55:15AM +0100, max wrote:

> > When you want nginx to replace matching part of the URI with "/",
> > it will do so on the decoded/normalized URI, and will re-encode
> > special characters in what's left.
> >
> > If you want nginx to preserve original URI as sent by the client,
> > consider using proxy_pass without the URI part. That is,
> > instead of
> >
> > proxy_pass http://127.0.0.1:82/;
> >
> > use
> >
> > proxy_pass http://127.0.0.1:82;
> >
> > Note no trailing "/". This way the original URI as sent by the
> > client will be preserved without any modifications.
> >
>
>
> Thank you for your answer but it is not correct for location different than
> '/'. With your proposal, targeting http://domain1.com/api/foo/bar, socket
> on port 82 receives: /api/foo/bar. I guess the only way to remove the /api
> part is "rewrite" and involves re-encoding...

Whether this is correct or not depends on the particular setup - in
particular, it depends on what your backend expects as a URI. If
your backend is picky about specific forms of encoding, preserving
the full original URI might be a much better option than trying to
invent hacky workarounds like the one you've linked. Obviously
enough, this might either involve re-configuring the backend to
accept full original URIs, or hosting things on a dedicated
domain.
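
The two behaviors under discussion can be summarized in configuration form (the address is the one from the thread; the /api/ prefix is from the follow-up; the two blocks are alternatives, only one would appear in a real server):

```nginx
# With a URI part: nginx replaces "/api/" with "/" on the
# decoded/normalized URI and re-encodes what is left.
location /api/ {
    proxy_pass http://127.0.0.1:82/;
}

# Without a URI part: the request URI is passed to the backend
# exactly as the client sent it, "/api" prefix included.
location /api/ {
    proxy_pass http://127.0.0.1:82;
}
```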

--
Maxim Dounin
http://mdounin.ru/

Re: proxy_pass and trailing / decode uri

>
> Whether this is correct or no depends on the particular setup - in
> particular, it depends on what your backend expects as an URI. If
> your backend is picky about specific forms of encoding, preserving
> full original URI might be much better option than trying to
> invent hacky workarounds like the one you've linked. Obviously
> enough, this might either envolve re-configuring the backend to
> accept full original URIs, or hosting things on a dedicated
> domain.
>
Yes, sub-domains would be the best option, but I'm already listening on a
sub-domain and do not want to add another sub-level.
I cannot configure my backends (there are several) to listen on
/api; this is exactly why I needed to use nginx as a reverse proxy.

2018-03-12 13:28 GMT+01:00 Maxim Dounin <mdounin@mdounin.ru>:

> Hello!
>
> On Mon, Mar 12, 2018 at 09:55:15AM +0100, max wrote:
>
> > > When you want nginx to replace matching part of the URI with "/",
> > > it will do so on the decoded/normalized URI, and will re-encode
> > > special characters in what's left.
> > >
> > > If you want nginx to preserve original URI as sent by the client,
> > > consider using proxy_pass without the URI part. That is,
> > > instead of
> > >
> > > proxy_pass http://127.0.0.1:82/;
> > >
> > > use
> > >
> > > proxy_pass http://127.0.0.1:82;
> > >
> > > Note no trailing "/". This way the original URI as sent by the
> > > client will be preserved without any modifications.
> > >
> >
> >
> > Thank you for your answer but it is not correct for location different
> than
> > '/'. With your proposal, targeting http://domain1.com/api/foo/bar,
> socket
> > on port 82 receives: /api/foo/bar. I guess the only way to remove the
> /api
> > part is "rewrite" and involves re-encoding...
>
> Whether this is correct or not depends on the particular setup - in
> particular, it depends on what your backend expects as a URI. If
> your backend is picky about specific forms of encoding, preserving
> the full original URI might be a much better option than trying to
> invent hacky workarounds like the one you've linked. Obviously
> enough, this might either involve re-configuring the backend to
> accept full original URIs, or hosting things on a dedicated
> domain.
>
> --
> Maxim Dounin
> http://mdounin.ru/

Re: Planned Features for gRPC Proxy

Thanks for the reply. My team and I are eagerly awaiting these features. Is there a ticket I can follow to track their status? I wasn’t able to find one on the roadmap page.

Thanks,
-Ian

> On Mar 11, 2018, at 7:10 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>
> Hello!
>
>> On Fri, Mar 09, 2018 at 11:50:03AM -0500, Ian McGraw wrote:
>>
>> Hi all,
>>
>> I am new to the nginx community so my apologies if this is not
>> the correct place for this kind of question.
>>
>> I see gRPC proxy is in progress for 1.13:
>> https://trac.nginx.org/nginx/roadmap
>>
>> Does anyone know if the proxy will support host/path based
>> routing for gRPC calls? I have a use case in Kubernetes where I
>> am trying to expose many gRPC microservices through a single
>> nginx ingress controller. I’m trying to find out if context
>> based routing will be supported so I can setup rules to be able
>> to proxy to different services.
>
> Yes, it will be possible to proxy to different backend servers
> based on normal server and location matching, much like it is
> possible with proxy_pass and fastcgi_pass.
>
> --
> Maxim Dounin
> http://mdounin.ru/

Re: [PATCH] HTTP/2: make http2 server support http1

Is there anyone who would like to review this patch?

Sent from my iPhone

> On Mar 8, 2018, at 08:42, Haitao Lv <i@lvht.net> wrote:
>
> Sorry for disturbing. But I have to fix a buffer overflow bug.
> Here is the latest patch.
>
> Sorry. But please make your comments. Thank you.
>
>
> diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
> index 89cfe77a..c51d8ace 100644
> --- a/src/http/ngx_http_request.c
> +++ b/src/http/ngx_http_request.c
> @@ -17,6 +17,10 @@ static ssize_t ngx_http_read_request_header(ngx_http_request_t *r);
> static ngx_int_t ngx_http_alloc_large_header_buffer(ngx_http_request_t *r,
> ngx_uint_t request_line);
>
> +#if (NGX_HTTP_V2)
> +static void ngx_http_wait_v2_preface_handler(ngx_event_t *rev);
> +#endif
> +
> static ngx_int_t ngx_http_process_header_line(ngx_http_request_t *r,
> ngx_table_elt_t *h, ngx_uint_t offset);
> static ngx_int_t ngx_http_process_unique_header_line(ngx_http_request_t *r,
> @@ -321,7 +325,7 @@ ngx_http_init_connection(ngx_connection_t *c)
>
> #if (NGX_HTTP_V2)
> if (hc->addr_conf->http2) {
> - rev->handler = ngx_http_v2_init;
> + rev->handler = ngx_http_wait_v2_preface_handler;
> }
> #endif
>
> @@ -377,6 +381,110 @@ ngx_http_init_connection(ngx_connection_t *c)
> }
>
>
> +#if (NGX_HTTP_V2)
> +static void
> +ngx_http_wait_v2_preface_handler(ngx_event_t *rev)
> +{
> + size_t size;
> + ssize_t n;
> + ngx_buf_t *b;
> + ngx_connection_t *c;
> + static const u_char preface[] = "PRI";
> +
> + c = rev->data;
> + size = sizeof(preface) - 1;
> +
> + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
> + "http wait h2 preface handler");
> +
> + if (rev->timedout) {
> + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + if (c->close) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + b = c->buffer;
> +
> + if (b == NULL) {
> + b = ngx_create_temp_buf(c->pool, size);
> + if (b == NULL) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + c->buffer = b;
> +
> + } else if (b->start == NULL) {
> +
> + b->start = ngx_palloc(c->pool, size);
> + if (b->start == NULL) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + b->pos = b->start;
> + b->last = b->start;
> + b->end = b->last + size;
> + }
> +
> + n = c->recv(c, b->last, b->end - b->last);
> +
> + if (n == NGX_AGAIN) {
> +
> + if (!rev->timer_set) {
> + ngx_add_timer(rev, c->listening->post_accept_timeout);
> + ngx_reusable_connection(c, 1);
> + }
> +
> + if (ngx_handle_read_event(rev, 0) != NGX_OK) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + /*
> + * We are trying to not hold c->buffer's memory for an idle connection.
> + */
> +
> + if (ngx_pfree(c->pool, b->start) == NGX_OK) {
> + b->start = NULL;
> + }
> +
> + return;
> + }
> +
> + if (n == NGX_ERROR) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + if (n == 0) {
> + ngx_log_error(NGX_LOG_INFO, c->log, 0,
> + "client closed connection");
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + b->last += n;
> +
> + if (b->last == b->end) {
> + /* b will be freed in ngx_http_v2_init/ngx_http_wait_request_handler */
> +
> + if (ngx_strncmp(b->start, preface, size) == 0) {
> + ngx_http_v2_init(rev);
> + } else {
> + rev->handler = ngx_http_wait_request_handler;
> + ngx_http_wait_request_handler(rev);
> + }
> + }
> +}
> +#endif
> +
> +
> static void
> ngx_http_wait_request_handler(ngx_event_t *rev)
> {
> @@ -430,6 +538,22 @@ ngx_http_wait_request_handler(ngx_event_t *rev)
> b->pos = b->start;
> b->last = b->start;
> b->end = b->last + size;
> + } else {
> +
> + p = ngx_palloc(c->pool, size);
> + if (p == NULL) {
> + ngx_http_close_connection(c);
> + return;
> + }
> +
> + n = b->last - b->start;
> + ngx_memcpy(p, b->start, n);
> + ngx_pfree(c->pool, b->start);
> +
> + b->start = p;
> + b->pos = b->start;
> + b->last = b->start + n;
> + b->end = b->last + size;
> }
>
> n = c->recv(c, b->last, size);
> diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c
> index d9df0f90..e36bf382 100644
> --- a/src/http/v2/ngx_http_v2.c
> +++ b/src/http/v2/ngx_http_v2.c
> @@ -231,6 +231,8 @@ static ngx_http_v2_parse_header_t ngx_http_v2_parse_headers[] = {
> void
> ngx_http_v2_init(ngx_event_t *rev)
> {
> + size_t size;
> + ngx_buf_t *b;
> ngx_connection_t *c;
> ngx_pool_cleanup_t *cln;
> ngx_http_connection_t *hc;
> @@ -262,6 +264,23 @@ ngx_http_v2_init(ngx_event_t *rev)
> return;
> }
>
> + b = c->buffer;
> +
> + if (b != NULL) {
> + size = b->last - b->start;
> +
> + if (size > h2mcf->recv_buffer_size) {
> + size = h2mcf->recv_buffer_size;
> + }
> +
> + ngx_memcpy(h2mcf->recv_buffer, b->start, size);
> + h2c->state.buffer_used = size;
> +
> + ngx_pfree(c->pool, b->start);
> + ngx_pfree(c->pool, b);
> + c->buffer = NULL;
> + }
> +
> h2c->connection = c;
> h2c->http_connection = hc;
>
> @@ -381,13 +400,15 @@ ngx_http_v2_read_handler(ngx_event_t *rev)
> h2mcf = ngx_http_get_module_main_conf(h2c->http_connection->conf_ctx,
> ngx_http_v2_module);
>
> - available = h2mcf->recv_buffer_size - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
> + available = h2mcf->recv_buffer_size - h2c->state.buffer_used - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
>
> do {
> p = h2mcf->recv_buffer;
>
> - ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
> end = p + h2c->state.buffer_used;
> + if (h2c->state.buffer_used == 0) {
> + ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
> + }
>
> n = c->recv(c, end, available);
>
>
>
>
>> On Mar 6, 2018, at 03:14, Maxim Dounin <mdounin@mdounin.ru> wrote:
>>
>> Hello!
>>
>> On Mon, Mar 05, 2018 at 11:52:57PM +0800, Haitao Lv wrote:
>>
>> [...]
>>
>>>> Overall, the patch looks like a hack and introduces too much
>>>> complexity for this feature. While I understand the reasoning,
>>>> the proposed implementation cannot be accepted.
>>>
>>> Could you clarify whether it is this feature that is not accepted, or just this patch?
>>>
>>> If this feature is not needed, I will terminate this thread.
>>>
>>> If this patch only looks like a hack, would you like to offer any advice on
>>> writing code with a good smell?
>>
>> We've previously discussed this with Valentin, and our position is
>> as follows:
>>
>> - The feature itself (autodetection between HTTP/2 and HTTP/1.x
>> protocols) might be usable, and we can consider adding it if
>> there will be a good and simple enough patch. (Moreover, we
>> think that this probably should be the default if "listen ...
>> http2" is configured - that is, no "http1" option.)
>>
>> - The patch suggested certainly doesn't meet the above criteria,
>> and it does not look like it can be fixed.
>>
>> We don't know if a good and simple enough implementation is at all
>> possible though. One of the possible approaches was already
>> proposed by Valentin (detect HTTP/2 or HTTP/1.x before starting
>> processing, may be similar to how we handle http-to-https
>> requests), but it's not immediately clear if it will work or not.
>> Sorry, but please don't expect any of us to provide further
>> guidance.
>>
>> --
>> Maxim Dounin
>> http://mdounin.ru/





Re: [PATCH] HTTP/2: make http2 server support http1

I will look when time permits.

But at first glance the patch still looks more complicated than it should be.

wbr, Valentin V. Bartenev


On Tuesday 13 March 2018 00:44:28 吕海涛 wrote:
> Is there anyone who would like to review this patch?
>
> Sent from my iPhone
>
> > On Mar 8, 2018, at 08:42, Haitao Lv <i@lvht.net> wrote:
> >
> > Sorry for disturbing. But I have to fix a buffer overflow bug.
> > Here is the latest patch.
> >
> > Sorry. But please make your comments. Thank you.
> >
> >
> > diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
> > index 89cfe77a..c51d8ace 100644
> > --- a/src/http/ngx_http_request.c
> > +++ b/src/http/ngx_http_request.c
> > @@ -17,6 +17,10 @@ static ssize_t ngx_http_read_request_header(ngx_http_request_t *r);
> > static ngx_int_t ngx_http_alloc_large_header_buffer(ngx_http_request_t *r,
> > ngx_uint_t request_line);
> >
> > +#if (NGX_HTTP_V2)
> > +static void ngx_http_wait_v2_preface_handler(ngx_event_t *rev);
> > +#endif
> > +
> > static ngx_int_t ngx_http_process_header_line(ngx_http_request_t *r,
> > ngx_table_elt_t *h, ngx_uint_t offset);
> > static ngx_int_t ngx_http_process_unique_header_line(ngx_http_request_t *r,
> > @@ -321,7 +325,7 @@ ngx_http_init_connection(ngx_connection_t *c)
> >
> > #if (NGX_HTTP_V2)
> > if (hc->addr_conf->http2) {
> > - rev->handler = ngx_http_v2_init;
> > + rev->handler = ngx_http_wait_v2_preface_handler;
> > }
> > #endif
> >
> > @@ -377,6 +381,110 @@ ngx_http_init_connection(ngx_connection_t *c)
> > }
> >
> >
> > +#if (NGX_HTTP_V2)
> > +static void
> > +ngx_http_wait_v2_preface_handler(ngx_event_t *rev)
> > +{
> > + size_t size;
> > + ssize_t n;
> > + ngx_buf_t *b;
> > + ngx_connection_t *c;
> > + static const u_char preface[] = "PRI";
> > +
> > + c = rev->data;
> > + size = sizeof(preface) - 1;
> > +
> > + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
> > + "http wait h2 preface handler");
> > +
> > + if (rev->timedout) {
> > + ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + if (c->close) {
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + b = c->buffer;
> > +
> > + if (b == NULL) {
> > + b = ngx_create_temp_buf(c->pool, size);
> > + if (b == NULL) {
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + c->buffer = b;
> > +
> > + } else if (b->start == NULL) {
> > +
> > + b->start = ngx_palloc(c->pool, size);
> > + if (b->start == NULL) {
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + b->pos = b->start;
> > + b->last = b->start;
> > + b->end = b->last + size;
> > + }
> > +
> > + n = c->recv(c, b->last, b->end - b->last);
> > +
> > + if (n == NGX_AGAIN) {
> > +
> > + if (!rev->timer_set) {
> > + ngx_add_timer(rev, c->listening->post_accept_timeout);
> > + ngx_reusable_connection(c, 1);
> > + }
> > +
> > + if (ngx_handle_read_event(rev, 0) != NGX_OK) {
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + /*
> > + * We are trying to not hold c->buffer's memory for an idle connection.
> > + */
> > +
> > + if (ngx_pfree(c->pool, b->start) == NGX_OK) {
> > + b->start = NULL;
> > + }
> > +
> > + return;
> > + }
> > +
> > + if (n == NGX_ERROR) {
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + if (n == 0) {
> > + ngx_log_error(NGX_LOG_INFO, c->log, 0,
> > + "client closed connection");
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + b->last += n;
> > +
> > + if (b->last == b->end) {
> > + /* b will be freed in ngx_http_v2_init/ngx_http_wait_request_handler */
> > +
> > + if (ngx_strncmp(b->start, preface, size) == 0) {
> > + ngx_http_v2_init(rev);
> > + } else {
> > + rev->handler = ngx_http_wait_request_handler;
> > + ngx_http_wait_request_handler(rev);
> > + }
> > + }
> > +}
> > +#endif
> > +
> > +
> > static void
> > ngx_http_wait_request_handler(ngx_event_t *rev)
> > {
> > @@ -430,6 +538,22 @@ ngx_http_wait_request_handler(ngx_event_t *rev)
> > b->pos = b->start;
> > b->last = b->start;
> > b->end = b->last + size;
> > + } else {
> > +
> > + p = ngx_palloc(c->pool, size);
> > + if (p == NULL) {
> > + ngx_http_close_connection(c);
> > + return;
> > + }
> > +
> > + n = b->last - b->start;
> > + ngx_memcpy(p, b->start, n);
> > + ngx_pfree(c->pool, b->start);
> > +
> > + b->start = p;
> > + b->pos = b->start;
> > + b->last = b->start + n;
> > + b->end = b->last + size;
> > }
> >
> > n = c->recv(c, b->last, size);
> > diff --git a/src/http/v2/ngx_http_v2.c b/src/http/v2/ngx_http_v2.c
> > index d9df0f90..e36bf382 100644
> > --- a/src/http/v2/ngx_http_v2.c
> > +++ b/src/http/v2/ngx_http_v2.c
> > @@ -231,6 +231,8 @@ static ngx_http_v2_parse_header_t ngx_http_v2_parse_headers[] = {
> > void
> > ngx_http_v2_init(ngx_event_t *rev)
> > {
> > + size_t size;
> > + ngx_buf_t *b;
> > ngx_connection_t *c;
> > ngx_pool_cleanup_t *cln;
> > ngx_http_connection_t *hc;
> > @@ -262,6 +264,23 @@ ngx_http_v2_init(ngx_event_t *rev)
> > return;
> > }
> >
> > + b = c->buffer;
> > +
> > + if (b != NULL) {
> > + size = b->last - b->start;
> > +
> > + if (size > h2mcf->recv_buffer_size) {
> > + size = h2mcf->recv_buffer_size;
> > + }
> > +
> > + ngx_memcpy(h2mcf->recv_buffer, b->start, size);
> > + h2c->state.buffer_used = size;
> > +
> > + ngx_pfree(c->pool, b->start);
> > + ngx_pfree(c->pool, b);
> > + c->buffer = NULL;
> > + }
> > +
> > h2c->connection = c;
> > h2c->http_connection = hc;
> >
> > @@ -381,13 +400,15 @@ ngx_http_v2_read_handler(ngx_event_t *rev)
> > h2mcf = ngx_http_get_module_main_conf(h2c->http_connection->conf_ctx,
> > ngx_http_v2_module);
> >
> > - available = h2mcf->recv_buffer_size - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
> > + available = h2mcf->recv_buffer_size - h2c->state.buffer_used - 2 * NGX_HTTP_V2_STATE_BUFFER_SIZE;
> >
> > do {
> > p = h2mcf->recv_buffer;
> >
> > - ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
> > end = p + h2c->state.buffer_used;
> > + if (h2c->state.buffer_used == 0) {
> > + ngx_memcpy(p, h2c->state.buffer, NGX_HTTP_V2_STATE_BUFFER_SIZE);
> > + }
> >
> > n = c->recv(c, end, available);
> >
> >
> >
> >
> >> On Mar 6, 2018, at 03:14, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >>
> >> Hello!
> >>
> >> On Mon, Mar 05, 2018 at 11:52:57PM +0800, Haitao Lv wrote:
> >>
> >> [...]
> >>
> >>>> Overall, the patch looks like a hack and introduces too much
> >>>> complexity for this feature. While I understand the reasoning,
> >>>> the proposed implementation cannot be accepted.
> >>>
> >>> Could you clarify that whether is this feature not accepted or this patch?
> >>>
> >>> If this feature is not needed, I will terminate this thread.
> >>>
> >>> If this patch only looks like a hack, would you like offer any advice to write
> >>> code with good smell?
> >>
> >> We've previously discussed this with Valentin, and our position is
> >> as follows:
> >>
> >> - The feature itself (autodetection between HTTP/2 and HTTP/1.x
> >> protocols) might be usable, and we can consider adding it if
> >> there will be a good and simple enough patch. (Moreover, we
> >> think that this probably should be the default if "listen ...
> >> http2" is configured - that is, no "http1" option.)
> >>
> >> - The patch suggested certainly doesn't meet the above criteria,
> >> and it does not look like it can be fixed.
> >>
> >> We don't know if a good and simple enough implementation is at all
> >> possible though. One of the possible approaches was already
> >> proposed by Valentin (detect HTTP/2 or HTTP/1.x before starting
> >> processing, may be similar to how we handle http-to-https
>> requests), but it's not immediately clear if it will work or not.
> >> Sorry, but please don't expect any of us to provide further
> >> guidance.
> >>
>
>
>
>
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel


Nginx as reverse web proxy serves the Apache default page for all sites.

I'm very confused. I am using Nginx as a reverse web proxy in a VM environment with 4 VM web servers. I have 4 conf files, one directing to each site. The last 2 mornings I have found that all 4 sites default to an Apache start page. There is no Apache on the Nginx machine, so I assume it's showing the Apache page from one of the 4 servers, presumably the one listed as default in the conf files.

I have to reboot my router and the Nginx machine to get it to come back. I am using an IPCOP open source router.

Any ideas why this is happening?
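
One thing worth checking, as a hedged sketch (not from the original post): if a request's Host header matches none of the four server_name values, nginx falls back to the default server, which then proxies to whichever backend that conf points at. An explicit catch-all default server makes that fallthrough visible instead of silently serving a backend's Apache page:

```nginx
# Minimal sketch: an explicit catch-all so requests that match none of the
# four site conf files are answered by nginx itself rather than proxied
# to a backend VM. Names and the choice of 444 are illustrative.
server {
    listen 80 default_server;
    server_name _;
    return 444;  # close the connection without sending a response
}
```

If the Apache page stops appearing with this in place, the original symptom was a Host/server_name mismatch rather than a problem on the backends.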

Nginx phpmyadmin redirecting to homepage

Hello, I have been struggling to find a solution to this and could use some help, please. I created a webserver using WordPress and added phpMyAdmin. I am able to log in to phpMyAdmin, and I created a symbolic link; however, it redirects me to the main page. This is a website that I made to try to learn about web development. (If you see anything else wrong with the config file, please point it out.)

The address bar displays: https://example.com/?token=a token is here.

The address I need to access is https://example.com/newsymboliclink or
https://example.com/newsymboliclink/?token=a token is here. or
https://example.com/newsymboliclink/index.php?token=a token is here.
(I'm not sure which one is the best one to use.)

I've been trying try_files and returns but can't figure it out. Hoping someone can help.


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The nginx configuration is:

# HTTP SERVER

server {
listen 80;
server_name example.com www.example.com;
return 301 https://$host$request_uri;
}

server {
listen 443 ssl http2;
server_name example.com www.example.com;
root /var/www/example.com/html;
index index.php;

access_log /var/log/nginx/example.com.access.log;
error_log /var/log/nginx/example.com.error.log;

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

# enable session resumption to improve https performance
# http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off;

# Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

# enables server-side protection from BEAST attacks
# http://blog.ivanristic.com/2013/09/is-beast-still-a-threat.html
ssl_prefer_server_ciphers on;

# disable SSLv3(enabled by default since nginx 0.8.19) since it's less secure than $
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# ciphers chosen for forward secrecy and compatibility
# http://blog.ivanristic.com/2013/08/configuring-apache-nginx-and-openssl-for-forwar$
ssl_ciphers 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESG$

# config to enable HSTS(HTTP Strict Transport Security) https://developer.mozilla.or$
# to avoid ssl stripping https://en.wikipedia.org/wiki/SSL_stripping#SSL_stripping
# also https://hstspreload.org/
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as$
location ~* /(?:uploads|files)/.*\.php$ {
deny all;
}

# WORDPRESS PERMALINKS
location / {
try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.0-fpm-giga.sock;
}

# HTACCESS DENY ALL RULE

location ~ /\.ht {
deny all;
}
}
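
One possible direction, as a sketch under assumptions rather than a confirmed fix: the WordPress permalink location sends any URI that isn't an existing file or directory, including /newsymboliclink/?token=..., to the site's /index.php. A more specific location for the symlinked directory, placed in the same server block, would take precedence over the "/" fallback. The path and directory name below are assumptions taken from the post:

```nginx
# Hedged sketch: assumes the symlink lives at
# /var/www/example.com/html/newsymboliclink with phpMyAdmin's index.php inside.
# The prefix location wins over "location /", so these requests no longer
# fall into the WordPress rewrite to /index.php.
location /newsymboliclink {
    index index.php;
    try_files $uri $uri/ /newsymboliclink/index.php?$args;
}
```

The existing `location ~ \.php$` block would still pass the phpMyAdmin PHP files to FastCGI as before.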

Re: [PATCH] HTTP/2: don't limit number of requests per HTTP/2 connection

Hey,
just as a reminder: limiting HTTP/2 connections to 1000 requests without
a graceful shutdown via 2-stage GOAWAY is still an issue, and while this
might work with browsers, you're going to break gRPC-based
microservices proxied via NGINX pretty badly. You should either
implement graceful shutdown or stop limiting the number of requests by
default.

Best regards,
Piotr Sikora

On Wed, Aug 30, 2017 at 4:14 PM, Piotr Sikora <piotrsikora@google.com> wrote:
> Hey Valentin,
>
>> This opens a vector for dos attack. There are some configurations
>> when memory can be allocated from connection pool for each request.
>> Removing a reasonable enough limit for requests per connection
>> potentially allow an attacker to grow this pool until a worker
>> process will be killed due to OOM.
>>
>> The problem should be solved by introducing "lingering close",
>> similar to HTTP/1.x.
>
> Yes, the proper solution is graceful shutdown via 2-stage GOAWAY,
> as defined in RFC7540, Section 6.8, but I don't have capacity to
> work on it now, and above patch is IMHO better than lost requests.
>
> Best regards,
> Piotr Sikora

Re: 1.13.9 compile errors

Hello!

On Mon, Mar 12, 2018 at 06:32:23AM -0400, Evgenij Krupchenko wrote:

[...]

> at the end of "make" I got this:
>
> objs/ngx_modules.o \
> -ldl -lpthread -lpthread -lcrypt -lpcre
> /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a
> /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl
> /install/zlib-1.2.11/libz.a \
> -Wl,-E
> /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a(threads_pthread.o): In
> function `fork_once_func':
> threads_pthread.c:(.text+0x16): undefined reference to `pthread_atfork'
> collect2: error: ld returned 1 exit status
> objs/Makefile:223: recipe for target 'objs/nginx' failed
> make[1]: *** [objs/nginx] Error 1
> make[1]: Leaving directory '/install/nginx-1.13.9'
> Makefile:8: recipe for target 'build' failed
> make: *** [build] Error 2
>
> previous version 1.13.8 and all before was build successfully with the same
> configure parameters.
>
> also, i've found this post:
> https://www.coldawn.com/compile-nginx-on-centos-7-to-enable-tls13/
> as suggested, after "configure" i've modified objs/Makefile: removed the
> first -lpthread and the second -lpthread moved to the end of the line. in my
> case it was the line #331:
>
> before:
> -ldl -lpthread -lpthread -lcrypt -lpcre
> /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a
> /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl
> /install/zlib-1.2.11/libz.a \
>
> after:
> -ldl -lcrypt -lpcre /install/openssl-1.1.1-pre2/.openssl/lib/libssl.a
> /install/openssl-1.1.1-pre2/.openssl/lib/libcrypto.a -ldl
> /install/zlib-1.2.11/libz.a -lpthread \
>
> and then it builds successfully.
> and also success when i'm using openssl-1.0.2n in configure parameters.
>
> so the problem only occurs in combination nginx-1.13.9 + openssl-1.1.1-pre2
>
> and my question is: will this bug be fixed in the next 1.13.10, or should
> we now always edit the makefile before compiling?
> or is this not a bug and I'm just missing something?

The problem is that OpenSSL 1.1.1-pre2 requires -lpthread for
static linking on Linux. This wasn't the case with previous
OpenSSL versions, hence nginx doesn't try to provide -lpthread for
it. The same problem will occur with any nginx version when
trying to compile with OpenSSL 1.1.1-pre2.

The following patch should fix this, please test if it works for
you:

# HG changeset patch
# User Maxim Dounin <mdounin@mdounin.ru>
# Date 1520919437 -10800
# Tue Mar 13 08:37:17 2018 +0300
# Node ID 649427794a74c74eca80c942477d893678fb6036
# Parent 0b1eb40de6da32196b21d1ed086f7030c10b40d2
Configure: fixed static compilation with OpenSSL 1.1.1-pre2.

OpenSSL now uses pthread_atfork(), and this requires -lpthread on Linux.
Introduced NGX_LIBPTHREAD to add it as appropriate, similar to existing
NGX_LIBDL.

diff -r 0b1eb40de6da -r 649427794a74 auto/lib/openssl/conf
--- a/auto/lib/openssl/conf Wed Mar 07 18:28:12 2018 +0300
+++ b/auto/lib/openssl/conf Tue Mar 13 08:37:17 2018 +0300
@@ -41,6 +41,7 @@
CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libssl.a"
CORE_LIBS="$CORE_LIBS $OPENSSL/.openssl/lib/libcrypto.a"
CORE_LIBS="$CORE_LIBS $NGX_LIBDL"
+ CORE_LIBS="$CORE_LIBS $NGX_LIBPTHREAD"

if [ "$NGX_PLATFORM" = win32 ]; then
CORE_LIBS="$CORE_LIBS -lgdi32 -lcrypt32 -lws2_32"
@@ -59,7 +60,7 @@
ngx_feature_run=no
ngx_feature_incs="#include <openssl/ssl.h>"
ngx_feature_path=
- ngx_feature_libs="-lssl -lcrypto $NGX_LIBDL"
+ ngx_feature_libs="-lssl -lcrypto $NGX_LIBDL $NGX_LIBPTHREAD"
ngx_feature_test="SSL_CTX_set_options(NULL, 0)"
. auto/feature

@@ -71,11 +72,13 @@
ngx_feature_path="/usr/local/include"

if [ $NGX_RPATH = YES ]; then
- ngx_feature_libs="-R/usr/local/lib -L/usr/local/lib -lssl -lcrypto $NGX_LIBDL"
+ ngx_feature_libs="-R/usr/local/lib -L/usr/local/lib -lssl -lcrypto"
else
- ngx_feature_libs="-L/usr/local/lib -lssl -lcrypto $NGX_LIBDL"
+ ngx_feature_libs="-L/usr/local/lib -lssl -lcrypto"
fi

+ ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD"
+
. auto/feature
fi

@@ -87,11 +90,13 @@
ngx_feature_path="/usr/pkg/include"

if [ $NGX_RPATH = YES ]; then
- ngx_feature_libs="-R/usr/pkg/lib -L/usr/pkg/lib -lssl -lcrypto $NGX_LIBDL"
+ ngx_feature_libs="-R/usr/pkg/lib -L/usr/pkg/lib -lssl -lcrypto"
else
- ngx_feature_libs="-L/usr/pkg/lib -lssl -lcrypto $NGX_LIBDL"
+ ngx_feature_libs="-L/usr/pkg/lib -lssl -lcrypto"
fi

+ ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD"
+
. auto/feature
fi

@@ -103,11 +108,13 @@
ngx_feature_path="/opt/local/include"

if [ $NGX_RPATH = YES ]; then
- ngx_feature_libs="-R/opt/local/lib -L/opt/local/lib -lssl -lcrypto $NGX_LIBDL"
+ ngx_feature_libs="-R/opt/local/lib -L/opt/local/lib -lssl -lcrypto"
else
- ngx_feature_libs="-L/opt/local/lib -lssl -lcrypto $NGX_LIBDL"
+ ngx_feature_libs="-L/opt/local/lib -lssl -lcrypto"
fi

+ ngx_feature_libs="$ngx_feature_libs $NGX_LIBDL $NGX_LIBPTHREAD"
+
. auto/feature
fi

diff -r 0b1eb40de6da -r 649427794a74 auto/unix
--- a/auto/unix Wed Mar 07 18:28:12 2018 +0300
+++ b/auto/unix Tue Mar 13 08:37:17 2018 +0300
@@ -901,6 +901,7 @@

if [ $ngx_found = yes ]; then
CORE_LIBS="$CORE_LIBS -lpthread"
+ NGX_LIBPTHREAD="-lpthread"
fi
fi


--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

nginx: [emerg] could not build test_types_hash

Hi,

I am using nginx on CentOS 7. When I use gzip with the "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" MIME type, it gives me the error below:

Mar 13 14:57:47 localhost.localdomain nginx[17289]: nginx: [emerg] could not build test_types_hash, you should increase test_types_hash_bucket_size: 64
Mar 13 14:57:47 localhost.localdomain nginx[17289]: nginx: configuration file /etc/nginx/nginx.conf test failed

Attached is my nginx conf:
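
For reference, the usual remedy for this message (a hedged sketch; the directive name comes straight from the quoted error text) is to raise the bucket size in the http block so the long MIME type name fits:

```nginx
http {
    # The spreadsheetml MIME type name is longer than the default 64-byte
    # bucket, so the types hash cannot be built; doubling the bucket size
    # is typically enough.
    types_hash_bucket_size 128;
}
```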

[nginx] Stream ssl_preread: $ssl_preread_alpn_protocols variable.

details: http://hg.nginx.org/nginx/rev/79eb4f7b6725
branches:
changeset: 7227:79eb4f7b6725
user: Roman Arutyunyan <arut@nginx.com>
date: Mon Mar 12 16:03:08 2018 +0300
description:
Stream ssl_preread: $ssl_preread_alpn_protocols variable.

The variable keeps a comma-separated list of protocol names from ALPN TLS
extension defined by RFC 7301.
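
To illustrate how the new variable might be used (an editorial sketch, not part of the commit; upstream names and ports are hypothetical), TLS connections can be routed by the advertised ALPN protocol list before the handshake is terminated:

```nginx
stream {
    # $ssl_preread_alpn_protocols holds e.g. "h2,http/1.1";
    # \b boundaries keep "h2" from matching inside other tokens.
    map $ssl_preread_alpn_protocols $upstream {
        ~\bh2\b    backend_h2;   # client offered HTTP/2
        default    backend_h1;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $upstream;
    }

    upstream backend_h2 { server 127.0.0.1:8443; }
    upstream backend_h1 { server 127.0.0.1:8444; }
}
```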

diffstat:

src/stream/ngx_stream_ssl_preread_module.c | 128 +++++++++++++++++++++++++++-
1 files changed, 122 insertions(+), 6 deletions(-)

diffs (248 lines):

diff -r 0b1eb40de6da -r 79eb4f7b6725 src/stream/ngx_stream_ssl_preread_module.c
--- a/src/stream/ngx_stream_ssl_preread_module.c Wed Mar 07 18:28:12 2018 +0300
+++ b/src/stream/ngx_stream_ssl_preread_module.c Mon Mar 12 16:03:08 2018 +0300
@@ -17,10 +17,12 @@ typedef struct {
typedef struct {
size_t left;
size_t size;
+ size_t ext;
u_char *pos;
u_char *dst;
u_char buf[4];
ngx_str_t host;
+ ngx_str_t alpn;
ngx_log_t *log;
ngx_pool_t *pool;
ngx_uint_t state;
@@ -32,6 +34,8 @@ static ngx_int_t ngx_stream_ssl_preread_
ngx_stream_ssl_preread_ctx_t *ctx, u_char *pos, u_char *last);
static ngx_int_t ngx_stream_ssl_preread_server_name_variable(
ngx_stream_session_t *s, ngx_stream_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_stream_ssl_preread_alpn_protocols_variable(
+ ngx_stream_session_t *s, ngx_stream_variable_value_t *v, uintptr_t data);
static ngx_int_t ngx_stream_ssl_preread_add_variables(ngx_conf_t *cf);
static void *ngx_stream_ssl_preread_create_srv_conf(ngx_conf_t *cf);
static char *ngx_stream_ssl_preread_merge_srv_conf(ngx_conf_t *cf, void *parent,
@@ -85,6 +89,9 @@ static ngx_stream_variable_t ngx_stream
{ ngx_string("ssl_preread_server_name"), NULL,
ngx_stream_ssl_preread_server_name_variable, 0, 0, 0 },

+ { ngx_string("ssl_preread_alpn_protocols"), NULL,
+ ngx_stream_ssl_preread_alpn_protocols_variable, 0, 0, 0 },
+
ngx_stream_null_variable
};

@@ -139,12 +146,14 @@ ngx_stream_ssl_preread_handler(ngx_strea
if (p[0] != 0x16) {
ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
"ssl preread: not a handshake");
+ ngx_stream_set_ctx(s, NULL, ngx_stream_ssl_preread_module);
return NGX_DECLINED;
}

if (p[1] != 3) {
ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
"ssl preread: unsupported SSL version");
+ ngx_stream_set_ctx(s, NULL, ngx_stream_ssl_preread_module);
return NGX_DECLINED;
}

@@ -158,6 +167,12 @@ ngx_stream_ssl_preread_handler(ngx_strea
p += 5;

rc = ngx_stream_ssl_preread_parse_record(ctx, p, p + len);
+
+ if (rc == NGX_DECLINED) {
+ ngx_stream_set_ctx(s, NULL, ngx_stream_ssl_preread_module);
+ return NGX_DECLINED;
+ }
+
if (rc != NGX_AGAIN) {
return rc;
}
@@ -175,7 +190,7 @@ static ngx_int_t
ngx_stream_ssl_preread_parse_record(ngx_stream_ssl_preread_ctx_t *ctx,
u_char *pos, u_char *last)
{
- size_t left, n, size;
+ size_t left, n, size, ext;
u_char *dst, *p;

enum {
@@ -192,7 +207,10 @@ ngx_stream_ssl_preread_parse_record(ngx_
sw_ext_header, /* extension_type, extension_data length */
sw_sni_len, /* SNI length */
sw_sni_host_head, /* SNI name_type, host_name length */
- sw_sni_host /* SNI host_name */
+ sw_sni_host, /* SNI host_name */
+ sw_alpn_len, /* ALPN length */
+ sw_alpn_proto_len, /* ALPN protocol_name length */
+ sw_alpn_proto_data /* ALPN protocol_name */
} state;

ngx_log_debug2(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
@@ -201,6 +219,7 @@ ngx_stream_ssl_preread_parse_record(ngx_
state = ctx->state;
size = ctx->size;
left = ctx->left;
+ ext = ctx->ext;
dst = ctx->dst;
p = ctx->buf;

@@ -299,10 +318,18 @@ ngx_stream_ssl_preread_parse_record(ngx_
break;

case sw_ext_header:
- if (p[0] == 0 && p[1] == 0) {
+ if (p[0] == 0 && p[1] == 0 && ctx->host.data == NULL) {
/* SNI extension */
state = sw_sni_len;
- dst = NULL;
+ dst = p;
+ size = 2;
+ break;
+ }
+
+ if (p[0] == 0 && p[1] == 16 && ctx->alpn.data == NULL) {
+ /* ALPN extension */
+ state = sw_alpn_len;
+ dst = p;
size = 2;
break;
}
@@ -313,6 +340,7 @@ ngx_stream_ssl_preread_parse_record(ngx_
break;

case sw_sni_len:
+ ext = (p[0] << 8) + p[1];
state = sw_sni_host_head;
dst = p;
size = 3;
@@ -325,14 +353,21 @@ ngx_stream_ssl_preread_parse_record(ngx_
return NGX_DECLINED;
}

- state = sw_sni_host;
size = (p[1] << 8) + p[2];

+ if (ext < 3 + size) {
+ ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
+ "ssl preread: SNI format error");
+ return NGX_DECLINED;
+ }
+ ext -= 3 + size;
+
ctx->host.data = ngx_pnalloc(ctx->pool, size);
if (ctx->host.data == NULL) {
return NGX_ERROR;
}

+ state = sw_sni_host;
dst = ctx->host.data;
break;

@@ -341,7 +376,64 @@ ngx_stream_ssl_preread_parse_record(ngx_

ngx_log_debug1(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
"ssl preread: SNI hostname \"%V\"", &ctx->host);
- return NGX_OK;
+
+ state = sw_ext;
+ dst = NULL;
+ size = ext;
+ break;
+
+ case sw_alpn_len:
+ ext = (p[0] << 8) + p[1];
+
+ ctx->alpn.data = ngx_pnalloc(ctx->pool, ext);
+ if (ctx->alpn.data == NULL) {
+ return NGX_ERROR;
+ }
+
+ state = sw_alpn_proto_len;
+ dst = p;
+ size = 1;
+ break;
+
+ case sw_alpn_proto_len:
+ size = p[0];
+
+ if (size == 0) {
+ ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
+ "ssl preread: ALPN empty protocol");
+ return NGX_DECLINED;
+ }
+
+ if (ext < 1 + size) {
+ ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
+ "ssl preread: ALPN format error");
+ return NGX_DECLINED;
+ }
+ ext -= 1 + size;
+
+ state = sw_alpn_proto_data;
+ dst = ctx->alpn.data + ctx->alpn.len;
+ break;
+
+ case sw_alpn_proto_data:
+ ctx->alpn.len += p[0];
+
+ ngx_log_debug1(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
+ "ssl preread: ALPN protocols \"%V\"", &ctx->alpn);
+
+ if (ext) {
+ ctx->alpn.data[ctx->alpn.len++] = ',';
+
+ state = sw_alpn_proto_len;
+ dst = p;
+ size = 1;
+ break;
+ }
+
+ state = sw_ext;
+ dst = NULL;
+ size = 0;
+ break;
}

if (left < size) {
@@ -354,6 +446,7 @@ ngx_stream_ssl_preread_parse_record(ngx_
ctx->state = state;
ctx->size = size;
ctx->left = left;
+ ctx->ext = ext;
ctx->dst = dst;

return NGX_AGAIN;
@@ -384,6 +477,29 @@ ngx_stream_ssl_preread_server_name_varia


static ngx_int_t
+ngx_stream_ssl_preread_alpn_protocols_variable(ngx_stream_session_t *s,
+ ngx_variable_value_t *v, uintptr_t data)
+{
+ ngx_stream_ssl_preread_ctx_t *ctx;
+
+ ctx = ngx_stream_get_module_ctx(s, ngx_stream_ssl_preread_module);
+
+ if (ctx == NULL) {
+ v->not_found = 1;
+ return NGX_OK;
+ }
+
+ v->valid = 1;
+ v->no_cacheable = 0;
+ v->not_found = 0;
+ v->len = ctx->alpn.len;
+ v->data = ctx->alpn.data;
+
+ return NGX_OK;
+}
+
+
+static ngx_int_t
ngx_stream_ssl_preread_add_variables(ngx_conf_t *cf)
{
ngx_stream_variable_t *var, *v;

[nginx] Style.

details: http://hg.nginx.org/nginx/rev/0f811890f2f0
branches:
changeset: 7228:0f811890f2f0
user: Roman Arutyunyan <arut@nginx.com>
date: Mon Mar 12 18:38:53 2018 +0300
description:
Style.

diffstat:

src/stream/ngx_stream_ssl_preread_module.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diffs (16 lines):

diff -r 79eb4f7b6725 -r 0f811890f2f0 src/stream/ngx_stream_ssl_preread_module.c
--- a/src/stream/ngx_stream_ssl_preread_module.c Mon Mar 12 16:03:08 2018 +0300
+++ b/src/stream/ngx_stream_ssl_preread_module.c Mon Mar 12 18:38:53 2018 +0300
@@ -437,9 +437,9 @@ ngx_stream_ssl_preread_parse_record(ngx_
}

if (left < size) {
- ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
- "ssl preread: failed to parse handshake");
- return NGX_DECLINED;
+ ngx_log_debug0(NGX_LOG_DEBUG_STREAM, ctx->log, 0,
+ "ssl preread: failed to parse handshake");
+ return NGX_DECLINED;
}
}

_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel