Channel: Nginx Forum

Re[2]: Confusion about the 400 response

OK, I'll sort that out.
A little more of your time... Not quite on topic, but almost. It's about how the server section is chosen to handle a request.

I'm slightly confused about what depends on what: does $host depend on $server_name, or the other way around?
Here is how I understand it.

1. First, an invalid request:
echo -e 'HEAD http://www.other-domain.com/some-path HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
How it all happens (IMHO):
1.1. The value of $host is taken from the request line: $host = www.other-domain.com
The header ($http_host = www.my-domain.com) is ignored in this case.
1.2. A section matching the value of $host is looked up for the given port (80).
1.3. No such section exists, so the request is passed to the default one, and we get $server_name = _

----------------------------------------------------
2. Now a valid request:
echo -e 'HEAD / HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
2.1. There is no host in the request line, so it is taken from the header ($http_host = www.my-domain.com).
The value of $host comes from $http_host: $host = www.my-domain.com
2.2. A section matching the value of $host is looked up for the given port (80).
2.3. The request is passed to it, and we get $server_name = www.my-domain.com

----------------------------------------------------
3. Again an invalid request, this time with an empty $http_host:
echo -e 'HEAD / HTTP/1.1\n''host:\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
3.1. The values are $host = '' and $http_host = ''
3.2. A section matching the value of $host is looked up for the given port (80).
3.3. No such section exists, so the request is passed to the default one, and we get $server_name = _
3.4. $host takes the value of $server_name, i.e. $host = _
That is, unlike example 2, $server_name is not derived from $host; instead, $host is derived from $server_name.

Do I understand the algorithm correctly?

>Monday, 20 November 2017, 16:24 +03:00 from Maxim Dounin <mdounin@mdounin.ru>:
>
>Hello!
>
>On Mon, Nov 20, 2017 at 03:43:16PM +0300, CoDDoC wrote:
>
>> I understood that. A bot fired off a request and quickly fled so as not to get banned. It got banned anyway :)
>> How can I emulate such a situation?
>
>I thought I had written it quite unambiguously:
>
>> > If the client closed the connection without sending the complete
>> > request, then ...
>
>So emulate it exactly that way: close the connection without sending
>the complete request.
>
>[...]
>
>--
>Maxim Dounin
>http://mdounin.ru/



Nginx reload intermittently fails when the protocol specified in the proxy_pass directive is HTTPS

I am trying to use nginx as a reverse proxy with upstream SSL. For this, I am using the below directive in the nginx configuration file

proxy_pass https://<upstream_block_file_name>;

where "<upstream_block_file_name>" is another file which has the list of upstream servers.

upstream <upstream_block_file_name> {
server <IP_address_of_upstream_server>:<Port> weight=1;
keepalive 100;
}

With this configuration, if I try to reload the Nginx configuration, it fails intermittently with the below error message

nginx: [emerg] host not found in upstream "<upstream_block_file_name>"

However, if I change the protocol mentioned in the proxy_pass directive from https to http, then the reload goes through.

Could anyone please explain what mistake I might be doing here?

Thanks in advance.
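
For reference, a concrete sketch of the configuration described above (every name and address here is a placeholder, not taken from the original post):

upstream backend_pool {                  # stands in for <upstream_block_file_name>
    server 192.0.2.10:8443 weight=1;     # placeholder upstream address and port
    keepalive 100;
}

server {
    listen 80;
    location / {
        proxy_pass https://backend_pool; # the reload only fails when this is https
    }
}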

Re: Re[2]: Confusion about the 400 response

On Mon, Nov 20, 2017 at 04:43:05PM +0300, CoDDoC wrote:

> OK, I'll sort that out.
> A little more of your time... Not quite on topic, but almost. It's about how the server section is chosen to handle a request.
>
> I'm slightly confused about what depends on what: does $host depend on $server_name, or the other way around?
> Here is how I understand it.
>
> 1. First, an invalid request:
> echo -e 'HEAD http://www.other-domain.com/some-path HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
> How it all happens (IMHO):
> 1.1. The value of $host is taken from the request line: $host = www.other-domain.com
> The header ($http_host = www.my-domain.com) is ignored in this case.

Only a proxy may do that (a forward one at that, not a reverse one); for a www server this
is an invalid request. Respond with 500 or 400, and forget about the section.


Re: Confusion about the 400 response

Hello!

On Mon, Nov 20, 2017 at 04:43:05PM +0300, CoDDoC wrote:

> OK, I'll sort that out.
> A little more of your time... Not quite on topic, but almost. It's about how the server section is chosen to handle a request.
>
> I'm slightly confused about what depends on what: does $host depend on $server_name, or the other way around?
> Here is how I understand it.
>
> 1. First, an invalid request:
> echo -e 'HEAD http://www.other-domain.com/some-path HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
> How it all happens (IMHO):
> 1.1. The value of $host is taken from the request line: $host = www.other-domain.com
> The header ($http_host = www.my-domain.com) is ignored in this case.
> 1.2. A section matching the value of $host is looked up for the given port (80).
> 1.3. No such section exists, so the request is passed to the default one, and we get $server_name = _
>
> ----------------------------------------------------
> 2. Now a valid request:
> echo -e 'HEAD / HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
> 2.1. There is no host in the request line, so it is taken from the header ($http_host = www.my-domain.com).
> The value of $host comes from $http_host: $host = www.my-domain.com
> 2.2. A section matching the value of $host is looked up for the given port (80).
> 2.3. The request is passed to it, and we get $server_name = www.my-domain.com
>
> ----------------------------------------------------
> 3. Again an invalid request, this time with an empty $http_host:
> echo -e 'HEAD / HTTP/1.1\n''host:\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
> 3.1. The values are $host = '' and $http_host = ''
> 3.2. A section matching the value of $host is looked up for the given port (80).
> 3.3. No such section exists, so the request is passed to the default one, and we get $server_name = _
> 3.4. $host takes the value of $server_name, i.e. $host = _
> That is, unlike example 2, $server_name is not derived from $host; instead, $host is derived from $server_name.
>
> Do I understand the algorithm correctly?

Yes, roughly so. If a full address is used in the request line,
$host is taken from there. Otherwise it comes from the Host header. If
the Host header is missing or empty, the server name is used, which is
also available in the $server_name variable.

Documentation is here:

http://nginx.org/ru/docs/http/ngx_http_core_module.html#var_host
http://nginx.org/ru/docs/http/ngx_http_core_module.html#server_name
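
As an illustration of the above (a minimal sketch, not from the original thread, reusing the hypothetical www.my-domain.com from the examples):

server {
    listen 80 default_server;
    server_name _;                          # selected in cases 1 and 3: $server_name = _
    return 200 "host=$host server_name=$server_name\n";
}

server {
    listen 80;
    server_name www.my-domain.com;          # selected in case 2
    return 200 "host=$host server_name=$server_name\n";
}

The return lines simply echo both variables, so each of the three test requests shows which server block handled it.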

--
Maxim Dounin
http://mdounin.ru/

Re: Issue with flooded warning and request limiting

Thank you very much for clearing this up. All I need to do is set
"limit_req_log_level warn;"; then rejections are logged as warnings and
delays as info, and since I only keep warn+ levels, the delay noise is
omitted from the logfile completely.
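
For reference, a minimal sketch of that setup (the zone name and backend are hypothetical, not from this thread):

limit_req_zone $binary_remote_addr zone=one:10m rate=200r/s;

server {
    listen 80;
    location / {
        limit_req zone=one burst=20;
        limit_req_log_level warn;   # rejections log at "warn"; delays log one level lower, at "info"
        proxy_pass http://backend;  # hypothetical upstream
    }
}

With the error_log level set to warn, the info-level delay messages never reach the log.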

---

Med venlig hilsen / Best Regards
Stephan Ryer Møller
Partner & CTO

inMobile ApS
Axel Kiers Vej 18L
DK-8270 Højbjerg

Dir. +45 82 82 66 92
E-mail: sr@inmobile.dk

Web: www.inmobile.dk
Tel: +45 88 33 66 99

2017-11-20 14:01 GMT+01:00 Maxim Dounin <mdounin@mdounin.ru>:

> Hello!
>
> On Mon, Nov 20, 2017 at 11:33:26AM +0100, Stephan Ryer wrote:
>
> > We are using nginx as a proxy server in front of our IIS servers.
> >
> > We have a client who needs to call us up to 200 times per second. Due to
> > the roundtrip time, 16 simultaneous connections are opened from the client
> > and each connection is used independently to send an https request, wait
> > for x ms and then send again.
> >
> > I have been doing some tests and looked into the throttle logic in the
> > nginx-code. It seems that when setting request limit to 200/sec it is
> > actually interpreted as “minimum 5ms per call” in the code. If we
> receive 2
> > calls at the same time, the warning log will show an “excess”-message and
> > the call will be delayed to ensure a minimum of 5ms between the calls..
> > (and if no burst is set, it will be an error message in the log and an
> > error will be returned to the client)
> >
> > We have set burst to 20 meaning, that when our client only sends 1
> request
> > at a time per connection, he will never get an error reply from nginx,
> > instead nginx just delays the call. I conclude that this is by design.
>
> Yes, the code counts average request rate, and if it sees two
> requests with just 1 ms between them, the average rate will be 1000
> requests per second. This is more than what is allowed, and hence
> nginx will either delay the second request (unless configured with
> "nodelay"), or will reject it if the configured burst size is
> reached.
>
> > The issue, however, is that a client using multiple connections naturally
> > often won't be able to time the calls between each connection. And even
> > though our burst has been set to 20, our log is spammed with warning
> > messages which I do not think should be warnings at all. There is a
> > difference between sending 2 calls at the same time and sending a total
> > of 201 requests within a second, the latter being the only case I would
> > expect to be logged as a warning.
>
> If you are not happy with log levels used, you can easily tune
> them using the limit_req_log_level directive. See
> http://nginx.org/r/limit_req_log_level for details.
>
> Note well that given the use case description, you probably don't
> need requests to be delayed at all, so consider using "limit_req
> .. nodelay;". It will avoid delaying logic altogether, thus
> allowing as many requests as burst permits.
>
> > Instead of calculating the throttling by simply looking at the last call
> > time and calculating a minimum timespan between the last call and the
> > current call, I would like the logic to be that nginx keeps a counter of
> > the number of requests within the current second, and when the second
> > expires and a new second starts, the counter is reset.
>
> This approach is not scalable. For example, it won't allow to
> configure a limit of 1 request per minute. Moreover, it can
> easily allow more requests in a single second than configured -
> for example, a client can do 200 requests at 0.999 and an additional
> 200 requests at 1.000. According to your algorithm, this is
> allowed, yet it is 400 requests in just 2 milliseconds.
>
> The current implementation is much more robust, and it can be
> configured for various use cases. In particular, if you want to
> maintain limit of 200 requests per second and want to tolerate
> cases when a client does all requests allowed within a second at
> the same time, consider:
>
> limit_req_zone $binary_remote_addr zone=one:10m rate=200r/s;
> limit_req zone=one burst=200 nodelay;
>
> This will switch off delays as already suggested above, and will
> allow burst of up to 200 requests - that is, a client is allowed
> to do all 200 requests when a second starts. (If you really want
> to allow the case with 400 requests in 2 milliseconds as described
> above, consider using burst=400.)
>
> --
> Maxim Dounin
> http://mdounin.ru/

Re: Kubernetes ingress

The patch did not help; I checked with a fresh nginx version. The workers stay in
"nginx: worker process is shutting down" for more than 10 minutes.

# nginx -V
nginx version: nginx/1.13.6
built by gcc 6.3.0 20170516 (Debian 6.3.0-18)
built with OpenSSL 1.1.0f 25 May 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx
--with-compat --with-file-aio --with-threads --with-http_addition_module
--with-http_auth_request_module --with-http_dav_module
--with-http_flv_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_mp4_module
--with-http_random_index_module --with-http_realip_module
--with-http_secure_link_module --with-http_slice_module
--with-http_ssl_module --with-http_stub_status_module
--with-http_sub_module --with-http_v2_module --with-mail
--with-mail_ssl_module --with-stream --with-stream_realip_module
--with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g
-O2 -fdebug-prefix-map=/tmp/tmp.kzg1MIPOeG/nginx-1.13.6/nginx-1.13.6=.
-specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong
-Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC'
--with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro
-Wl,-z,now -Wl,--as-needed -pie'

On 17 November 2017 at 17:24, Sargas <sargaskn@gmail.com> wrote:

> Thank you!
>
> I'll check it and report back on the results.
>
> On 17 November 2017 at 17:05, Maxim Dounin <mdounin@mdounin.ru>
> wrote:
>
>> Hello!
>>
>> On Fri, Nov 17, 2017 at 02:17:47PM +0200, Sargas wrote:
>>
>> > > strace shows that the process is serving connections
>> > > https://pastebin.com/N0Y4AANj
>> > > And generally the processes do terminate after some time. But the
>> > > time is not predictable. And in a DEV environment nginx reloads can
>> > > happen every 10 minutes.
>> > >
>> > > I'd like to understand what else can be tuned.
>> > >
>> > > On 8 November 2017 at 14:45, Sargas <sargaskn@gmail.com>
>> > > wrote:
>> > >
>> > >> Greetings!
>> > >>
>> > >> I'm using the ingress https://github.com/nginxinc/kubernetes-ingress ,
>> > >> and ran into a problem with websockets. After an nginx reload, workers
>> > >> are left hanging:
>> > >> nginx 762 0.0 0.0 89284 11292 ? S Nov07 0:15 nginx:
>> > >> worker process is shutting down
>> > >> nginx 26321 0.0 0.0 88008 10196 ? S Nov07 0:18 nginx:
>> > >> worker process is shutting down
>> > >>
>> > >> The developers added websocket ping frames to the nodejs service to
>> > >> check that connections are alive, but the workers can still hang
>> > >> around anywhere from several hours up to a day.
>> > >> I added worker_shutdown_timeout 1m; to the config:
>> > >> http://nginx.org/ru/docs/ngx_core_module.html#worker_shutdown_timeout
>> > >> I expected all workers to terminate after a minute, but that doesn't
>> > >> happen.
>>
>> It's worth looking at the patch here, it should help:
>>
>> http://mailman.nginx.org/pipermail/nginx/2017-November/055130.html
>>
>> --
>> Maxim Dounin
>> http://mdounin.ru/

Re: Re[2]: Confusion about the 400 response

Hello!

On Mon, Nov 20, 2017 at 04:55:17PM +0300, Slawa Olhovchenkov wrote:

> On Mon, Nov 20, 2017 at 04:43:05PM +0300, CoDDoC wrote:
>
> > OK, I'll sort that out.
> > A little more of your time... Not quite on topic, but almost. It's about how the server section is chosen to handle a request.
> >
> > I'm slightly confused about what depends on what: does $host depend on $server_name, or the other way around?
> > Here is how I understand it.
> >
> > 1. First, an invalid request:
> > echo -e 'HEAD http://www.other-domain.com/some-path HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
> > How it all happens (IMHO):
> > 1.1. The value of $host is taken from the request line: $host = www.other-domain.com
> > The header ($http_host = www.my-domain.com) is ignored in this case.
>
> Only a proxy may do that (a forward one at that, not a reverse one); for a www
> server this is an invalid request. Respond with 500 or 400, and forget about the section.

Not quite. A quote from RFC 2616,
https://tools.ietf.org/html/rfc2616#section-5.1.2:

To allow for transition to absoluteURIs in all requests in future
versions of HTTP, all HTTP/1.1 servers MUST accept the absoluteURI
form in requests, even though HTTP/1.1 clients will only generate
them in requests to proxies.

The same text appears in RFC 7230, section 5.3.2.

--
Maxim Dounin
http://mdounin.ru/

[nginx] Fixed worker_shutdown_timeout in various cases.

details: http://hg.nginx.org/nginx/rev/9c29644f6d03
branches:
changeset: 7156:9c29644f6d03
user: Maxim Dounin <mdounin@mdounin.ru>
date: Mon Nov 20 16:31:07 2017 +0300
description:
Fixed worker_shutdown_timeout in various cases.

The ngx_http_upstream_process_upgraded() function did not handle the c->close
request, and upgraded connections do not use the write filter. As a result,
worker_shutdown_timeout did not affect upgraded connections (ticket #1419).
The fix is to handle c->close in the ngx_http_request_handler() function, thus
covering most of the possible cases in http handling.

Additionally, mail proxying handled neither c->close nor c->error, and thus
worker_shutdown_timeout did not work for mail connections. The fix is to add
c->close handling to ngx_mail_proxy_handler().

Also, added explicit handling of c->close to the stream proxy,
ngx_stream_proxy_process_connection(). This improves worker_shutdown_timeout
handling in stream: it will no longer wait for some data to be transferred
in a connection before closing it, and will also provide appropriate
logging at the "info" level.

diffstat:

src/http/ngx_http_request.c | 7 +++++++
src/mail/ngx_mail_proxy_module.c | 7 +++++--
src/stream/ngx_stream_proxy_module.c | 6 ++++++
3 files changed, 18 insertions(+), 2 deletions(-)

diffs (52 lines):

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -2225,6 +2225,13 @@ ngx_http_request_handler(ngx_event_t *ev
ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,
"http run request: \"%V?%V\"", &r->uri, &r->args);

+ if (c->close) {
+ r->main->count++;
+ ngx_http_terminate_request(r, 0);
+ ngx_http_run_posted_requests(c);
+ return;
+ }
+
if (ev->delayed && ev->timedout) {
ev->delayed = 0;
ev->timedout = 0;
diff --git a/src/mail/ngx_mail_proxy_module.c b/src/mail/ngx_mail_proxy_module.c
--- a/src/mail/ngx_mail_proxy_module.c
+++ b/src/mail/ngx_mail_proxy_module.c
@@ -882,10 +882,13 @@ ngx_mail_proxy_handler(ngx_event_t *ev)
c = ev->data;
s = c->data;

- if (ev->timedout) {
+ if (ev->timedout || c->close) {
c->log->action = "proxying";

- if (c == s->connection) {
+ if (c->close) {
+ ngx_log_error(NGX_LOG_INFO, c->log, 0, "shutdown timeout");
+
+ } else if (c == s->connection) {
ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT,
"client timed out");
c->timedout = 1;
diff --git a/src/stream/ngx_stream_proxy_module.c b/src/stream/ngx_stream_proxy_module.c
--- a/src/stream/ngx_stream_proxy_module.c
+++ b/src/stream/ngx_stream_proxy_module.c
@@ -1290,6 +1290,12 @@ ngx_stream_proxy_process_connection(ngx_
s = c->data;
u = s->upstream;

+ if (c->close) {
+ ngx_log_error(NGX_LOG_INFO, c->log, 0, "shutdown timeout");
+ ngx_stream_proxy_finalize(s, NGX_STREAM_OK);
+ return;
+ }
+
c = s->connection;
pc = u->peer.connection;


Re[2]: Confusion about the 400 response

That documentation is exactly where I got confused.
Thank you. No more questions.


>Monday, 20 November 2017, 17:24 +03:00 from Maxim Dounin <mdounin@mdounin.ru>:
>
>Hello!
>
>On Mon, Nov 20, 2017 at 04:43:05PM +0300, CoDDoC wrote:
>
>> OK, I'll sort that out.
>> A little more of your time... Not quite on topic, but almost. It's about how the server section is chosen to handle a request.
>>
>> I'm slightly confused about what depends on what: does $host depend on $server_name, or the other way around?
>> Here is how I understand it.
>>
>> 1. First, an invalid request:
>> echo -e 'HEAD http://www.other-domain.com/some-path HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
>> How it all happens (IMHO):
>> 1.1. The value of $host is taken from the request line: $host = www.other-domain.com
>> The header ($http_host = www.my-domain.com) is ignored in this case.
>> 1.2. A section matching the value of $host is looked up for the given port (80).
>> 1.3. No such section exists, so the request is passed to the default one, and we get $server_name = _
>>
>> ----------------------------------------------------
>> 2. Now a valid request:
>> echo -e 'HEAD / HTTP/1.1\n''host:www.my-domain.com\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
>> 2.1. There is no host in the request line, so it is taken from the header ($http_host = www.my-domain.com).
>> The value of $host comes from $http_host: $host = www.my-domain.com
>> 2.2. A section matching the value of $host is looked up for the given port (80).
>> 2.3. The request is passed to it, and we get $server_name = www.my-domain.com
>>
>> ----------------------------------------------------
>> 3. Again an invalid request, this time with an empty $http_host:
>> echo -e 'HEAD / HTTP/1.1\n''host:\n''user-agent:NCAT-TEST\n'| ncat www.my-domain.com 80
>> 3.1. The values are $host = '' and $http_host = ''
>> 3.2. A section matching the value of $host is looked up for the given port (80).
>> 3.3. No such section exists, so the request is passed to the default one, and we get $server_name = _
>> 3.4. $host takes the value of $server_name, i.e. $host = _
>> That is, unlike example 2, $server_name is not derived from $host; instead, $host is derived from $server_name.
>>
>> Do I understand the algorithm correctly?
>
>Yes, roughly so. If a full address is used in the request line,
>$host is taken from there. Otherwise it comes from the Host header. If
>the Host header is missing or empty, the server name is used, which is
>also available in the $server_name variable.
>
>Documentation is here:
>
>http://nginx.org/ru/docs/http/ngx_http_core_module.html#var_host
>http://nginx.org/ru/docs/http/ngx_http_core_module.html#server_name
>
>--
>Maxim Dounin
>http://mdounin.ru/

Re: Issue with flooded warning and request limiting

FWIW - I have found rate limiting very useful (with a hardware LB as well as nginx) but, because of the inherent burstiness of web traffic, I typically set my threshold to 10x or 20x my expected “reasonable peak rate.”
The rationale is that this is a very crude tool, just one of many that need to work together to protect the backend from both reasonable variations in workload and malicious use.

When combined with smart use of browser cache, CDN, microcaching in nginx, canonical names, and smart cache key design, you can get an inexpensive nginx server to offer functionality similar to a $50k F5 BigIP LTM+WAF at less than 1/10 the cost.

But all of these features need to be used delicately if you want to avoid rejecting valid requests.
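
As one sketch of the microcaching technique mentioned above (all names, paths, and values are illustrative, not a recommendation for any particular site):

proxy_cache_path /var/cache/nginx/micro keys_zone=micro:10m max_size=100m;

server {
    listen 80;
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;        # cache successful responses for one second
        proxy_cache_use_stale updating;  # serve stale content while one request refreshes it
        proxy_cache_lock on;             # collapse concurrent misses into a single fetch
        proxy_pass http://backend;       # hypothetical upstream
    }
}

Even a one-second cache lifetime can absorb large bursts on hot URLs while keeping content effectively fresh.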

Peter

Sent from my iPhone

> On Nov 20, 2017, at 9:28 AM, Stephan Ryer <sr@inmobile.dk> wrote:
>
> Thank you very much for clearing this up. All I need to do is set "limit_req_log_level warn;"; then rejections are logged as warnings and delays as info, and since I only keep warn+ levels, the delay noise is omitted from the logfile completely.
>
> ---
>
> Med venlig hilsen / Best Regards
> Stephan Ryer Møller
> Partner & CTO
>
> inMobile ApS
> Axel Kiers Vej 18L
> DK-8270 Højbjerg
>
> Dir. +45 82 82 66 92
> E-mail: sr@inmobile.dk
>
> Web: www.inmobile.dk
> Tel: +45 88 33 66 99
>
> 2017-11-20 14:01 GMT+01:00 Maxim Dounin <mdounin@mdounin.ru>:
>> Hello!
>>
>> On Mon, Nov 20, 2017 at 11:33:26AM +0100, Stephan Ryer wrote:
>>
>> > We are using nginx as a proxy server in front of our IIS servers.
>> >
>> > We have a client who needs to call us up to 200 times per second. Due to
>> > the roundtrip time, 16 simultaneous connections are opened from the client
>> > and each connection is used independently to send an https request, wait for
>> > x ms and then send again.
>> >
>> > I have been doing some tests and looked into the throttle logic in the
>> > nginx-code. It seems that when setting request limit to 200/sec it is
>> > actually interpreted as “minimum 5ms per call” in the code. If we receive 2
>> > calls at the same time, the warning log will show an “excess”-message and
>> > the call will be delayed to ensure a minimum of 5ms between the calls..
>> > (and if no burst is set, it will be an error message in the log and an
>> > error will be returned to the client)
>> >
>> > We have set burst to 20 meaning, that when our client only sends 1 request
>> > at a time per connection, he will never get an error reply from nginx,
>> > instead nginx just delays the call. I conclude that this is by design.
>>
>> Yes, the code counts average request rate, and if it sees two
>> requests with just 1 ms between them, the average rate will be 1000
>> requests per second. This is more than what is allowed, and hence
>> nginx will either delay the second request (unless configured with
>> "nodelay"), or will reject it if the configured burst size is
>> reached.
>>
>> > The issue, however, is that a client using multiple connections naturally
>> > often won't be able to time the calls between each connection. And even
>> > though our burst has been set to 20, our log is spammed with warning
>> > messages which I do not think should be warnings at all. There is a
>> > difference between sending 2 calls at the same time and sending a total
>> > of 201 requests within a second, the latter being the only case I would
>> > expect to be logged as a warning.
>>
>> If you are not happy with log levels used, you can easily tune
>> them using the limit_req_log_level directive. See
>> http://nginx.org/r/limit_req_log_level for details.
>>
>> Note well that given the use case description, you probably don't
>> need requests to be delayed at all, so consider using "limit_req
>> .. nodelay;". It will avoid delaying logic altogether, thus
>> allowing as many requests as burst permits.
>>
>> > Instead of calculating the throttling by simply looking at the last call
>> > time and calculating a minimum timespan between the last call and the
>> > current call, I would like the logic to be that nginx keeps a counter of
>> > the number of requests within the current second, and when the second
>> > expires and a new second starts, the counter is reset.
>>
>> This approach is not scalable. For example, it won't allow to
>> configure a limit of 1 request per minute. Moreover, it can
>> easily allow more requests in a single second than configured -
>> for example, a client can do 200 requests at 0.999 and an additional
>> 200 requests at 1.000. According to your algorithm, this is
>> allowed, yet it is 400 requests in just 2 milliseconds.
>>
>> The current implementation is much more robust, and it can be
>> configured for various use cases. In particular, if you want to
>> maintain limit of 200 requests per second and want to tolerate
>> cases when a client does all requests allowed within a second at
>> the same time, consider:
>>
>> limit_req_zone $binary_remote_addr zone=one:10m rate=200r/s;
>> limit_req zone=one burst=200 nodelay;
>>
>> This will switch off delays as already suggested above, and will
>> allow burst of up to 200 requests - that is, a client is allowed
>> to do all 200 requests when a second starts. (If you really want
>> to allow the case with 400 requests in 2 milliseconds as described
>> above, consider using burst=400.)
>>
>> --
>> Maxim Dounin
>> http://mdounin.ru/

[njs] Fixing Coverity warnings related to close().

details: http://hg.nginx.org/njs/rev/e51a848edba3
branches:
changeset: 429:e51a848edba3
user: Dmitry Volyntsev <xeioex@nginx.com>
date: Mon Nov 20 19:24:56 2017 +0300
description:
Fixing Coverity warnings related to close().

Coverity assumes that open() can normally return 0.

diffstat:

njs/njs_fs.c | 24 ++++++++++++------------
1 files changed, 12 insertions(+), 12 deletions(-)

diffs (69 lines):

diff -r 7ada5170b7bb -r e51a848edba3 njs/njs_fs.c
--- a/njs/njs_fs.c Mon Nov 20 19:24:56 2017 +0300
+++ b/njs/njs_fs.c Mon Nov 20 19:24:56 2017 +0300
@@ -277,8 +277,8 @@ njs_fs_read_file(njs_vm_t *vm, njs_value

done:

- if (fd > 0) {
- close(fd);
+ if (fd != -1) {
+ (void) close(fd);
}

if (description != 0) {
@@ -305,8 +305,8 @@ done:

memory_error:

- if (fd > 0) {
- close(fd);
+ if (fd != -1) {
+ (void) close(fd);
}

njs_exception_memory_error(vm);
@@ -476,8 +476,8 @@ njs_fs_read_file_sync(njs_vm_t *vm, njs_

done:

- if (fd > 0) {
- close(fd);
+ if (fd != -1) {
+ (void) close(fd);
}

if (description != 0) {
@@ -491,8 +491,8 @@ done:

memory_error:

- if (fd > 0) {
- close(fd);
+ if (fd != -1) {
+ (void) close(fd);
}

njs_exception_memory_error(vm);
@@ -696,8 +696,8 @@ static njs_ret_t njs_fs_write_file_inter

done:

- if (fd > 0) {
- close(fd);
+ if (fd != -1) {
+ (void) close(fd);
}

if (description != 0) {
@@ -868,8 +868,8 @@ njs_fs_write_file_sync_internal(njs_vm_t

done:

- if (fd > 0) {
- close(fd);
+ if (fd != -1) {
+ (void) close(fd);
}

if (description != 0) {

[njs] Fixed a typo in njs interactive test.

details: http://hg.nginx.org/njs/rev/7ada5170b7bb
branches:
changeset: 428:7ada5170b7bb
user: Dmitry Volyntsev <xeioex@nginx.com>
date: Mon Nov 20 19:24:56 2017 +0300
description:
Fixed a typo in njs interactive test.

diffstat:

njs/test/njs_interactive_test.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 4fc65a23bcfc -r 7ada5170b7bb njs/test/njs_interactive_test.c
--- a/njs/test/njs_interactive_test.c Mon Nov 20 19:24:55 2017 +0300
+++ b/njs/test/njs_interactive_test.c Mon Nov 20 19:24:56 2017 +0300
@@ -188,7 +188,7 @@ static njs_interactive_test_t njs_test[
{ nxt_string("var o = { toString: function() { return [1] } }" ENTER
"o" ENTER),
nxt_string("TypeError\n"
- "at main\n") },
+ " at main (native)\n") },

};


[njs] MemoryError reimplemented without its own prototype.

details: http://hg.nginx.org/njs/rev/5f619bcb0e7d
branches:
changeset: 430:5f619bcb0e7d
user: Dmitry Volyntsev <xeioex@nginx.com>
date: Mon Nov 20 19:24:58 2017 +0300
description:
MemoryError reimplemented without its own prototype.

MemoryError is a special preallocated immutable object. Its value type
is NJS_OBJECT_INTERNAL_ERROR. Initially the object had its own prototype
object. This introduced an inconsistency between value types and prototype
types, because some routines (for example, njs_object_prototype_to_string())
expect them to be pairwise aligned.

diffstat:

njs/njs_builtin.c | 1 -
njs/njs_error.c | 103 ++++++++++++++++++++++++----------------------
njs/njs_error.h | 1 -
njs/njs_vm.h | 9 +--
njs/test/njs_unit_test.c | 5 +-
5 files changed, 58 insertions(+), 61 deletions(-)

diffs (222 lines):

diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_builtin.c
--- a/njs/njs_builtin.c Mon Nov 20 19:24:56 2017 +0300
+++ b/njs/njs_builtin.c Mon Nov 20 19:24:58 2017 +0300
@@ -78,7 +78,6 @@ const njs_object_init_t *njs_prototype_
&njs_syntax_error_prototype_init,
&njs_type_error_prototype_init,
&njs_uri_error_prototype_init,
- &njs_memory_error_prototype_init,
};


diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_error.c
--- a/njs/njs_error.c Mon Nov 20 19:24:56 2017 +0300
+++ b/njs/njs_error.c Mon Nov 20 19:24:58 2017 +0300
@@ -498,7 +498,7 @@ njs_set_memory_error(njs_vm_t *vm)

nxt_lvlhsh_init(&object->hash);
nxt_lvlhsh_init(&object->shared_hash);
- object->__proto__ = &prototypes[NJS_PROTOTYPE_MEMORY_ERROR].object;
+ object->__proto__ = &prototypes[NJS_PROTOTYPE_INTERNAL_ERROR].object;
object->type = NJS_OBJECT_INTERNAL_ERROR;
object->shared = 1;

@@ -532,6 +532,30 @@ njs_memory_error_constructor(njs_vm_t *v
}


+static njs_ret_t
+njs_memory_error_prototype_create(njs_vm_t *vm, njs_value_t *value)
+{
+ int32_t index;
+ njs_value_t *proto;
+ njs_function_t *function;
+
+ /* MemoryError has no its own prototype. */
+
+ index = NJS_PROTOTYPE_INTERNAL_ERROR;
+
+ function = value->data.u.function;
+ proto = njs_property_prototype_create(vm, &function->object.hash,
+ &vm->prototypes[index].object);
+ if (proto == NULL) {
+ proto = (njs_value_t *) &njs_value_void;
+ }
+
+ vm->retval = *proto;
+
+ return NXT_OK;
+}
+
+
static const njs_object_prop_t njs_memory_error_constructor_properties[] =
{
/* MemoryError.name == "MemoryError". */
@@ -552,7 +576,7 @@ static const njs_object_prop_t njs_memo
{
.type = NJS_NATIVE_GETTER,
.name = njs_string("prototype"),
- .value = njs_native_getter(njs_object_prototype_create),
+ .value = njs_native_getter(njs_memory_error_prototype_create),
},
};

@@ -701,6 +725,26 @@ const njs_object_init_t njs_eval_error_
};


+static njs_ret_t
+njs_internal_error_prototype_to_string(njs_vm_t *vm, njs_value_t *args,
+ nxt_uint_t nargs, njs_index_t unused)
+{
+ if (nargs >= 1 && njs_is_object(&args[0])) {
+
+ /* MemoryError is a nonextensible internal error. */
+ if (!args[0].data.u.object->extensible) {
+ static const njs_value_t name = njs_string("MemoryError");
+
+ vm->retval = name;
+
+ return NJS_OK;
+ }
+ }
+
+ return njs_error_prototype_to_string(vm, args, nargs, unused);
+}
+
+
static const njs_object_prop_t njs_internal_error_prototype_properties[] =
{
{
@@ -708,6 +752,13 @@ static const njs_object_prop_t njs_inte
.name = njs_string("name"),
.value = njs_string("InternalError"),
},
+
+ {
+ .type = NJS_METHOD,
+ .name = njs_string("toString"),
+ .value = njs_native_function(njs_internal_error_prototype_to_string,
+ 0, 0),
+ },
};


@@ -801,51 +852,3 @@ const njs_object_init_t njs_uri_error_p
njs_uri_error_prototype_properties,
nxt_nitems(njs_uri_error_prototype_properties),
};
-
-
-static njs_ret_t
-njs_memory_error_prototype_to_string(njs_vm_t *vm, njs_value_t *args,
- nxt_uint_t nargs, njs_index_t unused)
-{
- static const njs_value_t name = njs_string("MemoryError");
-
- vm->retval = name;
-
- return NJS_OK;
-}
-
-
-static const njs_object_prop_t njs_memory_error_prototype_properties[] =
-{
- {
- .type = NJS_PROPERTY,
- .name = njs_string("name"),
- .value = njs_string("MemoryError"),
- },
-
- {
- .type = NJS_PROPERTY,
- .name = njs_string("message"),
- .value = njs_string(""),
- },
-
- {
- .type = NJS_METHOD,
- .name = njs_string("valueOf"),
- .value = njs_native_function(njs_error_prototype_value_of, 0, 0),
- },
-
- {
- .type = NJS_METHOD,
- .name = njs_string("toString"),
- .value = njs_native_function(njs_memory_error_prototype_to_string,
- 0, 0),
- },
-};
-
-
-const njs_object_init_t njs_memory_error_prototype_init = {
- nxt_string("MemoryError"),
- njs_memory_error_prototype_properties,
- nxt_nitems(njs_memory_error_prototype_properties),
-};
diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_error.h
--- a/njs/njs_error.h Mon Nov 20 19:24:56 2017 +0300
+++ b/njs/njs_error.h Mon Nov 20 19:24:58 2017 +0300
@@ -71,7 +71,6 @@ extern const njs_object_init_t njs_ref_
extern const njs_object_init_t njs_syntax_error_prototype_init;
extern const njs_object_init_t njs_type_error_prototype_init;
extern const njs_object_init_t njs_uri_error_prototype_init;
-extern const njs_object_init_t njs_memory_error_prototype_init;


#endif /* _NJS_BOOLEAN_H_INCLUDED_ */
diff -r e51a848edba3 -r 5f619bcb0e7d njs/njs_vm.h
--- a/njs/njs_vm.h Mon Nov 20 19:24:56 2017 +0300
+++ b/njs/njs_vm.h Mon Nov 20 19:24:58 2017 +0300
@@ -799,8 +799,7 @@ enum njs_prototypes_e {
NJS_PROTOTYPE_SYNTAX_ERROR,
NJS_PROTOTYPE_TYPE_ERROR,
NJS_PROTOTYPE_URI_ERROR,
- NJS_PROTOTYPE_MEMORY_ERROR,
-#define NJS_PROTOTYPE_MAX (NJS_PROTOTYPE_MEMORY_ERROR + 1)
+#define NJS_PROTOTYPE_MAX (NJS_PROTOTYPE_URI_ERROR + 1)
};


@@ -833,7 +832,8 @@ enum njs_constructor_e {
NJS_CONSTRUCTOR_SYNTAX_ERROR = NJS_PROTOTYPE_SYNTAX_ERROR,
NJS_CONSTRUCTOR_TYPE_ERROR = NJS_PROTOTYPE_TYPE_ERROR,
NJS_CONSTRUCTOR_URI_ERROR = NJS_PROTOTYPE_URI_ERROR,
- NJS_CONSTRUCTOR_MEMORY_ERROR = NJS_PROTOTYPE_MEMORY_ERROR,
+ /* MemoryError has no its own prototype. */
+ NJS_CONSTRUCTOR_MEMORY_ERROR,
#define NJS_CONSTRUCTOR_MAX (NJS_CONSTRUCTOR_MEMORY_ERROR + 1)
};

@@ -975,8 +975,7 @@ struct njs_vm_s {

/*
* MemoryError is statically allocated immutable Error object
- * with the generic type NJS_OBJECT_INTERNAL_ERROR but its own prototype
- * object NJS_PROTOTYPE_MEMORY_ERROR.
+ * with the generic type NJS_OBJECT_INTERNAL_ERROR.
*/
njs_object_t memory_error_object;

diff -r e51a848edba3 -r 5f619bcb0e7d njs/test/njs_unit_test.c
--- a/njs/test/njs_unit_test.c Mon Nov 20 19:24:56 2017 +0300
+++ b/njs/test/njs_unit_test.c Mon Nov 20 19:24:58 2017 +0300
@@ -5291,9 +5291,6 @@ static njs_unit_test_t njs_test[] =
{ nxt_string("URIError('e').name + ': ' + URIError('e').message"),
nxt_string("URIError: e") },

- { nxt_string("MemoryError('e').name + ': ' + MemoryError('e').message"),
- nxt_string("MemoryError: ") },
-
{ nxt_string("var e = EvalError('e'); e.name = 'E'; e"),
nxt_string("E: e") },

@@ -5342,7 +5339,7 @@ static njs_unit_test_t njs_test[] =
nxt_string("URIError") },

{ nxt_string("MemoryError.prototype.name"),
- nxt_string("MemoryError") },
+ nxt_string("InternalError") },

{ nxt_string("EvalError.prototype.message"),
nxt_string("") },

[njs] Fixed expect file tests.

details: http://hg.nginx.org/njs/rev/4fc65a23bcfc
branches:
changeset: 427:4fc65a23bcfc
user: Dmitry Volyntsev <xeioex@nginx.com>
date: Mon Nov 20 19:24:55 2017 +0300
description:
Fixed expect file tests.

Using the current directory for temporary files because /tmp
is not writable in the BB environment.

diffstat:

njs/test/njs_expect_test.exp | 96 ++++++++++++++++++++++---------------------
1 files changed, 49 insertions(+), 47 deletions(-)

diffs (262 lines):

diff -r 5c6aa60224cb -r 4fc65a23bcfc njs/test/njs_expect_test.exp
--- a/njs/test/njs_expect_test.exp Fri Nov 17 18:55:07 2017 +0300
+++ b/njs/test/njs_expect_test.exp Mon Nov 20 19:24:55 2017 +0300
@@ -185,11 +185,11 @@ njs_test {

# require('fs')

-set file [open /tmp/njs_test_file w]
+set file [open njs_test_file w]
puts -nonewline $file "αβZγ"
flush $file

-exec /bin/echo -ne {\x80\x80} > /tmp/njs_test_file_non_utf8
+exec /bin/echo -ne {\x80\x80} > njs_test_file_non_utf8

njs_test {
{"var fs = require('fs')\r\n"
@@ -203,35 +203,37 @@ njs_test {
njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFile('/tmp/njs_test_file', 'utf8', function (e, data) {console.log(data[2]+data.length)})\r\n"
+ {"fs.readFile('njs_test_file', 'utf8', function (e, data) {console.log(data[2]+data.length)})\r\n"
"Z4\r\nundefined\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFile('/tmp/njs_test_file', function (e, data) {console.log(data[4]+data.length)})\r\n"
+ {"fs.readFile('njs_test_file', function (e, data) {console.log(data[4]+data.length)})\r\n"
"Z7\r\nundefined\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFile('/tmp/njs_test_file', {encoding:'utf8',flag:'r+'}, function (e, data) {console.log(data)})\r\n"
+ {"fs.readFile('njs_test_file', {encoding:'utf8',flag:'r+'}, function (e, data) {console.log(data)})\r\n"
"αβZγ\r\nundefined\r\n>> "}
}

+exec rm -fr njs_unknown_path
+
+njs_test {
+ {"var fs = require('fs'); \r\n"
+ "undefined\r\n>> "}
+ {"fs.readFile('njs_unknown_path', 'utf8', function (e) {console.log(JSON.stringify(e))})\r\n"
+ "{\"errno\":2,\"path\":\"njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "}
+}
+
njs_test {
{"var fs = require('fs'); \r\n"
"undefined\r\n>> "}
- {"fs.readFile('/tmp/njs_unknown_path', 'utf8', function (e) {console.log(JSON.stringify(e))})\r\n"
- "{\"errno\":2,\"path\":\"/tmp/njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "}
-}
-
-njs_test {
- {"var fs = require('fs'); \r\n"
- "undefined\r\n>> "}
- {"fs.readFile('/tmp/njs_unknown_path', {encoding:'utf8', flag:'r+'}, function (e) {console.log(e)})\r\n"
+ {"fs.readFile('njs_unknown_path', {encoding:'utf8', flag:'r+'}, function (e) {console.log(e)})\r\n"
"Error: No such file or directory\r\nundefined\r\n>> "}
}

@@ -240,79 +242,79 @@ njs_test {
njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file', 'utf8')[2]\r\n"
+ {"fs.readFileSync('njs_test_file', 'utf8')[2]\r\n"
"Z\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file')[4]\r\n"
+ {"fs.readFileSync('njs_test_file')[4]\r\n"
"Z\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file', {encoding:'utf8',flag:'r+'})\r\n"
+ {"fs.readFileSync('njs_test_file', {encoding:'utf8',flag:'r+'})\r\n"
"αβZγ\r\n>> "}
}

njs_test {
{"var fs = require('fs'); \r\n"
"undefined\r\n>> "}
- {"try { fs.readFileSync('/tmp/njs_unknown_path')} catch (e) {console.log(JSON.stringify(e))}\r\n"
- "{\"errno\":2,\"path\":\"/tmp/njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "}
+ {"try { fs.readFileSync('njs_unknown_path')} catch (e) {console.log(JSON.stringify(e))}\r\n"
+ "{\"errno\":2,\"path\":\"njs_unknown_path\",\"syscall\":\"open\"}\r\nundefined\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file_non_utf8').charCodeAt(1)\r\n"
+ {"fs.readFileSync('njs_test_file_non_utf8').charCodeAt(1)\r\n"
"128"}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file_non_utf8', 'utf8')\r\n"
+ {"fs.readFileSync('njs_test_file_non_utf8', 'utf8')\r\n"
"Error: Non-UTF8 file, convertion is not implemented"}
}


# require('fs').writeFile()

-exec rm -fr /tmp/njs_test_file2
+exec rm -fr njs_test_file2

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"function h1(e) {if (e) {throw e}; console.log(fs.readFileSync('/tmp/njs_test_file2'))}\r\n"
+ {"function h1(e) {if (e) {throw e}; console.log(fs.readFileSync('njs_test_file2'))}\r\n"
"undefined\r\n>> "}
- {"fs.writeFile('/tmp/njs_test_file2', 'ABC', h1)\r\n"
+ {"fs.writeFile('njs_test_file2', 'ABC', h1)\r\n"
"ABC\r\nundefined\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFile('/tmp/njs_test_file2', 'ABC', 'utf8', function (e) { if (e) {throw e}; console.log(fs.readFileSync('/tmp/njs_test_file2'))})\r\n"
+ {"fs.writeFile('njs_test_file2', 'ABC', 'utf8', function (e) { if (e) {throw e}; console.log(fs.readFileSync('njs_test_file2'))})\r\n"
"ABC\r\nundefined\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFile('/tmp/njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666}, function (e) { if (e) {throw e}; console.log(fs.readFileSync('/tmp/njs_test_file2'))})\r\n"
+ {"fs.writeFile('njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666}, function (e) { if (e) {throw e}; console.log(fs.readFileSync('njs_test_file2'))})\r\n"
"ABC\r\nundefined\r\n>> "}
}

-exec rm -fr /tmp/njs_wo_file
+exec rm -fr njs_wo_file

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFile('/tmp/njs_wo_file', 'ABC', {mode:0o222}, function (e) {console.log(fs.readFileSync('/tmp/njs_wo_file'))})\r\n"
+ {"fs.writeFile('njs_wo_file', 'ABC', {mode:0o222}, function (e) {console.log(fs.readFileSync('njs_wo_file'))})\r\n"
"Error: Permission denied"}
}

@@ -325,81 +327,81 @@ njs_test {

# require('fs').writeFileSync()

-exec rm -fr /tmp/njs_test_file2
+exec rm -fr njs_test_file2

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC')\r\n"
+ {"fs.writeFileSync('njs_test_file2', 'ABC')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file2')\r\n"
+ {"fs.readFileSync('njs_test_file2')\r\n"
"ABC\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC', 'utf8')\r\n"
+ {"fs.writeFileSync('njs_test_file2', 'ABC', 'utf8')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file2')\r\n"
+ {"fs.readFileSync('njs_test_file2')\r\n"
"ABC\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC')\r\n"
+ {"fs.writeFileSync('njs_test_file2', 'ABC')\r\n"
"undefined\r\n>> "}
- {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC')\r\n"
+ {"fs.writeFileSync('njs_test_file2', 'ABC')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file2')\r\n"
+ {"fs.readFileSync('njs_test_file2')\r\n"
"ABC\r\n>> "}
}

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFileSync('/tmp/njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666})\r\n"
+ {"fs.writeFileSync('njs_test_file2', 'ABC', {encoding:'utf8', mode:0o666})\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file2')\r\n"
+ {"fs.readFileSync('njs_test_file2')\r\n"
"ABC\r\n>> "}
}

-exec rm -fr /tmp/njs_wo_file
+exec rm -fr njs_wo_file

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.writeFileSync('/tmp/njs_wo_file', 'ABC', {mode:0o222}); fs.readFileSync('/tmp/njs_wo_file')\r\n"
+ {"fs.writeFileSync('njs_wo_file', 'ABC', {mode:0o222}); fs.readFileSync('njs_wo_file')\r\n"
"Error: Permission denied"}
}

# require('fs').appendFile()

-exec rm -fr /tmp/njs_test_file2
+exec rm -fr njs_test_file2

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"function h1(e) {console.log(fs.readFileSync('/tmp/njs_test_file2'))}\r\n"
+ {"function h1(e) {console.log(fs.readFileSync('njs_test_file2'))}\r\n"
"undefined\r\n>> "}
- {"function h2(e) {fs.appendFile('/tmp/njs_test_file2', 'ABC', h1)}\r\n"
+ {"function h2(e) {fs.appendFile('njs_test_file2', 'ABC', h1)}\r\n"
"undefined\r\n>> "}
- {"fs.appendFile('/tmp/njs_test_file2', 'ABC', h2)\r\n"
+ {"fs.appendFile('njs_test_file2', 'ABC', h2)\r\n"
"ABCABC\r\nundefined\r\n>> "}
}

# require('fs').appendFileSync()

-exec rm -fr /tmp/njs_test_file2
+exec rm -fr njs_test_file2

njs_test {
{"var fs = require('fs')\r\n"
"undefined\r\n>> "}
- {"fs.appendFileSync('/tmp/njs_test_file2', 'ABC')\r\n"
+ {"fs.appendFileSync('njs_test_file2', 'ABC')\r\n"
"undefined\r\n>> "}
- {"fs.appendFileSync('/tmp/njs_test_file2', 'ABC')\r\n"
+ {"fs.appendFileSync('njs_test_file2', 'ABC')\r\n"
"undefined\r\n>> "}
- {"fs.readFileSync('/tmp/njs_test_file2')\r\n"
+ {"fs.readFileSync('njs_test_file2')\r\n"
"ABCABC\r\n>> "}
}

Re: Kubernetes ingress

Hello!

On Mon, Nov 20, 2017 at 04:28:04PM +0200, Sargas wrote:

> The patch did not help; I checked with a fresh nginx version. The workers stay in
> "nginx: worker process is shutting down" for more than 10 minutes.
>
> # nginx -V
> nginx version: nginx/1.13.6

[...]

You didn't forget to actually apply the patch, did you? Just in case: the
patch is not in 1.3.6, it has to be applied by hand and nginx rebuilt. If
this is specifically about websockets, it should have helped.

In any case, a slightly better patch has already been committed; it also
fixes a similar problem in mail and improves the situation in stream, here:

http://hg.nginx.org/nginx/rev/9c29644f6d03

The release with it (1.3.7) will be out tomorrow.
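
For readers landing here: the directive under discussion lives in the main (top-level) context of nginx.conf. A minimal sketch with an illustrative value:

worker_shutdown_timeout 1m;    # give old workers at most one minute to drain after a reload

events {
    worker_connections 1024;
}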

--
Maxim Dounin
http://mdounin.ru/

[njs] Added tag 0.1.15 for changeset 215ca47b9167

details: http://hg.nginx.org/njs/rev/5eb2620a9bec
branches:
changeset: 432:5eb2620a9bec
user: Dmitry Volyntsev <xeioex@nginx.com>
date: Mon Nov 20 20:08:56 2017 +0300
description:
Added tag 0.1.15 for changeset 215ca47b9167

diffstat:

.hgtags | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diffs (8 lines):

diff -r 215ca47b9167 -r 5eb2620a9bec .hgtags
--- a/.hgtags Mon Nov 20 20:07:15 2017 +0300
+++ b/.hgtags Mon Nov 20 20:08:56 2017 +0300
@@ -13,3 +13,4 @@ fc5df33f4e6b02a673daf3728ff690fb1e09b95e
c07b060396be3622ca97b037a86076b61b850847 0.1.12
d548b78eb881ca799aa6fc8ba459d076f7db5ac8 0.1.13
d89d06dc638e78f8635c0bfbcd02469ac1a08748 0.1.14
+215ca47b9167d513fd58ac88de97659377e45275 0.1.15

[njs] Version 0.1.15.

details: http://hg.nginx.org/njs/rev/215ca47b9167
branches:
changeset: 431:215ca47b9167
user: Dmitry Volyntsev <xeioex@nginx.com>
date: Mon Nov 20 20:07:15 2017 +0300
description:
Version 0.1.15.

diffstat:

CHANGES | 15 +++++++++++++++
Makefile | 2 +-
2 files changed, 16 insertions(+), 1 deletions(-)

diffs (32 lines):

diff -r 5f619bcb0e7d -r 215ca47b9167 CHANGES
--- a/CHANGES Mon Nov 20 19:24:58 2017 +0300
+++ b/CHANGES Mon Nov 20 20:07:15 2017 +0300
@@ -1,3 +1,18 @@
+
+Changes with nJScript 0.1.15 20 Nov 2017
+
+ *) Feature: Error, EvalError, InternalError, RangeError,
+ ReferenceError, SyntaxError, TypeError, URIError objects.
+
+ *) Feature: octal literals support.
+
+ *) Feature: File system access fs.readFile(), fs.readFileSync(),
+ fs.appendFile(), fs.appendFileSync(), fs.writeFile(),
+ fs.writeFileSync() methods.
+
+ *) Feature: nginx modules print backtrace on exception.
+
+ *) Bugfix: miscellaneous bugs have been fixed.

Changes with nJScript 0.1.14 09 Oct 2017

diff -r 5f619bcb0e7d -r 215ca47b9167 Makefile
--- a/Makefile Mon Nov 20 19:24:58 2017 +0300
+++ b/Makefile Mon Nov 20 20:07:15 2017 +0300
@@ -1,5 +1,5 @@

-NJS_VER = 0.1.14
+NJS_VER = 0.1.15

NXT_LIB = nxt



Re: Nginx reload intermittently fails when the protocol specified in the proxy_pass directive is HTTPS

Hi,

try

1) curl -ivvv https://<upstream ip_address> to your upstreams.
2) add server <ip_addr>:443 (if your upstreams accept ssl connections on 443) - a config sketch follows below.
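
A sketch of what suggestion 2 could look like (names and addresses are placeholders; the proxy_ssl_* and keepalive lines are assumptions about a typical upstream-TLS setup, not taken from the original post):

upstream backend_pool {                  # stands in for <upstream_block_file_name>
    server 192.0.2.10:443 weight=1;      # explicit TLS port, per suggestion 2
    keepalive 100;
}

server {
    listen 80;
    location / {
        proxy_pass https://backend_pool;
        proxy_ssl_server_name on;        # send SNI to the upstream
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";
    }
}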



br,
Aziz.





> On 20 Nov 2017, at 20:46, shivramg94 <nginx-forum@forum.nginx.org> wrote:
>
> I am trying to use nginx as a reverse proxy with upstream SSL. For this, I
> am using the below directive in the nginx configuration file
>
> proxy_pass https://<upstream_block_file_name>;
>
> where "<upstream_block_file_name>" is another file which has the list of
> upstream servers.
>
> upstream <upstream_block_file_name> {
> server <IP_address_of_upstream_server>:<Port> weight=1;
> keepalive 100;
> }
>
> With this configuration, if I try to reload the Nginx configuration, it fails
> intermittently with the below error message
>
> nginx: [emerg] host not found in upstream "<upstream_block_file_name>"
>
> However, if I change the protocol mentioned in the proxy_pass directive
> from https to http, then the reload goes through.
>
> Could anyone please explain what mistake I might be doing here?
>
> Thanks in advance.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277415,277415#msg-277415


Re: Kubernetes ingress

Hello!

On Mon, Nov 20, 2017 at 08:03:22PM +0300, Maxim Dounin wrote:

> Hello!
>
> On Mon, Nov 20, 2017 at 04:28:04PM +0200, Sargas wrote:
>
> > The patch did not help; I checked with a fresh nginx version. The workers stay in
> > "nginx: worker process is shutting down" for more than 10 minutes.
> >
> > # nginx -V
> > nginx version: nginx/1.13.6
>
> [...]
>
> You didn't forget to actually apply the patch, did you? Just in case: the
> patch is not in 1.3.6, it has to be applied by hand and nginx rebuilt. If
> this is specifically about websockets, it should have helped.
>
> In any case, a slightly better patch has already been committed; it also
> fixes a similar problem in mail and improves the situation in stream, here:
>
> http://hg.nginx.org/nginx/rev/9c29644f6d03
>
> The release with it (1.3.7) will be out tomorrow.

Err, 1.13.6 and 1.13.7 respectively, of course.

--
Maxim Dounin
http://mdounin.ru/