Just one quick question: does nginx check whether the upstream servers are reachable via the specified protocol during the reload process? If, say, the upstreams in this case are not accepting SSL connections, will the reload fail?
Re: Nginx reload intermittently fails when the protocol in the proxy_pass directive is specified as HTTPS
Hello!
On Mon, Nov 20, 2017 at 12:46:31PM -0500, shivramg94 wrote:
> I am trying to use nginx as a reverse proxy with upstream SSL. For this, I
> am using the below directive in the nginx configuration file
>
> proxy_pass https://<upstream_block_file_name>;
>
> where "<upstream_block_file_name>" is another file which has the list of
> upstream servers.
>
> upstream <upstream_block_file_name> {
> server <IP_address_of_upstream_server>:<Port> weight=1;
> keepalive 100;
> }
>
> With this configuration if I try to reload the Nginx configuration, it fails
> intermittently with the below error message
>
> nginx: [emerg] host not found in upstream \"<upstream_block_file_name>\"
>
> However, if I changed the protocol mentioned in the proxy_pass directive
> from https to http, then the reload goes through.
>
> Could anyone please explain what mistake I might be doing here?
Most likely you are trying to use the same upstream block in both
"proxy_pass http://..." and "proxy_pass https://...", and define
upstream after it is used in proxy_pass. That is, your
configuration is essentially as follows:
server { location / { proxy_pass http://u; } ... }
server { location / { proxy_pass https://u; } ... }
upstream u { server 127.0.0.1:8080; }
Due to implementation details this won't properly use upstream "u"
in both the first and the second server (some additional details can be
found at https://trac.nginx.org/nginx/ticket/1059).
The trivial fix is to move the upstream block before the servers, that is,
to define it before it is used. Note though that this will result
in an incorrect configuration, as the same server (127.0.0.1:8080
in the above example) will be used for both http and https
connections, and this is not going to work for either http or
https, depending on how the backend is configured. Instead, you
probably want to define two distinct upstream blocks for http and
https, pointing at different ports.
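For illustration, a minimal sketch of such a configuration; the upstream names
and the 8080/8443 ports are placeholders made up for this example, not taken
from the report above:

upstream backend_http {
    server 127.0.0.1:8080;    # port on which the backend speaks plain HTTP
    keepalive 100;
}

upstream backend_https {
    server 127.0.0.1:8443;    # port on which the backend speaks TLS
    keepalive 100;
}

server { location / { proxy_pass http://backend_http; } ... }
server { location / { proxy_pass https://backend_https; } ... }

Both upstream blocks are defined before the servers that reference them, which
also avoids the ordering issue described above.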
--
Maxim Dounin
http://mdounin.ru/
Re: Kubernetes ingress
> You did apply the patch, didn't you? Just in case: there is no patch in
> 1.3.6, it has to be applied by hand and nginx rebuilt. If this is
> specifically about websockets, it should have helped.
I did not forget. I took your docker image
https://github.com/nginxinc/docker-nginx/blob/3ba04e37d8f9ed7709fd30bf4dc6c36554e578ac/mainline/stretch/Dockerfile ,
changed it so that nginx is built from source for amd64 as well, and added
the patch application step there before compilation. After that I built the
ingress container using the container with the patched nginx.
I will re-check tomorrow afternoon, of course, and will also try 1.13.7.
Thank you!
On 20 November 2017 at 20:25, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
>
> On Mon, Nov 20, 2017 at 08:03:22PM +0300, Maxim Dounin wrote:
>
> > Hello!
> >
> > On Mon, Nov 20, 2017 at 04:28:04PM +0200, Sargas wrote:
> >
> > > The patch did not help; I checked with a fresh nginx version. For more
> > > than 10 minutes the workers stay in "nginx: worker process is shutting down"
> > >
> > > # nginx -V
> > > nginx version: nginx/1.13.6
> >
> > [...]
> >
> > You did apply the patch, didn't you? Just in case: there is no patch in
> > 1.3.6, it has to be applied by hand and nginx rebuilt. If this is
> > specifically about websockets, it should have helped.
> >
> > In any case, a slightly better patch has already been committed; it also
> > fixes a similar problem in mail and improves the situation in stream, here:
> >
> > http://hg.nginx.org/nginx/rev/9c29644f6d03
> >
> > A release with it (1.3.7) will be out tomorrow.
>
> Err, 1.13.6 and 1.13.7 respectively, of course.
>
> --
> Maxim Dounin
> http://mdounin.ru/
Re: Kubernetes ingress
I tested with the patch http://hg.nginx.org/nginx/rev/9c29644f6d03 - everything
works as it should. Within a minute the old workers now finish their work.
Thank you, Maxim!
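For context, the directive being exercised here is worker_shutdown_timeout;
a minimal sketch of how it is configured (the 1m value is an assumption for
illustration, the thread does not state the exact timeout that was used):

# main (top-level) context of nginx.conf
worker_shutdown_timeout 1m;    # force old workers to close lingering connections about a minute after a reload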
On 20 November 2017 at 22:15, Sargas <sargaskn@gmail.com> wrote:
> > You did apply the patch, didn't you? Just in case: there is no patch in
> > 1.3.6, it has to be applied by hand and nginx rebuilt. If this is
> > specifically about websockets, it should have helped.
> I did not forget. I took your docker image
> https://github.com/nginxinc/docker-nginx/blob/3ba04e37d8f9ed7709fd30bf4dc6c36554e578ac/mainline/stretch/Dockerfile ,
> changed it so that nginx is built from source for amd64 as well, and added
> the patch application step there before compilation. After that I built the
> ingress container using the container with the patched nginx.
>
> I will re-check tomorrow afternoon, of course, and will also try 1.13.7.
> Thank you!
>
> On 20 November 2017 at 20:25, Maxim Dounin <mdounin@mdounin.ru> wrote:
>
> Hello!
>>
>> On Mon, Nov 20, 2017 at 08:03:22PM +0300, Maxim Dounin wrote:
>>
>> > Hello!
>> >
>> > On Mon, Nov 20, 2017 at 04:28:04PM +0200, Sargas wrote:
>> >
>> > > The patch did not help; I checked with a fresh nginx version. For more
>> > > than 10 minutes the workers stay in "nginx: worker process is shutting down"
>> > >
>> > > # nginx -V
>> > > nginx version: nginx/1.13.6
>> >
>> > [...]
>> >
>> > You did apply the patch, didn't you? Just in case: there is no patch in
>> > 1.3.6, it has to be applied by hand and nginx rebuilt. If this is
>> > specifically about websockets, it should have helped.
>> >
>> > In any case, a slightly better patch has already been committed; it also
>> > fixes a similar problem in mail and improves the situation in stream, here:
>> >
>> > http://hg.nginx.org/nginx/rev/9c29644f6d03
>> >
>> > A release with it (1.3.7) will be out tomorrow.
>>
>> Err, 1.13.6 and 1.13.7 respectively, of course.
>>
>> --
>> Maxim Dounin
>> http://mdounin.ru/
Need help with proxy_pass uri decode
It feels like I have already tried every idea I had, and nothing helps.
I will try to describe the problem as precisely as possible: the frontend receives a URL with encoded characters, including %20. On proxy_pass this %20 turns back into a space and everything breaks.
In the simplest configuration we have:
Nginx:
location ~ ^/api(.*) {
proxy_pass http://backend/api.php?q=$1;
}
Apache (backend):
"GET /api.php?q=blabla1 blabla2"...
And the log shows the error "/api.php?q=blabla1 is not a valid request without blabla2".
I have already made countless attempts at escaping and rewriting variables; apparently I need a divine intervention to tell me how to do it right.
nginx version: nginx/1.10.2
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_secure_link_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --with-stream --with-stream_ssl_module --with-threads --add-module=/build/nginx-1.10.2/debian/modules/headers-more-nginx-module --add-module=/build/nginx-1.10.2/debian/modules/nginx-auth-pam --add-module=/build/nginx-1.10.2/debian/modules/nginx-cache-purge --add-module=/build/nginx-1.10.2/debian/modules/nginx-dav-ext-module --add-module=/build/nginx-1.10.2/debian/modules/nginx-development-kit --add-module=/build/nginx-1.10.2/debian/modules/nginx-echo --add-module=/build/nginx-1.10.2/debian/modules/ngx-fancyindex --add-module=/build/nginx-1.10.2/debian/modules/nginx-http-push --add-module=/build/nginx-1.10.2/debian/modules/nginx-lua --add-module=/build/nginx-1.10.2/debian/modules/nginx-upload-progress --add-module=/build/nginx-1.10.2/debian/modules/nginx-upstream-fair --add-module=/build/nginx-1.10.2/debian/modules/ngx_http_substitutions_filter_module --add-module=/build/nginx-1.10.2/debian/modules/nginx_http_upstream_check_module --add-module=/build/nginx-1.10.2/debian/modules/graphite-nginx-module --add-module=/build/nginx-1.10.2/debian/modules/nginx-module-vts --add-module=/build/nginx-1.10.2/debian/modules/nginx-fluentd-module
migrating from apache, need help on some rewrite rules
Hello,
I need some help converting a few Apache rewrite rules to nginx;
here is the Apache version:
<Location ^/mywebapp>
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
Allow from 192.168.0.0/16
Allow from 10.10.0.0/16
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !app_dev\.php/.*
RewriteCond %{REQUEST_URI} !app\.php$
RewriteRule (.*) app.php [QSA,L]
</Location>
here is what I have in nginx:
location ~ ^/mywebapp {
allow 127.0.0.1;
allow 192.168.0.0/16;
allow 10.10.0.0/16;
deny all;
location ~ app_dev\.php/.* { }
location ~ app\.php$ { }
if (!-e $request_filename){
rewrite ^(.*)$ app.php break;
}
}
which does not work as intended. Can someone point out where I'm wrong?
Kind Regards
Nicolas
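For reference, a common nginx pattern for this kind of "serve the file if it
exists, otherwise fall back to app.php" rule is try_files rather than
if/rewrite. A rough, untested sketch, assuming app.php lives under /mywebapp
and is handled by a PHP location elsewhere in the server block:

location /mywebapp {
    allow 127.0.0.1;
    allow 192.168.0.0/16;
    allow 10.10.0.0/16;
    deny all;

    # serve existing files as-is; send everything else to the front
    # controller, keeping the query string ($is_args$args plays the
    # role of Apache's [QSA] flag)
    try_files $uri /mywebapp/app.php$is_args$args;
}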
Re: Need help with proxy_pass uri decode
bodomic Wrote:
-------------------------------------------------------
> It feels like I have already tried every idea I had, and nothing helps.
> I will try to describe the problem as precisely as possible: the frontend
> receives a URL with encoded characters, including %20. On proxy_pass this
> %20 turns back into a space and everything breaks.
In a similar situation I got tired of looking for a solution and started
passing the value through headers instead, since I had access to both the
frontend and the backend.
I mean proxy_set_header.
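A rough sketch of that header-based approach; the X-Original-URI header name
is made up for illustration, and it assumes the backend is adjusted to map
/api requests to api.php itself and to read the value from the header rather
than from a rewritten query string:

location /api {
    # $request_uri still carries the original escaping (e.g. %20),
    # unlike a regex capture such as $1, which is already decoded
    proxy_set_header X-Original-URI $request_uri;

    # no URI part here, so the request is forwarded with its original escaping
    proxy_pass http://backend;
}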
Re: Need help with proxy_pass uri decode
Hello!
On Tue, Nov 21, 2017 at 05:11:54AM -0500, bodomic wrote:
> It feels like I have already tried every idea I had, and nothing helps.
> I will try to describe the problem as precisely as possible: the frontend
> receives a URL with encoded characters, including %20. On proxy_pass this
> %20 turns back into a space and everything breaks.
> In the simplest configuration we have:
> Nginx:
>
> location ~ ^/api(.*) {
> proxy_pass http://backend/api.php?q=$1;
> }
>
> Apache (backend):
> "GET /api.php?q=blabla1 blabla2"...
>
>
> And the log shows the error "/api.php?q=blabla1 is not a valid request
> without blabla2".
> I have already made countless attempts at escaping and rewriting variables;
> apparently I need a divine intervention to tell me how to do it right.
The problem is that location matching works with the decoded request URI
(and, accordingly, $1 receives the decoded part of the URI), while
proxy_pass with variables expects a fully formed and properly encoded
URI, as for example in the construct
proxy_pass http://127.0.0.1$request_uri;
For the task of "changing the request URI to /api.php?q=..." it is
easiest to use rewrite, since it knows how to encode the URI correctly
when changing it.
Something like this should work (untested):
location /api/ {
rewrite ^/api(/.*) /api.php?q=$1? break;
proxy_pass http://backend;
}
--
Maxim Dounin
http://mdounin.ru/
[nginx] Updated OpenSSL used for win32 builds.
details: http://hg.nginx.org/nginx/rev/1af00446f23e
branches:
changeset: 7157:1af00446f23e
user: Maxim Dounin <mdounin@mdounin.ru>
date: Tue Nov 21 17:32:12 2017 +0300
description:
Updated OpenSSL used for win32 builds.
diffstat:
misc/GNUmakefile | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diffs (12 lines):
diff --git a/misc/GNUmakefile b/misc/GNUmakefile
--- a/misc/GNUmakefile
+++ b/misc/GNUmakefile
@@ -6,7 +6,7 @@ TEMP = tmp
CC = cl
OBJS = objs.msvc8
-OPENSSL = openssl-1.0.2l
+OPENSSL = openssl-1.0.2m
ZLIB = zlib-1.2.11
PCRE = pcre-8.41
[nginx] nginx-1.13.7-RELEASE
details: http://hg.nginx.org/nginx/rev/47cca243d0ed
branches:
changeset: 7158:47cca243d0ed
user: Maxim Dounin <mdounin@mdounin.ru>
date: Tue Nov 21 18:09:43 2017 +0300
description:
nginx-1.13.7-RELEASE
diffstat:
docs/xml/nginx/changes.xml | 83 ++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 83 insertions(+), 0 deletions(-)
diffs (93 lines):
diff --git a/docs/xml/nginx/changes.xml b/docs/xml/nginx/changes.xml
--- a/docs/xml/nginx/changes.xml
+++ b/docs/xml/nginx/changes.xml
@@ -5,6 +5,89 @@
<change_log title="nginx">
+<changes ver="1.13.7" date="2017-11-21">
+
+<change type="bugfix">
+<para lang="ru">
+в переменной $upstream_status.
+</para>
+<para lang="en">
+in the $upstream_status variable.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+в рабочем процессе мог произойти segmentation fault,
+если бэкенд возвращал ответ "101 Switching Protocols" на подзапрос.
+</para>
+<para lang="en">
+a segmentation fault might occur in a worker process
+if a backend returned a "101 Switching Protocols" response to a subrequest.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+если при переконфигурации изменялся размер зоны разделяемой памяти
+и переконфигурация завершалась неудачно,
+то в главном процессе происходил segmentation fault.
+</para>
+<para lang="en">
+a segmentation fault occurred in a master process
+if a shared memory zone size was changed during a reconfiguration
+and the reconfiguration failed.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+в модуле ngx_http_fastcgi_module.
+</para>
+<para lang="en">
+in the ngx_http_fastcgi_module.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+nginx возвращал ошибку 500,
+если в директиве xslt_stylesheet
+были заданы параметры без использования переменных.
+</para>
+<para lang="en">
+nginx returned the 500 error
+if parameters without variables were specified
+in the "xslt_stylesheet" directive.
+</para>
+</change>
+
+<change type="workaround">
+<para lang="ru">
+при использовании варианта библиотеки zlib от Intel
+в лог писались сообщения "gzip filter failed to use preallocated memory".
+</para>
+<para lang="en">
+"gzip filter failed to use preallocated memory" alerts appeared in logs
+when using a zlib library variant from Intel.
+</para>
+</change>
+
+<change type="bugfix">
+<para lang="ru">
+директива worker_shutdown_timeout не работала
+при использовании почтового прокси-сервера
+и при проксировании WebSocket-соединений.
+</para>
+<para lang="en">
+the "worker_shutdown_timeout" directive did not work
+when using mail proxy and when proxying WebSocket connections.
+</para>
+</change>
+
+</changes>
+
+
<changes ver="1.13.6" date="2017-10-10">
<change type="bugfix">
[nginx] release-1.13.7 tag
details: http://hg.nginx.org/nginx/rev/679ea950eae9
branches:
changeset: 7159:679ea950eae9
user: Maxim Dounin <mdounin@mdounin.ru>
date: Tue Nov 21 18:09:44 2017 +0300
description:
release-1.13.7 tag
diffstat:
.hgtags | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diffs (8 lines):
diff --git a/.hgtags b/.hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -419,3 +419,4 @@ 8457ce87640f9bfe6221c4ac4466ced20e03bebe
bbc642c813c829963ce8197c0ca237ab7601f3d4 release-1.13.4
0d45b4cf7c2e4e626a5a16e1fe604402ace1cea5 release-1.13.5
f87da7d9ca02b8ced4caa6c5eb9013ccd47b0117 release-1.13.6
+47cca243d0ed39bf5dcb9859184affc958b79b6f release-1.13.7
[nginx-announce] nginx-1.13.7
Changes with nginx 1.13.7 21 Nov 2017
*) Bugfix: in the $upstream_status variable.
*) Bugfix: a segmentation fault might occur in a worker process if a
backend returned a "101 Switching Protocols" response to a
subrequest.
*) Bugfix: a segmentation fault occurred in a master process if a shared
memory zone size was changed during a reconfiguration and the
reconfiguration failed.
*) Bugfix: in the ngx_http_fastcgi_module.
*) Bugfix: nginx returned the 500 error if parameters without variables
were specified in the "xslt_stylesheet" directive.
*) Workaround: "gzip filter failed to use preallocated memory" alerts
appeared in logs when using a zlib library variant from Intel.
*) Bugfix: the "worker_shutdown_timeout" directive did not work when
using mail proxy and when proxying WebSocket connections.
--
Maxim Dounin
http://nginx.org/
nginx-1.13.7
Changes with nginx 1.13.7 21 Nov 2017
*) Bugfix: in the $upstream_status variable.
*) Bugfix: a segmentation fault might occur in a worker process if a
backend returned a "101 Switching Protocols" response to a
subrequest.
*) Bugfix: a segmentation fault occurred in a master process if a shared
memory zone size was changed during a reconfiguration and the
reconfiguration failed.
*) Bugfix: in the ngx_http_fastcgi_module.
*) Bugfix: nginx returned the 500 error if parameters without variables
were specified in the "xslt_stylesheet" directive.
*) Workaround: "gzip filter failed to use preallocated memory" alerts
appeared in logs when using a zlib library variant from Intel.
*) Bugfix: the "worker_shutdown_timeout" directive did not work when
using mail proxy and when proxying WebSocket connections.
--
Maxim Dounin
http://nginx.org/
[nginx-ru-announce] nginx-1.13.7
Changes with nginx 1.13.7 21 Nov 2017
*) Bugfix: in the $upstream_status variable.
*) Bugfix: a segmentation fault might occur in a worker process if a
backend returned a "101 Switching Protocols" response to a
subrequest.
*) Bugfix: a segmentation fault occurred in a master process if a shared
memory zone size was changed during a reconfiguration and the
reconfiguration failed.
*) Bugfix: in the ngx_http_fastcgi_module.
*) Bugfix: nginx returned the 500 error if parameters without variables
were specified in the "xslt_stylesheet" directive.
*) Workaround: "gzip filter failed to use preallocated memory" alerts
appeared in logs when using a zlib library variant from Intel.
*) Bugfix: the "worker_shutdown_timeout" directive did not work when
using mail proxy and when proxying WebSocket connections.
--
Maxim Dounin
http://nginx.org/
nginx-1.13.7
Changes with nginx 1.13.7 21 Nov 2017
*) Bugfix: in the $upstream_status variable.
*) Bugfix: a segmentation fault might occur in a worker process if a
backend returned a "101 Switching Protocols" response to a
subrequest.
*) Bugfix: a segmentation fault occurred in a master process if a shared
memory zone size was changed during a reconfiguration and the
reconfiguration failed.
*) Bugfix: in the ngx_http_fastcgi_module.
*) Bugfix: nginx returned the 500 error if parameters without variables
were specified in the "xslt_stylesheet" directive.
*) Workaround: "gzip filter failed to use preallocated memory" alerts
appeared in logs when using a zlib library variant from Intel.
*) Bugfix: the "worker_shutdown_timeout" directive did not work when
using mail proxy and when proxying WebSocket connections.
--
Maxim Dounin
http://nginx.org/
Re: gzip filter failed to use preallocated memory
> Possibly. I have sent it to colleagues for review; if there are no
> objections, I will commit it.
Thank you!
Re: [patch-1] Range filter: support multiple ranges.
Hi,
After some attempts, I found it is still too hard for me if the requested
ranges are in no particular order. Looking forward to your code.
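For context (plain HTTP here, nothing specific to this patch): a multipart
range request of the kind this series targets asks for several byte ranges in
one header, for example

Range: bytes=0-99,200-299,500-599

and, when it can be satisfied, nginx answers with a single 206 response whose
body is multipart/byteranges, one part per range. Before this patch such a
response is only built when the whole body is in a single buffer; the patch
below relaxes that for ascending, non-overlapping ranges.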
Anyway, under your guidance, my changes are as follows:
# HG changeset patch
# User hucongcong <hucong.c@foxmail.com>
# Date 1510309868 -28800
# Fri Nov 10 18:31:08 2017 +0800
# Node ID 5c327973a284849a18c042fa6e7e191268b94bac
# Parent 32f83fe5747b55ef341595b18069bee3891874d0
Range filter: better support for multipart ranges.
Introducing support for multipart ranges if the response body is
not in the single buffer as long as requested ranges do not overlap
and properly ordered.
diff -r 32f83fe5747b -r 5c327973a284 src/http/modules/ngx_http_range_filter_module.c
--- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800
+++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800
@@ -54,6 +54,7 @@ typedef struct {
typedef struct {
off_t offset;
+ ngx_uint_t index;
ngx_str_t boundary_header;
ngx_array_t ranges;
} ngx_http_range_filter_ctx_t;
@@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa
static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx);
static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r);
-static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r,
- ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
+static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
+static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf);
static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf);
@@ -222,6 +225,7 @@ parse:
return NGX_ERROR;
}
+ ctx->index = (ngx_uint_t) -1;
ctx->offset = r->headers_out.content_offset;
ranges = r->single_range ? 1 : clcf->max_ranges;
@@ -270,9 +274,8 @@ ngx_http_range_parse(ngx_http_request_t
ngx_uint_t ranges)
{
u_char *p;
- off_t start, end, size, content_length, cutoff,
- cutlim;
- ngx_uint_t suffix;
+ off_t start, end, content_length, cutoff, cutlim;
+ ngx_uint_t suffix, descending;
ngx_http_range_t *range;
ngx_http_range_filter_ctx_t *mctx;
@@ -281,6 +284,7 @@ ngx_http_range_parse(ngx_http_request_t
ngx_http_range_body_filter_module);
if (mctx) {
ctx->ranges = mctx->ranges;
+ ctx->boundary_header = mctx->boundary_header;
return NGX_OK;
}
}
@@ -292,7 +296,8 @@ ngx_http_range_parse(ngx_http_request_t
}
p = r->headers_in.range->value.data + 6;
- size = 0;
+ range = NULL;
+ descending = 0;
content_length = r->headers_out.content_length_n;
cutoff = NGX_MAX_OFF_T_VALUE / 10;
@@ -369,6 +374,11 @@ ngx_http_range_parse(ngx_http_request_t
found:
if (start < end) {
+
+ if (range && start < range->end) {
+ descending++;
+ }
+
range = ngx_array_push(&ctx->ranges);
if (range == NULL) {
return NGX_ERROR;
@@ -377,16 +387,6 @@ ngx_http_range_parse(ngx_http_request_t
range->start = start;
range->end = end;
- if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
- return NGX_HTTP_RANGE_NOT_SATISFIABLE;
- }
-
- size += end - start;
-
- if (ranges-- == 0) {
- return NGX_DECLINED;
- }
-
} else if (start == 0) {
return NGX_DECLINED;
}
@@ -400,7 +400,7 @@ ngx_http_range_parse(ngx_http_request_t
return NGX_HTTP_RANGE_NOT_SATISFIABLE;
}
- if (size > content_length) {
+ if (ctx->ranges.nelts > ranges || descending) {
return NGX_DECLINED;
}
@@ -469,6 +469,22 @@ ngx_http_range_multipart_header(ngx_http
ngx_http_range_t *range;
ngx_atomic_uint_t boundary;
+ if (ctx->index == (ngx_uint_t) -1) {
+ ctx->index = 0;
+ range = ctx->ranges.elts;
+
+ for (i = 0; i < ctx->ranges.nelts; i++) {
+ if (ctx->offset < range[i].end) {
+ ctx->index = i;
+ break;
+ }
+ }
+ }
+
+ if (r != r->main) {
+ return ngx_http_next_header_filter(r);
+ }
+
size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
+ sizeof(CRLF "Content-Type: ") - 1
+ r->headers_out.content_type.len
@@ -574,6 +590,7 @@ ngx_http_range_multipart_header(ngx_http
}
r->headers_out.content_length_n = len;
+ r->headers_out.content_offset = range[0].start;
if (r->headers_out.content_length) {
r->headers_out.content_length->hash = 0;
@@ -639,63 +656,11 @@ ngx_http_range_body_filter(ngx_http_requ
return ngx_http_range_singlepart_body(r, ctx, in);
}
- /*
- * multipart ranges are supported only if whole body is in a single buffer
- */
-
- if (ngx_buf_special(in->buf)) {
- return ngx_http_next_body_filter(r, in);
- }
-
- if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) {
- return NGX_ERROR;
- }
-
return ngx_http_range_multipart_body(r, ctx, in);
}
static ngx_int_t
-ngx_http_range_test_overlapped(ngx_http_request_t *r,
- ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
-{
- off_t start, last;
- ngx_buf_t *buf;
- ngx_uint_t i;
- ngx_http_range_t *range;
-
- if (ctx->offset) {
- goto overlapped;
- }
-
- buf = in->buf;
-
- if (!buf->last_buf) {
- start = ctx->offset;
- last = ctx->offset + ngx_buf_size(buf);
-
- range = ctx->ranges.elts;
- for (i = 0; i < ctx->ranges.nelts; i++) {
- if (start > range[i].start || last < range[i].end) {
- goto overlapped;
- }
- }
- }
-
- ctx->offset = ngx_buf_size(buf);
-
- return NGX_OK;
-
-overlapped:
-
- ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
- "range in overlapped buffers");
-
- return NGX_ERROR;
-}
-
-
-static ngx_int_t
ngx_http_range_singlepart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
{
@@ -786,96 +751,206 @@ static ngx_int_t
ngx_http_range_multipart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
{
- ngx_buf_t *b, *buf;
- ngx_uint_t i;
- ngx_chain_t *out, *hcl, *rcl, *dcl, **ll;
- ngx_http_range_t *range;
+ off_t start, last;
+ ngx_buf_t *buf, *b;
+ ngx_chain_t *out, *cl, *tl, **ll;
+ ngx_http_range_t *range, *tail;
+ out = NULL;
ll = &out;
- buf = in->buf;
+
range = ctx->ranges.elts;
-
- for (i = 0; i < ctx->ranges.nelts; i++) {
+ tail = range + ctx->ranges.nelts;
+ range += ctx->index;
- /*
- * The boundary header of the range:
- * CRLF
- * "--0123456789" CRLF
- * "Content-Type: image/jpeg" CRLF
- * "Content-Range: bytes "
- */
+ for (cl = in; cl; cl = cl->next) {
+
+ buf = cl->buf;
- b = ngx_calloc_buf(r->pool);
- if (b == NULL) {
- return NGX_ERROR;
+ start = ctx->offset;
+ last = ctx->offset + ngx_buf_size(buf);
+
+ ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+ "http range multipart body buf: %O-%O", start, last);
+
+ if (ngx_buf_special(buf)) {
+ continue;
}
- b->memory = 1;
- b->pos = ctx->boundary_header.data;
- b->last = ctx->boundary_header.data + ctx->boundary_header.len;
+ if (range->end <= start || range->start >= last) {
+
+ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+ "http range multipart body skip");
- hcl = ngx_alloc_chain_link(r->pool);
- if (hcl == NULL) {
- return NGX_ERROR;
+ if (buf->in_file) {
+ buf->file_pos = buf->file_last;
+ }
+
+ buf->pos = buf->last;
+ buf->sync = 1;
+
+ ctx->offset = last;
+ continue;
}
- hcl->buf = b;
+ if (range->start >= start) {
+ if (ngx_http_range_link_boundary_header(r, ctx, ll) != NGX_OK) {
+ return NGX_ERROR;
+ }
+
+ ll = &(*ll)->next->next;
- /* "SSSS-EEEE/TTTT" CRLF CRLF */
+ if (buf->in_file) {
+ buf->file_pos += range->start - start;
+ }
- b = ngx_calloc_buf(r->pool);
- if (b == NULL) {
- return NGX_ERROR;
+ if (ngx_buf_in_memory(buf)) {
+ buf->pos += (size_t) (range->start - start);
+ }
+
+ start = range->start;
}
- b->temporary = 1;
- b->pos = range[i].content_range.data;
- b->last = range[i].content_range.data + range[i].content_range.len;
+ ctx->offset = last;
+
+ if (range->end <= last) {
+
+ if (range + 1 < tail && range[1].start < last) {
+
+ ctx->offset = range->end;
+
+ b = ngx_alloc_buf(r->pool);
+ if (b == NULL) {
+ return NGX_ERROR;
+ }
- rcl = ngx_alloc_chain_link(r->pool);
- if (rcl == NULL) {
- return NGX_ERROR;
- }
+ tl = ngx_alloc_chain_link(r->pool);
+ if (tl == NULL) {
+ return NGX_ERROR;
+ }
+
+ tl->buf = b;
+ tl->next = cl;
+
+ ngx_memcpy(b, buf, sizeof(ngx_buf_t));
+ b->last_in_chain = 0;
+ b->last_buf = 0;
+
+ if (buf->in_file) {
+ buf->file_pos += range->end - start;
+ }
- rcl->buf = b;
+ if (ngx_buf_in_memory(buf)) {
+ buf->pos += (size_t) (range->end - start);
+ }
+ cl = tl;
+ buf = cl->buf;
+ }
+
+ if (buf->in_file) {
+ buf->file_last -= last - range->end;
+ }
- /* the range data */
+ if (ngx_buf_in_memory(buf)) {
+ buf->last -= (size_t) (last - range->end);
+ }
+
+ ctx->index++;
+ range++;
- b = ngx_calloc_buf(r->pool);
- if (b == NULL) {
- return NGX_ERROR;
+ if (range == tail) {
+ *ll = cl;
+ ll = &cl->next;
+
+ if (ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) {
+ return NGX_ERROR;
+ }
+
+ break;
+ }
}
- b->in_file = buf->in_file;
- b->temporary = buf->temporary;
- b->memory = buf->memory;
- b->mmap = buf->mmap;
- b->file = buf->file;
+ *ll = cl;
+ ll = &cl->next;
+ }
+
+ if (out == NULL) {
+ return NGX_OK;
+ }
+
+ return ngx_http_next_body_filter(r, out);
+}
+
- if (buf->in_file) {
- b->file_pos = buf->file_pos + range[i].start;
- b->file_last = buf->file_pos + range[i].end;
- }
+static ngx_int_t
+ngx_http_range_link_boundary_header(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
+{
+ ngx_buf_t *b;
+ ngx_chain_t *hcl, *rcl;
+ ngx_http_range_t *range;
+
+ /*
+ * The boundary header of the range:
+ * CRLF
+ * "--0123456789" CRLF
+ * "Content-Type: image/jpeg" CRLF
+ * "Content-Range: bytes "
+ */
+
+ b = ngx_calloc_buf(r->pool);
+ if (b == NULL) {
+ return NGX_ERROR;
+ }
+
+ b->memory = 1;
+ b->pos = ctx->boundary_header.data;
+ b->last = ctx->boundary_header.data + ctx->boundary_header.len;
- if (ngx_buf_in_memory(buf)) {
- b->pos = buf->pos + (size_t) range[i].start;
- b->last = buf->pos + (size_t) range[i].end;
- }
+ hcl = ngx_alloc_chain_link(r->pool);
+ if (hcl == NULL) {
+ return NGX_ERROR;
+ }
+
+ hcl->buf = b;
+
+
+ /* "SSSS-EEEE/TTTT" CRLF CRLF */
+
+ b = ngx_calloc_buf(r->pool);
+ if (b == NULL) {
+ return NGX_ERROR;
+ }
+
+ range = ctx->ranges.elts;
+ b->temporary = 1;
+ b->pos = range[ctx->index].content_range.data;
+ b->last = range[ctx->index].content_range.data
+ + range[ctx->index].content_range.len;
- dcl = ngx_alloc_chain_link(r->pool);
- if (dcl == NULL) {
- return NGX_ERROR;
- }
+ rcl = ngx_alloc_chain_link(r->pool);
+ if (rcl == NULL) {
+ return NGX_ERROR;
+ }
+
+ rcl->buf = b;
+
+ rcl->next = NULL;
+ hcl->next = rcl;
+ *ll = hcl;
- dcl->buf = b;
+ return NGX_OK;
+}
+
- *ll = hcl;
- hcl->next = rcl;
- rcl->next = dcl;
- ll = &dcl->next;
- }
+static ngx_int_t
+ngx_http_range_link_last_boundary(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
+{
+ ngx_buf_t *b;
+ ngx_chain_t *hcl;
/* the last boundary CRLF "--0123456789--" CRLF */
@@ -885,7 +960,8 @@ ngx_http_range_multipart_body(ngx_http_r
}
b->temporary = 1;
- b->last_buf = 1;
+ b->last_in_chain = 1;
+ b->last_buf = (r == r->main) ? 1 : 0;
b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
+ sizeof("--" CRLF) - 1);
@@ -904,11 +980,11 @@ ngx_http_range_multipart_body(ngx_http_r
}
hcl->buf = b;
+
hcl->next = NULL;
-
*ll = hcl;
- return ngx_http_next_body_filter(r, out);
+ return NGX_OK;
}
------------------ Original ------------------
From: "Maxim Dounin";<mdounin@mdounin.ru>;
Send time: Wednesday, Nov 15, 2017 0:57 AM
To: "nginx-devel"<nginx-devel@nginx.org>;
Subject: Re: [patch-1] Range filter: support multiple ranges.
Hello!
On Fri, Nov 10, 2017 at 07:03:01PM +0800, 胡聪 (hucc) wrote:
> Hi,
>
> How about this as the first patch?
>
> # HG changeset patch
> # User hucongcong <hucong.c@foxmail.com>
> # Date 1510309868 -28800
> # Fri Nov 10 18:31:08 2017 +0800
> # Node ID c32fddd15a26b00f8f293f6b0d8762cd9f2bfbdb
> # Parent 32f83fe5747b55ef341595b18069bee3891874d0
> Range filter: support for multipart response in wider range.
>
> Before the patch multipart ranges are supported only if whole body
> is in a single buffer. Now, the limit is canceled. If there are no
> overlapping ranges and all ranges list in ascending order, nginx
> will return 206 with multipart response, otherwise return 200 (OK).
Introducing support for multipart ranges when the response body is
not in a single buffer, as long as the requested ranges do not
overlap and are properly ordered, looks like a much better idea to me.
That's basically what I have in mind as a possible further
enhancement of the range filter if we'll ever need better support
for multipart ranges.
There are various questions about the patch itself though, see
below.
> diff -r 32f83fe5747b -r c32fddd15a26 src/http/modules/ngx_http_range_filter_module.c
> --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800
> @@ -54,6 +54,7 @@ typedef struct {
>
> typedef struct {
> off_t offset;
> + ngx_uint_t index; /* start with 1 */
> ngx_str_t boundary_header;
> ngx_array_t ranges;
> } ngx_http_range_filter_ctx_t;
> @@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa
> static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx);
> static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r);
> -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll);
> +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
>
> static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf);
> static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf);
> @@ -270,9 +273,8 @@ ngx_http_range_parse(ngx_http_request_t
> ngx_uint_t ranges)
> {
> u_char *p;
> - off_t start, end, size, content_length, cutoff,
> - cutlim;
> - ngx_uint_t suffix;
> + off_t start, end, content_length, cutoff, cutlim;
> + ngx_uint_t suffix, descending;
> ngx_http_range_t *range;
> ngx_http_range_filter_ctx_t *mctx;
>
> @@ -281,6 +283,7 @@ ngx_http_range_parse(ngx_http_request_t
> ngx_http_range_body_filter_module);
> if (mctx) {
> ctx->ranges = mctx->ranges;
> + ctx->boundary_header = ctx->boundary_header;
> return NGX_OK;
> }
> }
> @@ -292,7 +295,8 @@ ngx_http_range_parse(ngx_http_request_t
> }
>
> p = r->headers_in.range->value.data + 6;
> - size = 0;
> + range = NULL;
> + descending = 0;
> content_length = r->headers_out.content_length_n;
>
> cutoff = NGX_MAX_OFF_T_VALUE / 10;
> @@ -369,6 +373,11 @@ ngx_http_range_parse(ngx_http_request_t
> found:
>
> if (start < end) {
> +
> + if (range && start < range->end) {
> + descending++;
> + }
> +
> range = ngx_array_push(&ctx->ranges);
> if (range == NULL) {
> return NGX_ERROR;
> @@ -377,16 +386,6 @@ ngx_http_range_parse(ngx_http_request_t
> range->start = start;
> range->end = end;
>
> - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
> - return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> - }
> -
> - size += end - start;
> -
> - if (ranges-- == 0) {
> - return NGX_DECLINED;
> - }
> -
> } else if (start == 0) {
> return NGX_DECLINED;
> }
> @@ -400,7 +399,7 @@ ngx_http_range_parse(ngx_http_request_t
> return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> }
>
> - if (size > content_length) {
> + if (ctx->ranges.nelts > ranges || descending) {
> return NGX_DECLINED;
> }
This change basically disables support for non-ascending ranges.
As previously suggested, this will break various legitimate use
cases, and certainly this is not something we should do.
>
> @@ -469,6 +468,10 @@ ngx_http_range_multipart_header(ngx_http
> ngx_http_range_t *range;
> ngx_atomic_uint_t boundary;
>
> + if (r != r->main) {
> + return ngx_http_next_header_filter(r);
> + }
> +
> size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof(CRLF "Content-Type: ") - 1
> + r->headers_out.content_type.len
> @@ -570,10 +573,11 @@ ngx_http_range_multipart_header(ngx_http
> - range[i].content_range.data;
>
> len += ctx->boundary_header.len + range[i].content_range.len
> - + (range[i].end - range[i].start);
> + + (range[i].end - range[i].start);
This looks like an unrelated whitespace change.
> }
>
> r->headers_out.content_length_n = len;
> + r->headers_out.content_offset = range[0].start;
>
> if (r->headers_out.content_length) {
> r->headers_out.content_length->hash = 0;
> @@ -639,63 +643,15 @@ ngx_http_range_body_filter(ngx_http_requ
> return ngx_http_range_singlepart_body(r, ctx, in);
> }
>
> - /*
> - * multipart ranges are supported only if whole body is in a single buffer
> - */
> -
> if (ngx_buf_special(in->buf)) {
> return ngx_http_next_body_filter(r, in);
> }
The ngx_buf_special() check should not be needed here as long as
ngx_http_range_multipart_body() is modified to properly support
multiple buffers.
>
> - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) {
> - return NGX_ERROR;
> - }
> -
> return ngx_http_range_multipart_body(r, ctx, in);
> }
>
>
> static ngx_int_t
> -ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> -{
> - off_t start, last;
> - ngx_buf_t *buf;
> - ngx_uint_t i;
> - ngx_http_range_t *range;
> -
> - if (ctx->offset) {
> - goto overlapped;
> - }
> -
> - buf = in->buf;
> -
> - if (!buf->last_buf) {
> - start = ctx->offset;
> - last = ctx->offset + ngx_buf_size(buf);
> -
> - range = ctx->ranges.elts;
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> - if (start > range[i].start || last < range[i].end) {
> - goto overlapped;
> - }
> - }
> - }
> -
> - ctx->offset = ngx_buf_size(buf);
> -
> - return NGX_OK;
> -
> -overlapped:
> -
> - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
> - "range in overlapped buffers");
> -
> - return NGX_ERROR;
> -}
> -
> -
> -static ngx_int_t
> ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> @@ -786,96 +742,227 @@ static ngx_int_t
> ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> - ngx_buf_t *b, *buf;
> - ngx_uint_t i;
> - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll;
> - ngx_http_range_t *range;
> + off_t start, last, back;
> + ngx_buf_t *buf, *b;
> + ngx_uint_t i, finished;
> + ngx_chain_t *out, *cl, *ncl, **ll;
> + ngx_http_range_t *range, *tail;
>
> - ll = &out;
> - buf = in->buf;
> range = ctx->ranges.elts;
>
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> + if (!ctx->index) {
> + for (i = 0; i < ctx->ranges.nelts; i++) {
> + if (ctx->offset < range[i].end) {
> + ctx->index = i + 1;
> + break;
> + }
> + }
> + }
All this logic using ctx->index as the range index plus 1 looks
counter-intuitive and unneeded.  Much better options would be
(in no particular order):
- use a special value to mean "uninitialized", like -1;
- always initialize ctx->index to 0 and move it further to the next
range once we see that ctx->offset is larger than range[i].end
(see the sketch below);
- do proper initialization somewhere in
ngx_http_range_header_filter() or ngx_http_range_multipart_header().
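For illustration, a minimal sketch of the second option, written
against the ctx fields introduced by the patch (this is only an
assumption about the intended cleanup, not committed nginx code):

    /* ctx->index starts at 0 and is only ever pushed forward, past
     * every range whose end the current offset has already reached */
    range = ctx->ranges.elts;

    while (ctx->index < ctx->ranges.nelts
           && ctx->offset >= range[ctx->index].end)
    {
        ctx->index++;
    }

With this, the "plus 1" encoding and the separate initialization loop
in ngx_http_range_multipart_body() would no longer be needed.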
> +
> + tail = range + ctx->ranges.nelts - 1;
> + range += ctx->index - 1;
> +
> + out = NULL;
> + ll = &out;
> + finished = 0;
>
> - /*
> - * The boundary header of the range:
> - * CRLF
> - * "--0123456789" CRLF
> - * "Content-Type: image/jpeg" CRLF
> - * "Content-Range: bytes "
> - */
> + for (cl = in; cl; cl = cl->next) {
> +
> + buf = cl->buf;
> +
> + start = ctx->offset;
> + last = ctx->offset + ngx_buf_size(buf);
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + ctx->offset = last;
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body buf: %O-%O", start, last);
> +
> + if (ngx_buf_special(buf)) {
> + *ll = cl;
> + ll = &cl->next;
> + continue;
> }
>
> - b->memory = 1;
> - b->pos = ctx->boundary_header.data;
> - b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> + if (range->end <= start || range->start >= last) {
> +
> + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body skip");
>
> - hcl = ngx_alloc_chain_link(r->pool);
> - if (hcl == NULL) {
> - return NGX_ERROR;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last;
> + }
> +
> + buf->pos = buf->last;
> + buf->sync = 1;
> +
> + continue;
Looking at this code I tend to think that our existing
ngx_http_range_singlepart_body() implementation you've used as a
reference is incorrect.  It removes buffers from the original
chain as passed to the filter - this can result in a buffer being
lost from tracking by the module that owns it, and in a request
hang if/when all available buffers are lost.  Instead, it should
either preserve all existing chain links or create a new chain.
I'll take a look at how to fix this properly.
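As a sketch of the "create a new chain" direction (a hypothetical
helper, not Maxim's actual fix; nginx already has a helper in this
spirit, ngx_chain_add_copy()), the filter could append fresh links
that reference the caller's buffers instead of relinking the caller's
chain:

    static ngx_int_t
    ngx_http_range_copy_chain(ngx_http_request_t *r, ngx_chain_t *in,
        ngx_chain_t **out)
    {
        ngx_chain_t  *cl, *tl, **ll;

        ll = out;

        for (cl = in; cl; cl = cl->next) {
            tl = ngx_alloc_chain_link(r->pool);
            if (tl == NULL) {
                return NGX_ERROR;
            }

            /* share the buffer, leave the original link untouched */
            tl->buf = cl->buf;

            *ll = tl;
            ll = &tl->next;
        }

        *ll = NULL;

        return NGX_OK;
    }

This way the owner of the original chain keeps seeing every buffer it
passed in, and nothing is lost from its tracking.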
> }
>
> - hcl->buf = b;
> + if (range->start >= start) {
>
> + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) {
> + return NGX_ERROR;
> + }
>
> - /* "SSSS-EEEE/TTTT" CRLF CRLF */
> + if (buf->in_file) {
> + buf->file_pos += range->start - start;
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos += (size_t) (range->start - start);
> + }
> }
>
> - b->temporary = 1;
> - b->pos = range[i].content_range.data;
> - b->last = range[i].content_range.data + range[i].content_range.len;
> + if (range->end <= last) {
> +
> + if (range < tail && range[1].start < last) {
The "tail" name is not immediately obvious, and it might be better
idea to name it differently. Also, range[1] looks strange when we
are using range as a pointer and not array. Hopefully this test
will be unneeded when code will be cleaned up to avoid moving
ctx->offset backwards, see below.
> +
> + b = ngx_alloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
> +
> + ncl = ngx_alloc_chain_link(r->pool);
> + if (ncl == NULL) {
> + return NGX_ERROR;
> + }
Note: usual names for temporary chain links are "ln" and "tl".
>
> - rcl = ngx_alloc_chain_link(r->pool);
> - if (rcl == NULL) {
> - return NGX_ERROR;
> - }
> + ncl->buf = b;
> + ncl->next = cl;
> +
> + ngx_memcpy(b, buf, sizeof(ngx_buf_t));
> + b->last_in_chain = 0;
> + b->last_buf = 0;
> +
> + back = last - range->end;
> + ctx->offset -= back;
This looks like a hack; there should be no need to adjust
ctx->offset backwards.  Instead, we should move ctx->offset only
when we are done with a buffer.
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body reuse buf: %O-%O",
> + ctx->offset, ctx->offset + back);
>
> - rcl->buf = b;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last - back;
> + }
> +
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos = buf->last - back;
> + }
>
> + cl = ncl;
> + buf = cl->buf;
> + }
> +
> + if (buf->in_file) {
> + buf->file_last -= last - range->end;
> + }
>
> - /* the range data */
> + if (ngx_buf_in_memory(buf)) {
> + buf->last -= (size_t) (last - range->end);
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (range == tail) {
> + buf->last_buf = (r == r->main) ? 1 : 0;
> + buf->last_in_chain = 1;
> + *ll = cl;
> + ll = &cl->next;
> +
> + finished = 1;
It is not clear why the "finished" flag is used instead of adding
the last boundary right here (see the sketch below).
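Something along these lines, sketched against the variables of the
patched loop (an assumption about the simplification being asked for,
not committed code):

    if (range == tail) {
        *ll = cl;
        ll = &cl->next;

        /* append the trailing boundary immediately; its buffer then
         * carries last_buf / last_in_chain instead of the data buffer */
        if (ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) {
            return NGX_ERROR;
        }

        break;
    }

which matches roughly what the updated patch later in this thread
ends up doing.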
> + break;
> + }
> +
> + range++;
> + ctx->index++;
> }
>
> - b->in_file = buf->in_file;
> - b->temporary = buf->temporary;
> - b->memory = buf->memory;
> - b->mmap = buf->mmap;
> - b->file = buf->file;
> + *ll = cl;
> + ll = &cl->next;
> + }
> +
> + if (out == NULL) {
> + return NGX_OK;
> + }
> +
> + *ll = NULL;
> +
> + if (finished
> + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK)
> + {
> + return NGX_ERROR;
> + }
> +
> + return ngx_http_next_body_filter(r, out);
> +}
> +
>
> - if (buf->in_file) {
> - b->file_pos = buf->file_pos + range[i].start;
> - b->file_last = buf->file_pos + range[i].end;
> - }
> +static ngx_int_t
> +ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll)
The "ngx_chain_t ***lll" argument suggests it might be a good idea
to somehow improve the interface.
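One possible shape for such an interface - just a sketch of an
alternative, not the committed API - is to return the updated tail
pointer and signal allocation failure with NULL:

    static ngx_chain_t **ngx_http_range_link_boundary_header(
        ngx_http_request_t *r, ngx_http_range_filter_ctx_t *ctx,
        ngx_chain_t **ll);

    /* at the call site */
    ll = ngx_http_range_link_boundary_header(r, ctx, ll);
    if (ll == NULL) {
        return NGX_ERROR;
    }

so that the function appends the two chain links at *ll and hands
back &rcl->next, keeping the caller's bookkeeping to a single pointer.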
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl, *rcl;
> + ngx_http_range_t *range;
> +
> + /*
> + * The boundary header of the range:
> + * CRLF
> + * "--0123456789" CRLF
> + * "Content-Type: image/jpeg" CRLF
> + * "Content-Range: bytes "
> + */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - if (ngx_buf_in_memory(buf)) {
> - b->pos = buf->pos + (size_t) range[i].start;
> - b->last = buf->pos + (size_t) range[i].end;
> - }
> + b->memory = 1;
> + b->pos = ctx->boundary_header.data;
> + b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> +
> + hcl = ngx_alloc_chain_link(r->pool);
> + if (hcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + hcl->buf = b;
> +
> +
> + /* "SSSS-EEEE/TTTT" CRLF CRLF */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - dcl = ngx_alloc_chain_link(r->pool);
> - if (dcl == NULL) {
> - return NGX_ERROR;
> - }
> + range = ctx->ranges.elts;
> + b->temporary = 1;
> + b->pos = range[ctx->index - 1].content_range.data;
> + b->last = range[ctx->index - 1].content_range.data
> + + range[ctx->index - 1].content_range.len;
> +
> + rcl = ngx_alloc_chain_link(r->pool);
> + if (rcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + rcl->buf = b;
>
> - dcl->buf = b;
> + **lll = hcl;
> + hcl->next = rcl;
> + *lll = &rcl->next;
> +
> + return NGX_OK;
> +}
>
> - *ll = hcl;
> - hcl->next = rcl;
> - rcl->next = dcl;
> - ll = &dcl->next;
> - }
> +
> +static ngx_int_t
> +ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl;
>
> /* the last boundary CRLF "--0123456789--" CRLF */
>
> @@ -885,7 +972,8 @@ ngx_http_range_multipart_body(ngx_http_r
> }
>
> b->temporary = 1;
> - b->last_buf = 1;
> + b->last_in_chain = 1;
> + b->last_buf = (r == r->main) ? 1 : 0;
>
> b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof("--" CRLF) - 1);
> @@ -908,7 +996,7 @@ ngx_http_range_multipart_body(ngx_http_r
>
> *ll = hcl;
>
> - return ngx_http_next_body_filter(r, out);
> + return NGX_OK;
> }
>
>
> ------------------ Original ------------------
> From: "胡聪 (hucc)";<hucong.c@foxmail.com>;
> Send time: Friday, Nov 10, 2017 4:41 AM
> To: "nginx-devel"<nginx-devel@nginx.org>;
> Subject: Re: [patch-1] Range filter: support multiple ranges.
>
> Hi,
>
> Please ignore the previous reply. The updated patch is placed at the end.
>
> On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote:
>
> >On Fri, Oct 27, 2017 at 06:50:32PM +0800, 胡聪 (hucc) wrote:
> >
> >> # HG changeset patch
> >> # User hucongcong <hucong.c@foxmail.com>
> >> # Date 1509099940 -28800
> >> # Fri Oct 27 18:25:40 2017 +0800
> >> # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217
> >> # Parent b9850d3deb277bd433a689712c40a84401443520
> >> Range filter: support multiple ranges.
> >
> >This summary line is at least misleading.
>
> Ok, maybe the summary line should be "support multiple ranges when
> the body is in multiple buffers".
>
> >> When multiple ranges are requested, nginx will coalesce any of the ranges
> >> that overlap, or that are separated by a gap that is smaller than the
> >> NGX_HTTP_RANGE_MULTIPART_GAP macro.
> >
> >(Note that the patch also does reordering of ranges. For some
> >reason this is not mentioned in the commit log. There are also
> >other changes not mentioned in the commit log - for example, I see
> >ngx_http_range_t was moved to ngx_http_request.h. These probably
> >do not belong to the patch at all.)
>
> I am actually waiting for you to give better advice. I tried my best
> to make the changes easier and more readable, and I will split them
> into multiple patches based on your suggestions if these changes are
> accepted.
>
> >Reordering and/or coalescing ranges is not something that clients
> >usually expect to happen. This was widely discussed at the time
> >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233
> >introduced the "MAY coalesce" clause. But this doesn't make
> >clients, especially old ones, magically prepared for this.
>
> I did not know about CVE-2011-3192. If the multiple ranges are listed
> in ascending order and there are no overlapping ranges, the code will
> be much simpler. This is what I think.
>
> >Moreover, this will certainly break some use cases like "request
> >some metadata first, and then rest of the file". So this is
> >certainly not a good idea to always reorder / coalesce ranges
> >unless this is really needed for some reason. (Or even at all,
> >as just returning 200 might be much more compatible with various
> >clients, as outlined above.)
> >
> >It is also not clear what you are trying to achieve with this
> >patch. You may want to elaborate more on what problem you are
> >trying to solve, may be there are better solutions.
>
> I am trying to support multiple ranges when proxy_buffering is off
> and, sometimes, slice is enabled. The data is always cached in the
> backend, which is not nginx. As far as I know, a similar architecture
> is widely used in CDNs. So the implementation of multiple ranges in
> the architecture mentioned above is required and inevitable.
> Besides, P2P clients want this feature in order to gather data pieces.
> Hope I have made it clear.
>
> All these changes have been tested. Hope it helps! For now,
> the changes are as follows:
>
> diff -r 32f83fe5747b src/http/modules/ngx_http_range_filter_module.c
> --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 04:31:52 2017 +0800
> @@ -46,16 +46,10 @@
>
>
> typedef struct {
> - off_t start;
> - off_t end;
> - ngx_str_t content_range;
> -} ngx_http_range_t;
> + off_t offset;
> + ngx_uint_t index; /* start with 1 */
>
> -
> -typedef struct {
> - off_t offset;
> - ngx_str_t boundary_header;
> - ngx_array_t ranges;
> + ngx_str_t boundary_header;
> } ngx_http_range_filter_ctx_t;
>
>
> @@ -66,12 +60,14 @@ static ngx_int_t ngx_http_range_singlepa
> static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx);
> static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r);
> -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll);
> +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
>
> static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf);
> static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf);
> @@ -234,7 +230,7 @@ parse:
> r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT;
> r->headers_out.status_line.len = 0;
>
> - if (ctx->ranges.nelts == 1) {
> + if (r->headers_out.ranges->nelts == 1) {
> return ngx_http_range_singlepart_header(r, ctx);
> }
>
> @@ -270,9 +266,9 @@ ngx_http_range_parse(ngx_http_request_t
> ngx_uint_t ranges)
> {
> u_char *p;
> - off_t start, end, size, content_length, cutoff,
> - cutlim;
> - ngx_uint_t suffix;
> + off_t start, end, content_length,
> + cutoff, cutlim;
> + ngx_uint_t suffix, descending;
> ngx_http_range_t *range;
> ngx_http_range_filter_ctx_t *mctx;
>
> @@ -280,19 +276,21 @@ ngx_http_range_parse(ngx_http_request_t
> mctx = ngx_http_get_module_ctx(r->main,
> ngx_http_range_body_filter_module);
> if (mctx) {
> - ctx->ranges = mctx->ranges;
> + r->headers_out.ranges = r->main->headers_out.ranges;
> + ctx->boundary_header = mctx->boundary_header;
> return NGX_OK;
> }
> }
>
> - if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t))
> - != NGX_OK)
> - {
> + r->headers_out.ranges = ngx_array_create(r->pool, 1,
> + sizeof(ngx_http_range_t));
> + if (r->headers_out.ranges == NULL) {
> return NGX_ERROR;
> }
>
> p = r->headers_in.range->value.data + 6;
> - size = 0;
> + range = NULL;
> + descending = 0;
> content_length = r->headers_out.content_length_n;
>
> cutoff = NGX_MAX_OFF_T_VALUE / 10;
> @@ -369,7 +367,12 @@ ngx_http_range_parse(ngx_http_request_t
> found:
>
> if (start < end) {
> - range = ngx_array_push(&ctx->ranges);
> +
> + if (range && start < range->end) {
> + descending++;
> + }
> +
> + range = ngx_array_push(r->headers_out.ranges);
> if (range == NULL) {
> return NGX_ERROR;
> }
> @@ -377,16 +380,6 @@ ngx_http_range_parse(ngx_http_request_t
> range->start = start;
> range->end = end;
>
> - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
> - return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> - }
> -
> - size += end - start;
> -
> - if (ranges-- == 0) {
> - return NGX_DECLINED;
> - }
> -
> } else if (start == 0) {
> return NGX_DECLINED;
> }
> @@ -396,11 +389,15 @@ ngx_http_range_parse(ngx_http_request_t
> }
> }
>
> - if (ctx->ranges.nelts == 0) {
> + if (r->headers_out.ranges->nelts == 0) {
> return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> }
>
> - if (size > content_length) {
> + if (r->headers_out.ranges->nelts > ranges) {
> + r->headers_out.ranges->nelts = ranges;
> + }
> +
> + if (descending) {
> return NGX_DECLINED;
> }
>
> @@ -439,7 +436,7 @@ ngx_http_range_singlepart_header(ngx_htt
>
> /* "Content-Range: bytes SSSS-EEEE/TTTT" header */
>
> - range = ctx->ranges.elts;
> + range = r->headers_out.ranges->elts;
>
> content_range->value.len = ngx_sprintf(content_range->value.data,
> "bytes %O-%O/%O",
> @@ -469,6 +466,10 @@ ngx_http_range_multipart_header(ngx_http
> ngx_http_range_t *range;
> ngx_atomic_uint_t boundary;
>
> + if (r != r->main) {
> + return ngx_http_next_header_filter(r);
> + }
> +
> size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof(CRLF "Content-Type: ") - 1
> + r->headers_out.content_type.len
> @@ -551,8 +552,8 @@ ngx_http_range_multipart_header(ngx_http
>
> len = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1;
>
> - range = ctx->ranges.elts;
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> + range = r->headers_out.ranges->elts;
> + for (i = 0; i < r->headers_out.ranges->nelts; i++) {
>
> /* the size of the range: "SSSS-EEEE/TTTT" CRLF CRLF */
>
> @@ -570,10 +571,11 @@ ngx_http_range_multipart_header(ngx_http
> - range[i].content_range.data;
>
> len += ctx->boundary_header.len + range[i].content_range.len
> - + (range[i].end - range[i].start);
> + + (range[i].end - range[i].start);
> }
>
> r->headers_out.content_length_n = len;
> + r->headers_out.content_offset = range[0].start;
>
> if (r->headers_out.content_length) {
> r->headers_out.content_length->hash = 0;
> @@ -635,67 +637,19 @@ ngx_http_range_body_filter(ngx_http_requ
> return ngx_http_next_body_filter(r, in);
> }
>
> - if (ctx->ranges.nelts == 1) {
> + if (r->headers_out.ranges->nelts == 1) {
> return ngx_http_range_singlepart_body(r, ctx, in);
> }
>
> - /*
> - * multipart ranges are supported only if whole body is in a single buffer
> - */
> -
> if (ngx_buf_special(in->buf)) {
> return ngx_http_next_body_filter(r, in);
> }
>
> - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) {
> - return NGX_ERROR;
> - }
> -
> return ngx_http_range_multipart_body(r, ctx, in);
> }
>
>
> static ngx_int_t
> -ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> -{
> - off_t start, last;
> - ngx_buf_t *buf;
> - ngx_uint_t i;
> - ngx_http_range_t *range;
> -
> - if (ctx->offset) {
> - goto overlapped;
> - }
> -
> - buf = in->buf;
> -
> - if (!buf->last_buf) {
> - start = ctx->offset;
> - last = ctx->offset + ngx_buf_size(buf);
> -
> - range = ctx->ranges.elts;
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> - if (start > range[i].start || last < range[i].end) {
> - goto overlapped;
> - }
> - }
> - }
> -
> - ctx->offset = ngx_buf_size(buf);
> -
> - return NGX_OK;
> -
> -overlapped:
> -
> - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
> - "range in overlapped buffers");
> -
> - return NGX_ERROR;
> -}
> -
> -
> -static ngx_int_t
> ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> @@ -706,7 +660,7 @@ ngx_http_range_singlepart_body(ngx_http_
>
> out = NULL;
> ll = &out;
> - range = ctx->ranges.elts;
> + range = r->headers_out.ranges->elts;
>
> for (cl = in; cl; cl = cl->next) {
>
> @@ -786,96 +740,227 @@ static ngx_int_t
> ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> - ngx_buf_t *b, *buf;
> - ngx_uint_t i;
> - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll;
> - ngx_http_range_t *range;
> + off_t start, last, back;
> + ngx_buf_t *buf, *b;
> + ngx_uint_t i, finished;
> + ngx_chain_t *out, *cl, *ncl, **ll;
> + ngx_http_range_t *range, *tail;
> +
> + range = r->headers_out.ranges->elts;
>
> - ll = &out;
> - buf = in->buf;
> - range = ctx->ranges.elts;
> + if (!ctx->index) {
> + for (i = 0; i < r->headers_out.ranges->nelts; i++) {
> + if (ctx->offset < range[i].end) {
> + ctx->index = i + 1;
> + break;
> + }
> + }
> + }
>
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> + tail = range + r->headers_out.ranges->nelts - 1;
> + range += ctx->index - 1;
>
> - /*
> - * The boundary header of the range:
> - * CRLF
> - * "--0123456789" CRLF
> - * "Content-Type: image/jpeg" CRLF
> - * "Content-Range: bytes "
> - */
> + out = NULL;
> + ll = &out;
> + finished = 0;
> +
> + for (cl = in; cl; cl = cl->next) {
> +
> + buf = cl->buf;
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + start = ctx->offset;
> + last = ctx->offset + ngx_buf_size(buf);
> +
> + ctx->offset = last;
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body buf: %O-%O", start, last);
> +
> + if (ngx_buf_special(buf)) {
> + *ll = cl;
> + ll = &cl->next;
> + continue;
> }
>
> - b->memory = 1;
> - b->pos = ctx->boundary_header.data;
> - b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> + if (range->end <= start || range->start >= last) {
> +
> + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body skip");
>
> - hcl = ngx_alloc_chain_link(r->pool);
> - if (hcl == NULL) {
> - return NGX_ERROR;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last;
> + }
> +
> + buf->pos = buf->last;
> + buf->sync = 1;
> +
> + continue;
> }
>
> - hcl->buf = b;
> + if (range->start >= start) {
>
> + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) {
> + return NGX_ERROR;
> + }
>
> - /* "SSSS-EEEE/TTTT" CRLF CRLF */
> + if (buf->in_file) {
> + buf->file_pos += range->start - start;
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos += (size_t) (range->start - start);
> + }
> }
>
> - b->temporary = 1;
> - b->pos = range[i].content_range.data;
> - b->last = range[i].content_range.data + range[i].content_range.len;
> + if (range->end <= last) {
> +
> + if (range < tail && range[1].start < last) {
> +
> + b = ngx_alloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
> +
> + ncl = ngx_alloc_chain_link(r->pool);
> + if (ncl == NULL) {
> + return NGX_ERROR;
> + }
>
> - rcl = ngx_alloc_chain_link(r->pool);
> - if (rcl == NULL) {
> - return NGX_ERROR;
> - }
> + ncl->buf = b;
> + ncl->next = cl;
> +
> + ngx_memcpy(b, buf, sizeof(ngx_buf_t));
> + b->last_in_chain = 0;
> + b->last_buf = 0;
> +
> + back = last - range->end;
> + ctx->offset -= back;
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body reuse buf: %O-%O",
> + ctx->offset, ctx->offset + back);
>
> - rcl->buf = b;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last - back;
> + }
> +
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos = buf->last - back;
> + }
>
> + cl = ncl;
> + buf = cl->buf;
> + }
> +
> + if (buf->in_file) {
> + buf->file_last -= last - range->end;
> + }
>
> - /* the range data */
> + if (ngx_buf_in_memory(buf)) {
> + buf->last -= (size_t) (last - range->end);
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (range == tail) {
> + buf->last_buf = (r == r->main) ? 1 : 0;
> + buf->last_in_chain = 1;
> + *ll = cl;
> + ll = &cl->next;
> +
> + finished = 1;
> + break;
> + }
> +
> + range++;
> + ctx->index++;
> }
>
> - b->in_file = buf->in_file;
> - b->temporary = buf->temporary;
> - b->memory = buf->memory;
> - b->mmap = buf->mmap;
> - b->file = buf->file;
> + *ll = cl;
> + ll = &cl->next;
> + }
> +
> + if (out == NULL) {
> + return NGX_OK;
> + }
> +
> + *ll = NULL;
> +
> + if (finished
> + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK)
> + {
> + return NGX_ERROR;
> + }
> +
> + return ngx_http_next_body_filter(r, out);
> +}
> +
>
> - if (buf->in_file) {
> - b->file_pos = buf->file_pos + range[i].start;
> - b->file_last = buf->file_pos + range[i].end;
> - }
> +static ngx_int_t
> +ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll)
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl, *rcl;
> + ngx_http_range_t *range;
> +
> + /*
> + * The boundary header of the range:
> + * CRLF
> + * "--0123456789" CRLF
> + * "Content-Type: image/jpeg" CRLF
> + * "Content-Range: bytes "
> + */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - if (ngx_buf_in_memory(buf)) {
> - b->pos = buf->pos + (size_t) range[i].start;
> - b->last = buf->pos + (size_t) range[i].end;
> - }
> + b->memory = 1;
> + b->pos = ctx->boundary_header.data;
> + b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> +
> + hcl = ngx_alloc_chain_link(r->pool);
> + if (hcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + hcl->buf = b;
> +
> +
> + /* "SSSS-EEEE/TTTT" CRLF CRLF */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - dcl = ngx_alloc_chain_link(r->pool);
> - if (dcl == NULL) {
> - return NGX_ERROR;
> - }
> + range = r->headers_out.ranges->elts;
> + b->temporary = 1;
> + b->pos = range[ctx->index - 1].content_range.data;
> + b->last = range[ctx->index - 1].content_range.data
> + + range[ctx->index - 1].content_range.len;
> +
> + rcl = ngx_alloc_chain_link(r->pool);
> + if (rcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + rcl->buf = b;
>
> - dcl->buf = b;
> + **lll = hcl;
> + hcl->next = rcl;
> + *lll = &rcl->next;
> +
> + return NGX_OK;
> +}
>
> - *ll = hcl;
> - hcl->next = rcl;
> - rcl->next = dcl;
> - ll = &dcl->next;
> - }
> +
> +static ngx_int_t
> +ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl;
>
> /* the last boundary CRLF "--0123456789--" CRLF */
>
> @@ -885,7 +970,8 @@ ngx_http_range_multipart_body(ngx_http_r
> }
>
> b->temporary = 1;
> - b->last_buf = 1;
> + b->last_in_chain = 1;
> + b->last_buf = (r == r->main) ? 1 : 0;
>
> b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof("--" CRLF) - 1);
> @@ -908,7 +994,7 @@ ngx_http_range_multipart_body(ngx_http_r
>
> *ll = hcl;
>
> - return ngx_http_next_body_filter(r, out);
> + return NGX_OK;
> }
>
>
> diff -r 32f83fe5747b src/http/modules/ngx_http_slice_filter_module.c
> --- a/src/http/modules/ngx_http_slice_filter_module.c Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/modules/ngx_http_slice_filter_module.c Fri Nov 10 04:31:52 2017 +0800
> @@ -22,6 +22,8 @@ typedef struct {
> ngx_str_t etag;
> unsigned last:1;
> unsigned active:1;
> + unsigned multipart:1;
> + ngx_uint_t index;
> ngx_http_request_t *sr;
> } ngx_http_slice_ctx_t;
>
> @@ -103,7 +105,9 @@ ngx_http_slice_header_filter(ngx_http_re
> {
> off_t end;
> ngx_int_t rc;
> + ngx_uint_t i;
> ngx_table_elt_t *h;
> + ngx_http_range_t *range;
> ngx_http_slice_ctx_t *ctx;
> ngx_http_slice_loc_conf_t *slcf;
> ngx_http_slice_content_range_t cr;
> @@ -182,27 +186,48 @@ ngx_http_slice_header_filter(ngx_http_re
>
> r->allow_ranges = 1;
> r->subrequest_ranges = 1;
> - r->single_range = 1;
>
> rc = ngx_http_next_header_filter(r);
>
> - if (r != r->main) {
> - return rc;
> + if (r == r->main) {
> + r->preserve_body = 1;
> +
> + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
> + ctx->multipart = (r->headers_out.ranges->nelts != 1);
> + range = r->headers_out.ranges->elts;
> +
> + if (ctx->start + (off_t) slcf->size <= range[0].start) {
> + ctx->start = slcf->size * (range[0].start / slcf->size);
> + }
> +
> + ctx->end = range[r->headers_out.ranges->nelts - 1].end;
> +
> + } else {
> + ctx->end = cr.complete_length;
> + }
> }
>
> - r->preserve_body = 1;
> + if (ctx->multipart) {
> + range = r->headers_out.ranges->elts;
> +
> + for (i = ctx->index; i < r->headers_out.ranges->nelts - 1; i++) {
> +
> + if (ctx->start < range[i].end) {
> + ctx->index = i;
> + break;
> + }
>
> - if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
> - if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) {
> - ctx->start = slcf->size
> - * (r->headers_out.content_offset / slcf->size);
> + if (ctx->start + (off_t) slcf->size <= range[i + 1].start) {
> + i++;
> + ctx->index = i;
> + ctx->start = slcf->size * (range[i].start / slcf->size);
> +
> + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "range multipart so fast forward to %O-%O @%O",
> + range[i].start, range[i].end, ctx->start);
> + break;
> + }
> }
> -
> - ctx->end = r->headers_out.content_offset
> - + r->headers_out.content_length_n;
> -
> - } else {
> - ctx->end = cr.complete_length;
> }
>
> return rc;
> diff -r 32f83fe5747b src/http/ngx_http_request.h
> --- a/src/http/ngx_http_request.h Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/ngx_http_request.h Fri Nov 10 04:31:52 2017 +0800
> @@ -251,6 +251,13 @@ typedef struct {
>
>
> typedef struct {
> + off_t start;
> + off_t end;
> + ngx_str_t content_range;
> +} ngx_http_range_t;
> +
> +
> +typedef struct {
> ngx_list_t headers;
> ngx_list_t trailers;
>
> @@ -278,6 +285,7 @@ typedef struct {
> u_char *content_type_lowcase;
> ngx_uint_t content_type_hash;
>
> + ngx_array_t *ranges; /* ngx_http_range_t */
> ngx_array_t cache_control;
>
> off_t content_length_n;
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
After some attempts, I found it is still too hard for me if the
requested ranges are in no particular order. Looking forward to your
code. Anyway, under your guidance, my changes are as follows:
# HG changeset patch
# User hucongcong <hucong.c@foxmail.com>
# Date 1510309868 -28800
# Fri Nov 10 18:31:08 2017 +0800
# Node ID 5c327973a284849a18c042fa6e7e191268b94bac
# Parent 32f83fe5747b55ef341595b18069bee3891874d0
Range filter: better support for multipart ranges.
Introduce support for multipart ranges when the response body is
not in a single buffer, as long as the requested ranges do not
overlap and are properly ordered.
diff -r 32f83fe5747b -r 5c327973a284 src/http/modules/ngx_http_range_filter_module.c
--- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800
+++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800
@@ -54,6 +54,7 @@ typedef struct {
typedef struct {
off_t offset;
+ ngx_uint_t index;
ngx_str_t boundary_header;
ngx_array_t ranges;
} ngx_http_range_filter_ctx_t;
@@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa
static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx);
static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r);
-static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r,
- ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
+static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
+static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf);
static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf);
@@ -222,6 +225,7 @@ parse:
return NGX_ERROR;
}
+ ctx->index = (ngx_uint_t) -1;
ctx->offset = r->headers_out.content_offset;
ranges = r->single_range ? 1 : clcf->max_ranges;
@@ -270,9 +274,8 @@ ngx_http_range_parse(ngx_http_request_t
ngx_uint_t ranges)
{
u_char *p;
- off_t start, end, size, content_length, cutoff,
- cutlim;
- ngx_uint_t suffix;
+ off_t start, end, content_length, cutoff, cutlim;
+ ngx_uint_t suffix, descending;
ngx_http_range_t *range;
ngx_http_range_filter_ctx_t *mctx;
@@ -281,6 +284,7 @@ ngx_http_range_parse(ngx_http_request_t
ngx_http_range_body_filter_module);
if (mctx) {
ctx->ranges = mctx->ranges;
+ ctx->boundary_header = mctx->boundary_header;
return NGX_OK;
}
}
@@ -292,7 +296,8 @@ ngx_http_range_parse(ngx_http_request_t
}
p = r->headers_in.range->value.data + 6;
- size = 0;
+ range = NULL;
+ descending = 0;
content_length = r->headers_out.content_length_n;
cutoff = NGX_MAX_OFF_T_VALUE / 10;
@@ -369,6 +374,11 @@ ngx_http_range_parse(ngx_http_request_t
found:
if (start < end) {
+
+ if (range && start < range->end) {
+ descending++;
+ }
+
range = ngx_array_push(&ctx->ranges);
if (range == NULL) {
return NGX_ERROR;
@@ -377,16 +387,6 @@ ngx_http_range_parse(ngx_http_request_t
range->start = start;
range->end = end;
- if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
- return NGX_HTTP_RANGE_NOT_SATISFIABLE;
- }
-
- size += end - start;
-
- if (ranges-- == 0) {
- return NGX_DECLINED;
- }
-
} else if (start == 0) {
return NGX_DECLINED;
}
@@ -400,7 +400,7 @@ ngx_http_range_parse(ngx_http_request_t
return NGX_HTTP_RANGE_NOT_SATISFIABLE;
}
- if (size > content_length) {
+ if (ctx->ranges.nelts > ranges || descending) {
return NGX_DECLINED;
}
@@ -469,6 +469,22 @@ ngx_http_range_multipart_header(ngx_http
ngx_http_range_t *range;
ngx_atomic_uint_t boundary;
+ if (ctx->index == (ngx_uint_t) -1) {
+ ctx->index = 0;
+ range = ctx->ranges.elts;
+
+ for (i = 0; i < ctx->ranges.nelts; i++) {
+ if (ctx->offset < range[i].end) {
+ ctx->index = i;
+ break;
+ }
+ }
+ }
+
+ if (r != r->main) {
+ return ngx_http_next_header_filter(r);
+ }
+
size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
+ sizeof(CRLF "Content-Type: ") - 1
+ r->headers_out.content_type.len
@@ -574,6 +590,7 @@ ngx_http_range_multipart_header(ngx_http
}
r->headers_out.content_length_n = len;
+ r->headers_out.content_offset = range[0].start;
if (r->headers_out.content_length) {
r->headers_out.content_length->hash = 0;
@@ -639,63 +656,11 @@ ngx_http_range_body_filter(ngx_http_requ
return ngx_http_range_singlepart_body(r, ctx, in);
}
- /*
- * multipart ranges are supported only if whole body is in a single buffer
- */
-
- if (ngx_buf_special(in->buf)) {
- return ngx_http_next_body_filter(r, in);
- }
-
- if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) {
- return NGX_ERROR;
- }
-
return ngx_http_range_multipart_body(r, ctx, in);
}
static ngx_int_t
-ngx_http_range_test_overlapped(ngx_http_request_t *r,
- ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
-{
- off_t start, last;
- ngx_buf_t *buf;
- ngx_uint_t i;
- ngx_http_range_t *range;
-
- if (ctx->offset) {
- goto overlapped;
- }
-
- buf = in->buf;
-
- if (!buf->last_buf) {
- start = ctx->offset;
- last = ctx->offset + ngx_buf_size(buf);
-
- range = ctx->ranges.elts;
- for (i = 0; i < ctx->ranges.nelts; i++) {
- if (start > range[i].start || last < range[i].end) {
- goto overlapped;
- }
- }
- }
-
- ctx->offset = ngx_buf_size(buf);
-
- return NGX_OK;
-
-overlapped:
-
- ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
- "range in overlapped buffers");
-
- return NGX_ERROR;
-}
-
-
-static ngx_int_t
ngx_http_range_singlepart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
{
@@ -786,96 +751,206 @@ static ngx_int_t
ngx_http_range_multipart_body(ngx_http_request_t *r,
ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
{
- ngx_buf_t *b, *buf;
- ngx_uint_t i;
- ngx_chain_t *out, *hcl, *rcl, *dcl, **ll;
- ngx_http_range_t *range;
+ off_t start, last;
+ ngx_buf_t *buf, *b;
+ ngx_chain_t *out, *cl, *tl, **ll;
+ ngx_http_range_t *range, *tail;
+ out = NULL;
ll = &out;
- buf = in->buf;
+
range = ctx->ranges.elts;
-
- for (i = 0; i < ctx->ranges.nelts; i++) {
+ tail = range + ctx->ranges.nelts;
+ range += ctx->index;
- /*
- * The boundary header of the range:
- * CRLF
- * "--0123456789" CRLF
- * "Content-Type: image/jpeg" CRLF
- * "Content-Range: bytes "
- */
+ for (cl = in; cl; cl = cl->next) {
+
+ buf = cl->buf;
- b = ngx_calloc_buf(r->pool);
- if (b == NULL) {
- return NGX_ERROR;
+ start = ctx->offset;
+ last = ctx->offset + ngx_buf_size(buf);
+
+ ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+ "http range multipart body buf: %O-%O", start, last);
+
+ if (ngx_buf_special(buf)) {
+ continue;
}
- b->memory = 1;
- b->pos = ctx->boundary_header.data;
- b->last = ctx->boundary_header.data + ctx->boundary_header.len;
+ if (range->end <= start || range->start >= last) {
+
+ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+ "http range multipart body skip");
- hcl = ngx_alloc_chain_link(r->pool);
- if (hcl == NULL) {
- return NGX_ERROR;
+ if (buf->in_file) {
+ buf->file_pos = buf->file_last;
+ }
+
+ buf->pos = buf->last;
+ buf->sync = 1;
+
+ ctx->offset = last;
+ continue;
}
- hcl->buf = b;
+ if (range->start >= start) {
+ if (ngx_http_range_link_boundary_header(r, ctx, ll) != NGX_OK) {
+ return NGX_ERROR;
+ }
+
+ ll = &(*ll)->next->next;
- /* "SSSS-EEEE/TTTT" CRLF CRLF */
+ if (buf->in_file) {
+ buf->file_pos += range->start - start;
+ }
- b = ngx_calloc_buf(r->pool);
- if (b == NULL) {
- return NGX_ERROR;
+ if (ngx_buf_in_memory(buf)) {
+ buf->pos += (size_t) (range->start - start);
+ }
+
+ start = range->start;
}
- b->temporary = 1;
- b->pos = range[i].content_range.data;
- b->last = range[i].content_range.data + range[i].content_range.len;
+ ctx->offset = last;
+
+ if (range->end <= last) {
+
+ if (range + 1 < tail && range[1].start < last) {
+
+ ctx->offset = range->end;
+
+ b = ngx_alloc_buf(r->pool);
+ if (b == NULL) {
+ return NGX_ERROR;
+ }
- rcl = ngx_alloc_chain_link(r->pool);
- if (rcl == NULL) {
- return NGX_ERROR;
- }
+ tl = ngx_alloc_chain_link(r->pool);
+ if (tl == NULL) {
+ return NGX_ERROR;
+ }
+
+ tl->buf = b;
+ tl->next = cl;
+
+ ngx_memcpy(b, buf, sizeof(ngx_buf_t));
+ b->last_in_chain = 0;
+ b->last_buf = 0;
+
+ if (buf->in_file) {
+ buf->file_pos += range->end - start;
+ }
- rcl->buf = b;
+ if (ngx_buf_in_memory(buf)) {
+ buf->pos += (size_t) (range->end - start);
+ }
+ cl = tl;
+ buf = cl->buf;
+ }
+
+ if (buf->in_file) {
+ buf->file_last -= last - range->end;
+ }
- /* the range data */
+ if (ngx_buf_in_memory(buf)) {
+ buf->last -= (size_t) (last - range->end);
+ }
+
+ ctx->index++;
+ range++;
- b = ngx_calloc_buf(r->pool);
- if (b == NULL) {
- return NGX_ERROR;
+ if (range == tail) {
+ *ll = cl;
+ ll = &cl->next;
+
+ if (ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK) {
+ return NGX_ERROR;
+ }
+
+ break;
+ }
}
- b->in_file = buf->in_file;
- b->temporary = buf->temporary;
- b->memory = buf->memory;
- b->mmap = buf->mmap;
- b->file = buf->file;
+ *ll = cl;
+ ll = &cl->next;
+ }
+
+ if (out == NULL) {
+ return NGX_OK;
+ }
+
+ return ngx_http_next_body_filter(r, out);
+}
+
- if (buf->in_file) {
- b->file_pos = buf->file_pos + range[i].start;
- b->file_last = buf->file_pos + range[i].end;
- }
+static ngx_int_t
+ngx_http_range_link_boundary_header(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
+{
+ ngx_buf_t *b;
+ ngx_chain_t *hcl, *rcl;
+ ngx_http_range_t *range;
+
+ /*
+ * The boundary header of the range:
+ * CRLF
+ * "--0123456789" CRLF
+ * "Content-Type: image/jpeg" CRLF
+ * "Content-Range: bytes "
+ */
+
+ b = ngx_calloc_buf(r->pool);
+ if (b == NULL) {
+ return NGX_ERROR;
+ }
+
+ b->memory = 1;
+ b->pos = ctx->boundary_header.data;
+ b->last = ctx->boundary_header.data + ctx->boundary_header.len;
- if (ngx_buf_in_memory(buf)) {
- b->pos = buf->pos + (size_t) range[i].start;
- b->last = buf->pos + (size_t) range[i].end;
- }
+ hcl = ngx_alloc_chain_link(r->pool);
+ if (hcl == NULL) {
+ return NGX_ERROR;
+ }
+
+ hcl->buf = b;
+
+
+ /* "SSSS-EEEE/TTTT" CRLF CRLF */
+
+ b = ngx_calloc_buf(r->pool);
+ if (b == NULL) {
+ return NGX_ERROR;
+ }
+
+ range = ctx->ranges.elts;
+ b->temporary = 1;
+ b->pos = range[ctx->index].content_range.data;
+ b->last = range[ctx->index].content_range.data
+ + range[ctx->index].content_range.len;
- dcl = ngx_alloc_chain_link(r->pool);
- if (dcl == NULL) {
- return NGX_ERROR;
- }
+ rcl = ngx_alloc_chain_link(r->pool);
+ if (rcl == NULL) {
+ return NGX_ERROR;
+ }
+
+ rcl->buf = b;
+
+ rcl->next = NULL;
+ hcl->next = rcl;
+ *ll = hcl;
- dcl->buf = b;
+ return NGX_OK;
+}
+
- *ll = hcl;
- hcl->next = rcl;
- rcl->next = dcl;
- ll = &dcl->next;
- }
+static ngx_int_t
+ngx_http_range_link_last_boundary(ngx_http_request_t *r,
+ ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
+{
+ ngx_buf_t *b;
+ ngx_chain_t *hcl;
/* the last boundary CRLF "--0123456789--" CRLF */
@@ -885,7 +960,8 @@ ngx_http_range_multipart_body(ngx_http_r
}
b->temporary = 1;
- b->last_buf = 1;
+ b->last_in_chain = 1;
+ b->last_buf = (r == r->main) ? 1 : 0;
b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
+ sizeof("--" CRLF) - 1);
@@ -904,11 +980,11 @@ ngx_http_range_multipart_body(ngx_http_r
}
hcl->buf = b;
+
hcl->next = NULL;
-
*ll = hcl;
- return ngx_http_next_body_filter(r, out);
+ return NGX_OK;
}
------------------ Original ------------------
From: "Maxim Dounin";<mdounin@mdounin.ru>;
Send time: Wednesday, Nov 15, 2017 0:57 AM
To: "nginx-devel"<nginx-devel@nginx.org>;
Subject: Re: [patch-1] Range filter: support multiple ranges.
Hello!
On Fri, Nov 10, 2017 at 07:03:01PM +0800, 胡聪 (hucc) wrote:
> Hi,
>
> How about this as the first patch?
>
> # HG changeset patch
> # User hucongcong <hucong.c@foxmail.com>
> # Date 1510309868 -28800
> # Fri Nov 10 18:31:08 2017 +0800
> # Node ID c32fddd15a26b00f8f293f6b0d8762cd9f2bfbdb
> # Parent 32f83fe5747b55ef341595b18069bee3891874d0
> Range filter: support for multipart response in wider range.
>
> Before the patch, multipart ranges were supported only if the whole
> body was in a single buffer. Now that limit is removed. If there are
> no overlapping ranges and all ranges are listed in ascending order,
> nginx returns 206 with a multipart response; otherwise it returns
> 200 (OK).
Introducing support for multipart ranges when the response body is
not in a single buffer, as long as the requested ranges do not
overlap and are properly ordered, looks like a much better idea to
me.  That's basically what I have in mind as a possible further
enhancement of the range filter if we ever need better support for
multipart ranges.
There are various questions about the patch itself though, see
below.
> diff -r 32f83fe5747b -r c32fddd15a26 src/http/modules/ngx_http_range_filter_module.c
> --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 18:31:08 2017 +0800
> @@ -54,6 +54,7 @@ typedef struct {
>
> typedef struct {
> off_t offset;
> + ngx_uint_t index; /* start with 1 */
> ngx_str_t boundary_header;
> ngx_array_t ranges;
> } ngx_http_range_filter_ctx_t;
> @@ -66,12 +67,14 @@ static ngx_int_t ngx_http_range_singlepa
> static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx);
> static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r);
> -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll);
> +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
>
> static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf);
> static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf);
> @@ -270,9 +273,8 @@ ngx_http_range_parse(ngx_http_request_t
> ngx_uint_t ranges)
> {
> u_char *p;
> - off_t start, end, size, content_length, cutoff,
> - cutlim;
> - ngx_uint_t suffix;
> + off_t start, end, content_length, cutoff, cutlim;
> + ngx_uint_t suffix, descending;
> ngx_http_range_t *range;
> ngx_http_range_filter_ctx_t *mctx;
>
> @@ -281,6 +283,7 @@ ngx_http_range_parse(ngx_http_request_t
> ngx_http_range_body_filter_module);
> if (mctx) {
> ctx->ranges = mctx->ranges;
> + ctx->boundary_header = ctx->boundary_header;
> return NGX_OK;
> }
> }
> @@ -292,7 +295,8 @@ ngx_http_range_parse(ngx_http_request_t
> }
>
> p = r->headers_in.range->value.data + 6;
> - size = 0;
> + range = NULL;
> + descending = 0;
> content_length = r->headers_out.content_length_n;
>
> cutoff = NGX_MAX_OFF_T_VALUE / 10;
> @@ -369,6 +373,11 @@ ngx_http_range_parse(ngx_http_request_t
> found:
>
> if (start < end) {
> +
> + if (range && start < range->end) {
> + descending++;
> + }
> +
> range = ngx_array_push(&ctx->ranges);
> if (range == NULL) {
> return NGX_ERROR;
> @@ -377,16 +386,6 @@ ngx_http_range_parse(ngx_http_request_t
> range->start = start;
> range->end = end;
>
> - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
> - return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> - }
> -
> - size += end - start;
> -
> - if (ranges-- == 0) {
> - return NGX_DECLINED;
> - }
> -
> } else if (start == 0) {
> return NGX_DECLINED;
> }
> @@ -400,7 +399,7 @@ ngx_http_range_parse(ngx_http_request_t
> return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> }
>
> - if (size > content_length) {
> + if (ctx->ranges.nelts > ranges || descending) {
> return NGX_DECLINED;
> }
This change basically disables support for non-ascending ranges.
As previously suggested, this will break various legitimate use
cases, and certainly this is not something we should do.
>
> @@ -469,6 +468,10 @@ ngx_http_range_multipart_header(ngx_http
> ngx_http_range_t *range;
> ngx_atomic_uint_t boundary;
>
> + if (r != r->main) {
> + return ngx_http_next_header_filter(r);
> + }
> +
> size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof(CRLF "Content-Type: ") - 1
> + r->headers_out.content_type.len
> @@ -570,10 +573,11 @@ ngx_http_range_multipart_header(ngx_http
> - range[i].content_range.data;
>
> len += ctx->boundary_header.len + range[i].content_range.len
> - + (range[i].end - range[i].start);
> + + (range[i].end - range[i].start);
This looks like an unrelated whitespace change.
> }
>
> r->headers_out.content_length_n = len;
> + r->headers_out.content_offset = range[0].start;
>
> if (r->headers_out.content_length) {
> r->headers_out.content_length->hash = 0;
> @@ -639,63 +643,15 @@ ngx_http_range_body_filter(ngx_http_requ
> return ngx_http_range_singlepart_body(r, ctx, in);
> }
>
> - /*
> - * multipart ranges are supported only if whole body is in a single buffer
> - */
> -
> if (ngx_buf_special(in->buf)) {
> return ngx_http_next_body_filter(r, in);
> }
The ngx_buf_special() check should not be needed here as long as
ngx_http_range_multipart_body() is modified to properly support
multiple buffers.
>
> - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) {
> - return NGX_ERROR;
> - }
> -
> return ngx_http_range_multipart_body(r, ctx, in);
> }
>
>
> static ngx_int_t
> -ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> -{
> - off_t start, last;
> - ngx_buf_t *buf;
> - ngx_uint_t i;
> - ngx_http_range_t *range;
> -
> - if (ctx->offset) {
> - goto overlapped;
> - }
> -
> - buf = in->buf;
> -
> - if (!buf->last_buf) {
> - start = ctx->offset;
> - last = ctx->offset + ngx_buf_size(buf);
> -
> - range = ctx->ranges.elts;
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> - if (start > range[i].start || last < range[i].end) {
> - goto overlapped;
> - }
> - }
> - }
> -
> - ctx->offset = ngx_buf_size(buf);
> -
> - return NGX_OK;
> -
> -overlapped:
> -
> - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
> - "range in overlapped buffers");
> -
> - return NGX_ERROR;
> -}
> -
> -
> -static ngx_int_t
> ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> @@ -786,96 +742,227 @@ static ngx_int_t
> ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> - ngx_buf_t *b, *buf;
> - ngx_uint_t i;
> - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll;
> - ngx_http_range_t *range;
> + off_t start, last, back;
> + ngx_buf_t *buf, *b;
> + ngx_uint_t i, finished;
> + ngx_chain_t *out, *cl, *ncl, **ll;
> + ngx_http_range_t *range, *tail;
>
> - ll = &out;
> - buf = in->buf;
> range = ctx->ranges.elts;
>
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> + if (!ctx->index) {
> + for (i = 0; i < ctx->ranges.nelts; i++) {
> + if (ctx->offset < range[i].end) {
> + ctx->index = i + 1;
> + break;
> + }
> + }
> + }
All this logic using ctx->index as the range index plus 1 looks
counter-intuitive and unneeded.  Much better options would be
(in no particular order):
- use a special value to mean "uninitialized", like -1;
- always initialize ctx->index to 0 and move it further to the next
range once we see that ctx->offset is larger than range[i].end;
- do proper initialization somewhere in
ngx_http_range_header_filter() or ngx_http_range_multipart_header().
> +
> + tail = range + ctx->ranges.nelts - 1;
> + range += ctx->index - 1;
> +
> + out = NULL;
> + ll = &out;
> + finished = 0;
>
> - /*
> - * The boundary header of the range:
> - * CRLF
> - * "--0123456789" CRLF
> - * "Content-Type: image/jpeg" CRLF
> - * "Content-Range: bytes "
> - */
> + for (cl = in; cl; cl = cl->next) {
> +
> + buf = cl->buf;
> +
> + start = ctx->offset;
> + last = ctx->offset + ngx_buf_size(buf);
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + ctx->offset = last;
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body buf: %O-%O", start, last);
> +
> + if (ngx_buf_special(buf)) {
> + *ll = cl;
> + ll = &cl->next;
> + continue;
> }
>
> - b->memory = 1;
> - b->pos = ctx->boundary_header.data;
> - b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> + if (range->end <= start || range->start >= last) {
> +
> + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body skip");
>
> - hcl = ngx_alloc_chain_link(r->pool);
> - if (hcl == NULL) {
> - return NGX_ERROR;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last;
> + }
> +
> + buf->pos = buf->last;
> + buf->sync = 1;
> +
> + continue;
Looking at this code I tend to think that our existing
ngx_http_range_singlepart_body() implementation you've used as a
reference is incorrect.  It removes buffers from the original
chain as passed to the filter - this can result in a buffer being
lost from tracking by the module that owns it, and in a request
hang if/when all available buffers are lost.  Instead, it should
either preserve all existing chain links or create a new chain.
I'll take a look at how to fix this properly.
> }
>
> - hcl->buf = b;
> + if (range->start >= start) {
>
> + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) {
> + return NGX_ERROR;
> + }
>
> - /* "SSSS-EEEE/TTTT" CRLF CRLF */
> + if (buf->in_file) {
> + buf->file_pos += range->start - start;
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos += (size_t) (range->start - start);
> + }
> }
>
> - b->temporary = 1;
> - b->pos = range[i].content_range.data;
> - b->last = range[i].content_range.data + range[i].content_range.len;
> + if (range->end <= last) {
> +
> + if (range < tail && range[1].start < last) {
The "tail" name is not immediately obvious, and it might be better
idea to name it differently. Also, range[1] looks strange when we
are using range as a pointer and not array. Hopefully this test
will be unneeded when code will be cleaned up to avoid moving
ctx->offset backwards, see below.
> +
> + b = ngx_alloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
> +
> + ncl = ngx_alloc_chain_link(r->pool);
> + if (ncl == NULL) {
> + return NGX_ERROR;
> + }
Note: usual names for temporary chain links are "ln" and "tl".
>
> - rcl = ngx_alloc_chain_link(r->pool);
> - if (rcl == NULL) {
> - return NGX_ERROR;
> - }
> + ncl->buf = b;
> + ncl->next = cl;
> +
> + ngx_memcpy(b, buf, sizeof(ngx_buf_t));
> + b->last_in_chain = 0;
> + b->last_buf = 0;
> +
> + back = last - range->end;
> + ctx->offset -= back;
This looks like a hack; there should be no need to adjust
ctx->offset backwards.  Instead, we should move ctx->offset only
when we are done with a buffer.
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body reuse buf: %O-%O",
> + ctx->offset, ctx->offset + back);
>
> - rcl->buf = b;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last - back;
> + }
> +
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos = buf->last - back;
> + }
>
> + cl = ncl;
> + buf = cl->buf;
> + }
> +
> + if (buf->in_file) {
> + buf->file_last -= last - range->end;
> + }
>
> - /* the range data */
> + if (ngx_buf_in_memory(buf)) {
> + buf->last -= (size_t) (last - range->end);
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (range == tail) {
> + buf->last_buf = (r == r->main) ? 1 : 0;
> + buf->last_in_chain = 1;
> + *ll = cl;
> + ll = &cl->next;
> +
> + finished = 1;
It is not clear why the "finished" flag is used instead of adding
the last boundary right here.
> + break;
> + }
> +
> + range++;
> + ctx->index++;
> }
>
> - b->in_file = buf->in_file;
> - b->temporary = buf->temporary;
> - b->memory = buf->memory;
> - b->mmap = buf->mmap;
> - b->file = buf->file;
> + *ll = cl;
> + ll = &cl->next;
> + }
> +
> + if (out == NULL) {
> + return NGX_OK;
> + }
> +
> + *ll = NULL;
> +
> + if (finished
> + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK)
> + {
> + return NGX_ERROR;
> + }
> +
> + return ngx_http_next_body_filter(r, out);
> +}
> +
>
> - if (buf->in_file) {
> - b->file_pos = buf->file_pos + range[i].start;
> - b->file_last = buf->file_pos + range[i].end;
> - }
> +static ngx_int_t
> +ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll)
The "ngx_chain_t ***lll" argument suggests it might be a good idea
to somehow improve the interface.
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl, *rcl;
> + ngx_http_range_t *range;
> +
> + /*
> + * The boundary header of the range:
> + * CRLF
> + * "--0123456789" CRLF
> + * "Content-Type: image/jpeg" CRLF
> + * "Content-Range: bytes "
> + */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - if (ngx_buf_in_memory(buf)) {
> - b->pos = buf->pos + (size_t) range[i].start;
> - b->last = buf->pos + (size_t) range[i].end;
> - }
> + b->memory = 1;
> + b->pos = ctx->boundary_header.data;
> + b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> +
> + hcl = ngx_alloc_chain_link(r->pool);
> + if (hcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + hcl->buf = b;
> +
> +
> + /* "SSSS-EEEE/TTTT" CRLF CRLF */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - dcl = ngx_alloc_chain_link(r->pool);
> - if (dcl == NULL) {
> - return NGX_ERROR;
> - }
> + range = ctx->ranges.elts;
> + b->temporary = 1;
> + b->pos = range[ctx->index - 1].content_range.data;
> + b->last = range[ctx->index - 1].content_range.data
> + + range[ctx->index - 1].content_range.len;
> +
> + rcl = ngx_alloc_chain_link(r->pool);
> + if (rcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + rcl->buf = b;
>
> - dcl->buf = b;
> + **lll = hcl;
> + hcl->next = rcl;
> + *lll = &rcl->next;
> +
> + return NGX_OK;
> +}
>
> - *ll = hcl;
> - hcl->next = rcl;
> - rcl->next = dcl;
> - ll = &dcl->next;
> - }
> +
> +static ngx_int_t
> +ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl;
>
> /* the last boundary CRLF "--0123456789--" CRLF */
>
> @@ -885,7 +972,8 @@ ngx_http_range_multipart_body(ngx_http_r
> }
>
> b->temporary = 1;
> - b->last_buf = 1;
> + b->last_in_chain = 1;
> + b->last_buf = (r == r->main) ? 1 : 0;
>
> b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof("--" CRLF) - 1);
> @@ -908,7 +996,7 @@ ngx_http_range_multipart_body(ngx_http_r
>
> *ll = hcl;
>
> - return ngx_http_next_body_filter(r, out);
> + return NGX_OK;
> }
>
>
> ------------------ Original ------------------
> From: "胡聪 (hucc)";<hucong.c@foxmail.com>;
> Send time: Friday, Nov 10, 2017 4:41 AM
> To: "nginx-devel"<nginx-devel@nginx.org>;
> Subject: Re: [patch-1] Range filter: support multiple ranges.
>
> Hi,
>
> Please ignore the previous reply. The updated patch is placed at the end.
>
> On Thursday, Nov 9, 2017 10:48 PM +0300 Maxim Dounin wrote:
>
> >On Fri, Oct 27, 2017 at 06:50:32PM +0800, 胡聪 (hucc) wrote:
> >
> >> # HG changeset patch
> >> # User hucongcong <hucong.c@foxmail.com>
> >> # Date 1509099940 -28800
> >> # Fri Oct 27 18:25:40 2017 +0800
> >> # Node ID 62c100a0d42614cd46f0719c0acb0ad914594217
> >> # Parent b9850d3deb277bd433a689712c40a84401443520
> >> Range filter: support multiple ranges.
> >
> >This summary line is at least misleading.
>
> OK, maybe the summary line should be "support multiple ranges when
> the body is in multiple buffers".
>
> >> When multiple ranges are requested, nginx will coalesce any of the ranges
> >> that overlap, or that are separated by a gap that is smaller than the
> >> NGX_HTTP_RANGE_MULTIPART_GAP macro.
> >
> >(Note that the patch also does reordering of ranges. For some
> >reason this is not mentioned in the commit log. There are also
> >other changes not mentioned in the commit log - for example, I see
> >ngx_http_range_t was moved to ngx_http_request.h. These probably
> >do not belong to the patch at all.)
>
> I am actually waiting for you to give better advice. I tried my best
> to make the changes simpler and more readable, and I will split them
> into multiple patches based on your suggestions if these changes are
> accepted.
>
> >Reordering and/or coalescing ranges is not something that clients
> >usually expect to happen. This was widely discussed at the time
> >of CVE-2011-3192 vulnerability in Apache. As a result, RFC 7233
> >introduced the "MAY coalesce" clause. But this doesn't make
> >clients, especially old ones, magically prepared for this.
>
> I did not know about CVE-2011-3192. If the multiple ranges are listed
> in ascending order and there are no overlapping ranges, the code will
> be much simpler. This is what I think.
>
> >Moreover, this will certainly break some use cases like "request
> >some metadata first, and then rest of the file". So this is
> >certainly not a good idea to always reorder / coalesce ranges
> >unless this is really needed for some reason. (Or even at all,
> >as just returning 200 might be much more compatible with various
> >clients, as outlined above.)
> >
> >It is also not clear what you are trying to achieve with this
> >patch. You may want to elaborate more on what problem you are
> >trying to solve, may be there are better solutions.
>
> I am trying to support multiple ranges when proxy_buffering is off
> and sometimes slice is enabled. The data is always cached in the
> backend, which is not nginx. As far as I know, a similar architecture
> is widely used in CDNs. So the implementation of multiple ranges in
> the architecture mentioned above is required and inevitable.
> Besides, P2P clients want this feature to gather data pieces.
> I hope I have made it clear.
>
> All these changes have been tested. Hope it helps! For now,
> the changes are as follows:
>
> diff -r 32f83fe5747b src/http/modules/ngx_http_range_filter_module.c
> --- a/src/http/modules/ngx_http_range_filter_module.c Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/modules/ngx_http_range_filter_module.c Fri Nov 10 04:31:52 2017 +0800
> @@ -46,16 +46,10 @@
>
>
> typedef struct {
> - off_t start;
> - off_t end;
> - ngx_str_t content_range;
> -} ngx_http_range_t;
> + off_t offset;
> + ngx_uint_t index; /* start with 1 */
>
> -
> -typedef struct {
> - off_t offset;
> - ngx_str_t boundary_header;
> - ngx_array_t ranges;
> + ngx_str_t boundary_header;
> } ngx_http_range_filter_ctx_t;
>
>
> @@ -66,12 +60,14 @@ static ngx_int_t ngx_http_range_singlepa
> static ngx_int_t ngx_http_range_multipart_header(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx);
> static ngx_int_t ngx_http_range_not_satisfiable(ngx_http_request_t *r);
> -static ngx_int_t ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> static ngx_int_t ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in);
> +static ngx_int_t ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll);
> +static ngx_int_t ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll);
>
> static ngx_int_t ngx_http_range_header_filter_init(ngx_conf_t *cf);
> static ngx_int_t ngx_http_range_body_filter_init(ngx_conf_t *cf);
> @@ -234,7 +230,7 @@ parse:
> r->headers_out.status = NGX_HTTP_PARTIAL_CONTENT;
> r->headers_out.status_line.len = 0;
>
> - if (ctx->ranges.nelts == 1) {
> + if (r->headers_out.ranges->nelts == 1) {
> return ngx_http_range_singlepart_header(r, ctx);
> }
>
> @@ -270,9 +266,9 @@ ngx_http_range_parse(ngx_http_request_t
> ngx_uint_t ranges)
> {
> u_char *p;
> - off_t start, end, size, content_length, cutoff,
> - cutlim;
> - ngx_uint_t suffix;
> + off_t start, end, content_length,
> + cutoff, cutlim;
> + ngx_uint_t suffix, descending;
> ngx_http_range_t *range;
> ngx_http_range_filter_ctx_t *mctx;
>
> @@ -280,19 +276,21 @@ ngx_http_range_parse(ngx_http_request_t
> mctx = ngx_http_get_module_ctx(r->main,
> ngx_http_range_body_filter_module);
> if (mctx) {
> - ctx->ranges = mctx->ranges;
> + r->headers_out.ranges = r->main->headers_out.ranges;
> + ctx->boundary_header = mctx->boundary_header;
> return NGX_OK;
> }
> }
>
> - if (ngx_array_init(&ctx->ranges, r->pool, 1, sizeof(ngx_http_range_t))
> - != NGX_OK)
> - {
> + r->headers_out.ranges = ngx_array_create(r->pool, 1,
> + sizeof(ngx_http_range_t));
> + if (r->headers_out.ranges == NULL) {
> return NGX_ERROR;
> }
>
> p = r->headers_in.range->value.data + 6;
> - size = 0;
> + range = NULL;
> + descending = 0;
> content_length = r->headers_out.content_length_n;
>
> cutoff = NGX_MAX_OFF_T_VALUE / 10;
> @@ -369,7 +367,12 @@ ngx_http_range_parse(ngx_http_request_t
> found:
>
> if (start < end) {
> - range = ngx_array_push(&ctx->ranges);
> +
> + if (range && start < range->end) {
> + descending++;
> + }
> +
> + range = ngx_array_push(r->headers_out.ranges);
> if (range == NULL) {
> return NGX_ERROR;
> }
> @@ -377,16 +380,6 @@ ngx_http_range_parse(ngx_http_request_t
> range->start = start;
> range->end = end;
>
> - if (size > NGX_MAX_OFF_T_VALUE - (end - start)) {
> - return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> - }
> -
> - size += end - start;
> -
> - if (ranges-- == 0) {
> - return NGX_DECLINED;
> - }
> -
> } else if (start == 0) {
> return NGX_DECLINED;
> }
> @@ -396,11 +389,15 @@ ngx_http_range_parse(ngx_http_request_t
> }
> }
>
> - if (ctx->ranges.nelts == 0) {
> + if (r->headers_out.ranges->nelts == 0) {
> return NGX_HTTP_RANGE_NOT_SATISFIABLE;
> }
>
> - if (size > content_length) {
> + if (r->headers_out.ranges->nelts > ranges) {
> + r->headers_out.ranges->nelts = ranges;
> + }
> +
> + if (descending) {
> return NGX_DECLINED;
> }
>
> @@ -439,7 +436,7 @@ ngx_http_range_singlepart_header(ngx_htt
>
> /* "Content-Range: bytes SSSS-EEEE/TTTT" header */
>
> - range = ctx->ranges.elts;
> + range = r->headers_out.ranges->elts;
>
> content_range->value.len = ngx_sprintf(content_range->value.data,
> "bytes %O-%O/%O",
> @@ -469,6 +466,10 @@ ngx_http_range_multipart_header(ngx_http
> ngx_http_range_t *range;
> ngx_atomic_uint_t boundary;
>
> + if (r != r->main) {
> + return ngx_http_next_header_filter(r);
> + }
> +
> size = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof(CRLF "Content-Type: ") - 1
> + r->headers_out.content_type.len
> @@ -551,8 +552,8 @@ ngx_http_range_multipart_header(ngx_http
>
> len = sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN + sizeof("--" CRLF) - 1;
>
> - range = ctx->ranges.elts;
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> + range = r->headers_out.ranges->elts;
> + for (i = 0; i < r->headers_out.ranges->nelts; i++) {
>
> /* the size of the range: "SSSS-EEEE/TTTT" CRLF CRLF */
>
> @@ -570,10 +571,11 @@ ngx_http_range_multipart_header(ngx_http
> - range[i].content_range.data;
>
> len += ctx->boundary_header.len + range[i].content_range.len
> - + (range[i].end - range[i].start);
> + + (range[i].end - range[i].start);
> }
>
> r->headers_out.content_length_n = len;
> + r->headers_out.content_offset = range[0].start;
>
> if (r->headers_out.content_length) {
> r->headers_out.content_length->hash = 0;
> @@ -635,67 +637,19 @@ ngx_http_range_body_filter(ngx_http_requ
> return ngx_http_next_body_filter(r, in);
> }
>
> - if (ctx->ranges.nelts == 1) {
> + if (r->headers_out.ranges->nelts == 1) {
> return ngx_http_range_singlepart_body(r, ctx, in);
> }
>
> - /*
> - * multipart ranges are supported only if whole body is in a single buffer
> - */
> -
> if (ngx_buf_special(in->buf)) {
> return ngx_http_next_body_filter(r, in);
> }
>
> - if (ngx_http_range_test_overlapped(r, ctx, in) != NGX_OK) {
> - return NGX_ERROR;
> - }
> -
> return ngx_http_range_multipart_body(r, ctx, in);
> }
>
>
> static ngx_int_t
> -ngx_http_range_test_overlapped(ngx_http_request_t *r,
> - ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> -{
> - off_t start, last;
> - ngx_buf_t *buf;
> - ngx_uint_t i;
> - ngx_http_range_t *range;
> -
> - if (ctx->offset) {
> - goto overlapped;
> - }
> -
> - buf = in->buf;
> -
> - if (!buf->last_buf) {
> - start = ctx->offset;
> - last = ctx->offset + ngx_buf_size(buf);
> -
> - range = ctx->ranges.elts;
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> - if (start > range[i].start || last < range[i].end) {
> - goto overlapped;
> - }
> - }
> - }
> -
> - ctx->offset = ngx_buf_size(buf);
> -
> - return NGX_OK;
> -
> -overlapped:
> -
> - ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
> - "range in overlapped buffers");
> -
> - return NGX_ERROR;
> -}
> -
> -
> -static ngx_int_t
> ngx_http_range_singlepart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> @@ -706,7 +660,7 @@ ngx_http_range_singlepart_body(ngx_http_
>
> out = NULL;
> ll = &out;
> - range = ctx->ranges.elts;
> + range = r->headers_out.ranges->elts;
>
> for (cl = in; cl; cl = cl->next) {
>
> @@ -786,96 +740,227 @@ static ngx_int_t
> ngx_http_range_multipart_body(ngx_http_request_t *r,
> ngx_http_range_filter_ctx_t *ctx, ngx_chain_t *in)
> {
> - ngx_buf_t *b, *buf;
> - ngx_uint_t i;
> - ngx_chain_t *out, *hcl, *rcl, *dcl, **ll;
> - ngx_http_range_t *range;
> + off_t start, last, back;
> + ngx_buf_t *buf, *b;
> + ngx_uint_t i, finished;
> + ngx_chain_t *out, *cl, *ncl, **ll;
> + ngx_http_range_t *range, *tail;
> +
> + range = r->headers_out.ranges->elts;
>
> - ll = &out;
> - buf = in->buf;
> - range = ctx->ranges.elts;
> + if (!ctx->index) {
> + for (i = 0; i < r->headers_out.ranges->nelts; i++) {
> + if (ctx->offset < range[i].end) {
> + ctx->index = i + 1;
> + break;
> + }
> + }
> + }
>
> - for (i = 0; i < ctx->ranges.nelts; i++) {
> + tail = range + r->headers_out.ranges->nelts - 1;
> + range += ctx->index - 1;
>
> - /*
> - * The boundary header of the range:
> - * CRLF
> - * "--0123456789" CRLF
> - * "Content-Type: image/jpeg" CRLF
> - * "Content-Range: bytes "
> - */
> + out = NULL;
> + ll = &out;
> + finished = 0;
> +
> + for (cl = in; cl; cl = cl->next) {
> +
> + buf = cl->buf;
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + start = ctx->offset;
> + last = ctx->offset + ngx_buf_size(buf);
> +
> + ctx->offset = last;
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body buf: %O-%O", start, last);
> +
> + if (ngx_buf_special(buf)) {
> + *ll = cl;
> + ll = &cl->next;
> + continue;
> }
>
> - b->memory = 1;
> - b->pos = ctx->boundary_header.data;
> - b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> + if (range->end <= start || range->start >= last) {
> +
> + ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body skip");
>
> - hcl = ngx_alloc_chain_link(r->pool);
> - if (hcl == NULL) {
> - return NGX_ERROR;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last;
> + }
> +
> + buf->pos = buf->last;
> + buf->sync = 1;
> +
> + continue;
> }
>
> - hcl->buf = b;
> + if (range->start >= start) {
>
> + if (ngx_http_range_link_boundary_header(r, ctx, &ll) != NGX_OK) {
> + return NGX_ERROR;
> + }
>
> - /* "SSSS-EEEE/TTTT" CRLF CRLF */
> + if (buf->in_file) {
> + buf->file_pos += range->start - start;
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos += (size_t) (range->start - start);
> + }
> }
>
> - b->temporary = 1;
> - b->pos = range[i].content_range.data;
> - b->last = range[i].content_range.data + range[i].content_range.len;
> + if (range->end <= last) {
> +
> + if (range < tail && range[1].start < last) {
> +
> + b = ngx_alloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
> +
> + ncl = ngx_alloc_chain_link(r->pool);
> + if (ncl == NULL) {
> + return NGX_ERROR;
> + }
>
> - rcl = ngx_alloc_chain_link(r->pool);
> - if (rcl == NULL) {
> - return NGX_ERROR;
> - }
> + ncl->buf = b;
> + ncl->next = cl;
> +
> + ngx_memcpy(b, buf, sizeof(ngx_buf_t));
> + b->last_in_chain = 0;
> + b->last_buf = 0;
> +
> + back = last - range->end;
> + ctx->offset -= back;
> +
> + ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "http range multipart body reuse buf: %O-%O",
> + ctx->offset, ctx->offset + back);
>
> - rcl->buf = b;
> + if (buf->in_file) {
> + buf->file_pos = buf->file_last - back;
> + }
> +
> + if (ngx_buf_in_memory(buf)) {
> + buf->pos = buf->last - back;
> + }
>
> + cl = ncl;
> + buf = cl->buf;
> + }
> +
> + if (buf->in_file) {
> + buf->file_last -= last - range->end;
> + }
>
> - /* the range data */
> + if (ngx_buf_in_memory(buf)) {
> + buf->last -= (size_t) (last - range->end);
> + }
>
> - b = ngx_calloc_buf(r->pool);
> - if (b == NULL) {
> - return NGX_ERROR;
> + if (range == tail) {
> + buf->last_buf = (r == r->main) ? 1 : 0;
> + buf->last_in_chain = 1;
> + *ll = cl;
> + ll = &cl->next;
> +
> + finished = 1;
> + break;
> + }
> +
> + range++;
> + ctx->index++;
> }
>
> - b->in_file = buf->in_file;
> - b->temporary = buf->temporary;
> - b->memory = buf->memory;
> - b->mmap = buf->mmap;
> - b->file = buf->file;
> + *ll = cl;
> + ll = &cl->next;
> + }
> +
> + if (out == NULL) {
> + return NGX_OK;
> + }
> +
> + *ll = NULL;
> +
> + if (finished
> + && ngx_http_range_link_last_boundary(r, ctx, ll) != NGX_OK)
> + {
> + return NGX_ERROR;
> + }
> +
> + return ngx_http_next_body_filter(r, out);
> +}
> +
>
> - if (buf->in_file) {
> - b->file_pos = buf->file_pos + range[i].start;
> - b->file_last = buf->file_pos + range[i].end;
> - }
> +static ngx_int_t
> +ngx_http_range_link_boundary_header(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t ***lll)
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl, *rcl;
> + ngx_http_range_t *range;
> +
> + /*
> + * The boundary header of the range:
> + * CRLF
> + * "--0123456789" CRLF
> + * "Content-Type: image/jpeg" CRLF
> + * "Content-Range: bytes "
> + */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - if (ngx_buf_in_memory(buf)) {
> - b->pos = buf->pos + (size_t) range[i].start;
> - b->last = buf->pos + (size_t) range[i].end;
> - }
> + b->memory = 1;
> + b->pos = ctx->boundary_header.data;
> + b->last = ctx->boundary_header.data + ctx->boundary_header.len;
> +
> + hcl = ngx_alloc_chain_link(r->pool);
> + if (hcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + hcl->buf = b;
> +
> +
> + /* "SSSS-EEEE/TTTT" CRLF CRLF */
> +
> + b = ngx_calloc_buf(r->pool);
> + if (b == NULL) {
> + return NGX_ERROR;
> + }
>
> - dcl = ngx_alloc_chain_link(r->pool);
> - if (dcl == NULL) {
> - return NGX_ERROR;
> - }
> + range = r->headers_out.ranges->elts;
> + b->temporary = 1;
> + b->pos = range[ctx->index - 1].content_range.data;
> + b->last = range[ctx->index - 1].content_range.data
> + + range[ctx->index - 1].content_range.len;
> +
> + rcl = ngx_alloc_chain_link(r->pool);
> + if (rcl == NULL) {
> + return NGX_ERROR;
> + }
> +
> + rcl->buf = b;
>
> - dcl->buf = b;
> + **lll = hcl;
> + hcl->next = rcl;
> + *lll = &rcl->next;
> +
> + return NGX_OK;
> +}
>
> - *ll = hcl;
> - hcl->next = rcl;
> - rcl->next = dcl;
> - ll = &dcl->next;
> - }
> +
> +static ngx_int_t
> +ngx_http_range_link_last_boundary(ngx_http_request_t *r,
> + ngx_http_range_filter_ctx_t *ctx, ngx_chain_t **ll)
> +{
> + ngx_buf_t *b;
> + ngx_chain_t *hcl;
>
> /* the last boundary CRLF "--0123456789--" CRLF */
>
> @@ -885,7 +970,8 @@ ngx_http_range_multipart_body(ngx_http_r
> }
>
> b->temporary = 1;
> - b->last_buf = 1;
> + b->last_in_chain = 1;
> + b->last_buf = (r == r->main) ? 1 : 0;
>
> b->pos = ngx_pnalloc(r->pool, sizeof(CRLF "--") - 1 + NGX_ATOMIC_T_LEN
> + sizeof("--" CRLF) - 1);
> @@ -908,7 +994,7 @@ ngx_http_range_multipart_body(ngx_http_r
>
> *ll = hcl;
>
> - return ngx_http_next_body_filter(r, out);
> + return NGX_OK;
> }
>
>
> diff -r 32f83fe5747b src/http/modules/ngx_http_slice_filter_module.c
> --- a/src/http/modules/ngx_http_slice_filter_module.c Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/modules/ngx_http_slice_filter_module.c Fri Nov 10 04:31:52 2017 +0800
> @@ -22,6 +22,8 @@ typedef struct {
> ngx_str_t etag;
> unsigned last:1;
> unsigned active:1;
> + unsigned multipart:1;
> + ngx_uint_t index;
> ngx_http_request_t *sr;
> } ngx_http_slice_ctx_t;
>
> @@ -103,7 +105,9 @@ ngx_http_slice_header_filter(ngx_http_re
> {
> off_t end;
> ngx_int_t rc;
> + ngx_uint_t i;
> ngx_table_elt_t *h;
> + ngx_http_range_t *range;
> ngx_http_slice_ctx_t *ctx;
> ngx_http_slice_loc_conf_t *slcf;
> ngx_http_slice_content_range_t cr;
> @@ -182,27 +186,48 @@ ngx_http_slice_header_filter(ngx_http_re
>
> r->allow_ranges = 1;
> r->subrequest_ranges = 1;
> - r->single_range = 1;
>
> rc = ngx_http_next_header_filter(r);
>
> - if (r != r->main) {
> - return rc;
> + if (r == r->main) {
> + r->preserve_body = 1;
> +
> + if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
> + ctx->multipart = (r->headers_out.ranges->nelts != 1);
> + range = r->headers_out.ranges->elts;
> +
> + if (ctx->start + (off_t) slcf->size <= range[0].start) {
> + ctx->start = slcf->size * (range[0].start / slcf->size);
> + }
> +
> + ctx->end = range[r->headers_out.ranges->nelts - 1].end;
> +
> + } else {
> + ctx->end = cr.complete_length;
> + }
> }
>
> - r->preserve_body = 1;
> + if (ctx->multipart) {
> + range = r->headers_out.ranges->elts;
> +
> + for (i = ctx->index; i < r->headers_out.ranges->nelts - 1; i++) {
> +
> + if (ctx->start < range[i].end) {
> + ctx->index = i;
> + break;
> + }
>
> - if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
> - if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) {
> - ctx->start = slcf->size
> - * (r->headers_out.content_offset / slcf->size);
> + if (ctx->start + (off_t) slcf->size <= range[i + 1].start) {
> + i++;
> + ctx->index = i;
> + ctx->start = slcf->size * (range[i].start / slcf->size);
> +
> + ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
> + "range multipart so fast forward to %O-%O @%O",
> + range[i].start, range[i].end, ctx->start);
> + break;
> + }
> }
> -
> - ctx->end = r->headers_out.content_offset
> - + r->headers_out.content_length_n;
> -
> - } else {
> - ctx->end = cr.complete_length;
> }
>
> return rc;
> diff -r 32f83fe5747b src/http/ngx_http_request.h
> --- a/src/http/ngx_http_request.h Fri Oct 27 00:30:38 2017 +0800
> +++ b/src/http/ngx_http_request.h Fri Nov 10 04:31:52 2017 +0800
> @@ -251,6 +251,13 @@ typedef struct {
>
>
> typedef struct {
> + off_t start;
> + off_t end;
> + ngx_str_t content_range;
> +} ngx_http_range_t;
> +
> +
> +typedef struct {
> ngx_list_t headers;
> ngx_list_t trailers;
>
> @@ -278,6 +285,7 @@ typedef struct {
> u_char *content_type_lowcase;
> ngx_uint_t content_type_hash;
>
> + ngx_array_t *ranges; /* ngx_http_range_t */
> ngx_array_t cache_control;
>
> off_t content_length_n;
> _______________________________________________
> nginx-devel mailing list
> nginx-devel@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-devel
--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel
↧
↧
Re: nginx-1.13.7
Maxim Dounin Wrote:
-------------------------------------------------------
> Changes with nginx 1.13.7                                       21 Nov 2017
> *) Bugfix: nginx returned the 500 error if parameters without variables
>    were specified in the "xslt_stylesheet" directive.
Excuse me, but can nginx actually do XSL transformation?
↧
Re: nginx-1.13.7
Hello!
On Tue, Nov 21, 2017 at 12:18:41PM -0500, vitcool wrote:
> Maxim Dounin Wrote:
> -------------------------------------------------------
> > Changes with nginx 1.13.7                                     21 Nov 2017
> > *) Bugfix: nginx returned the 500 error if parameters without variables
> >    were specified in the "xslt_stylesheet" directive.
>
>
> Excuse me, but can nginx actually do XSL transformation?
Yes, since 0.7.8.
More details here:
http://nginx.org/ru/docs/http/ngx_http_xslt_module.html
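For illustration, a minimal location sketch using the module; it assumes
nginx was built with --with-http_xslt_module, and the stylesheet path and
the "title" parameter are hypothetical:

    location / {
        # text/xml is also the default for xslt_types
        xslt_types      text/xml;

        # apply one.xslt and pass a single stylesheet parameter
        xslt_stylesheet /site/xslt/one.xslt
                        title='Hello';
    }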
--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru
↧
Re: [nginx-announce] nginx-1.13.7
Hello Nginx users,
Now available: Nginx 1.13.7 for Windows https://kevinworthington.com/nginxwin1137 (32-bit and 64-bit versions)
These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.
Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/
Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/
On Tue, Nov 21, 2017 at 10:26 AM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Changes with nginx 1.13.7                                       21 Nov 2017
>
> *) Bugfix: in the $upstream_status variable.
>
> *) Bugfix: a segmentation fault might occur in a worker process if a
> backend returned a "101 Switching Protocols" response to a
> subrequest.
>
> *) Bugfix: a segmentation fault occurred in a master process if a shared
>    memory zone size was changed during a reconfiguration and the
> reconfiguration failed.
>
> *) Bugfix: in the ngx_http_fastcgi_module.
>
> *) Bugfix: nginx returned the 500 error if parameters without variables
> were specified in the "xslt_stylesheet" directive.
>
> *) Workaround: "gzip filter failed to use preallocated memory" alerts
> appeared in logs when using a zlib library variant from Intel.
>
> *) Bugfix: the "worker_shutdown_timeout" directive did not work when
> using mail proxy and when proxying WebSocket connections.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
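As a side note on the last changelog item quoted above, here is a minimal
sketch of a configuration it affects; the listen port, the "backend"
upstream and the /chat/ location are hypothetical, and 30s is only an
example value:

    # worker_shutdown_timeout belongs to the main context
    worker_shutdown_timeout 30s;

    events {
    }

    http {
        upstream backend {
            server 127.0.0.1:8080;
        }

        server {
            listen 80;

            # the Upgrade/Connection headers are what allow WebSocket
            # connections to be proxied through nginx
            location /chat/ {
                proxy_pass http://backend;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
            }
        }
    }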
↧