Channel: Nginx Forum

Re: [PATCH] Chunked filter: check if ctx is null

Hello!

On Wed, Jan 03, 2018 at 07:53:00PM +0100, Jan Prachař wrote:

> There exists a path that reaches the body filter of the chunked
> filter module while the module ctx is NULL, which results in a segfault.
>
> If, while piping a chunked response from upstream to downstream, both
> an upstream and a downstream error happen, an internal redirect to a named
> location is performed (according to the error_page directive) and the
> modules' contexts are cleared. If you have a lua handler in that
> location, it starts sending a body, because the headers were already
> sent. A crash in the chunked filter module follows, because ctx is NULL.
>
> Maybe there is also a problem in the lua module and it should call
> the header filters first. Also, maybe nginx should not perform an internal
> redirect if part of the body was already sent.
>
> But better safe than sorry :) I found that the same checks are in the body
> filters of other core modules too.

Trying to fix the chunked filter to tolerate such incorrect
behaviour looks like a bad idea. We can't reasonably assume all
filters are prepared for this. And even if we were able to modify
them all - if the connection remains open in such a situation, the
resulting mess at the protocol level will likely cause other
problems, including security ones. As such, the root cause should
be fixed instead.

To catch cases when a duplicate response is returned after the
header was already sent we have a dedicated check in the
ngx_http_send_header() function, see this commit for details:

http://hg.nginx.org/nginx/rev/03ff14058272

Trying to bypass this check is a bad idea. The same applies to
conditionally sending headers based on the r->headers_sent flag,
as it will mean that the check will be bypassed. This is what the
lua module seems to be doing, and it should be fixed to avoid
doing this.

The other part of the equation is how and why error_page is called
after the header was already sent. If you know a scenario where
error_page can be called with the header already sent, you may
want to focus on reproducing and fixing this. Normally this is
expected to result in the "header already sent" alerts produced by
the check discussed above.

--
Maxim Dounin
http://mdounin.ru/
_______________________________________________
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Allow the use of the "env" directive in contexts other than "main"

Hi folks,

Is there a reason why the "env" directive is only allowed inside the "main"
context?

It would simplify many of my Docker deployments if I could do away with sed
and envsubst and use the "env" directive directly.

If the maintainers approve the inclusion of this feature in Nginx, I would
like to offer my time to this project by implementing this functionality.

Regards, German Jaber
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

nginx latency/performance issues


Which user for the folders?

Good day. I'm just getting acquainted with nginx.
Please tell me which user is correct to use for the folders: nginx or www-data?

Re: 504 gateway timeouts

The version that is on the ubuntu servers was 1.10.xx. I just updated it to

nginx version: nginx/1.13.8

And I am still having the same issue.

How do I "Try to flush out some output early on so that nginx will know
that Tomcat is alive."

The nginx and tomcat connection is working fine for all requests/responses
that take less than 60 seconds.

On Wed, Dec 27, 2017 at 4:18 PM, Igal @ Lucee.org <igal@lucee.org> wrote:

> On 12/27/2017 2:03 PM, Wade Girard wrote:
>
> I am using nginx on an ubuntu server as a proxy to a tomcat server.
>
> The nginx server is setup for https.
>
> I don't know how to determine what version of nginx I am using, but I
> install it on the ubuntu 1.16 server using apt-get.
>
> Run: nginx -v
>
>
> I have an issue that I have resolved locally on my Mac (using version 1.12
> of nginx and Tomcat 7) where requests through the proxy that take more than
> 60 seconds were failing, they are now working.
>
> What seemed to be the fix was adding the following to the nginx.conf file
>
> proxy_connect_timeout 600;
>
> proxy_send_timeout 600;
>
> proxy_read_timeout 600;
>
> send_timeout 600;
>
> in the location section for my proxy.
>
>
> However this same change in the ubuntu servers has no effect at all.
>
> Try to flush out some output early on so that nginx will know that Tomcat
> is alive.
>
> Igal Sapir
> Lucee Core Developer
> Lucee.org http://lucee.org/
>
>
>


--
Wade Girard
c: 612.363.0902

how to enable http2 for two server hosted on the same IP

Hi All,

If I use

server {
    listen 443 accept_filter=dataready ssl http2;
}

server {
    listen 443 http2 sndbuf=512k;
}

I'll get the error:

duplicate listen options for 0.0.0.0:443

I know it's caused by http2 in server 2. But how can I enable http2 on both servers?

Is an IP address valid for server_name?

So I will be using nginx as a reverse proxy. I do not have a domain name for my server yet. I am in development.

Can I use the IP address such as the following in /etc/nginx/sites-available/default:

server {
    listen 80;

    server_name 1.2.3.4;  # Obvious fake IP.

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Thanks in advance for your reply!

Ray

Re: Conditionally include conf files in nginx

Hi, you may use the include directive with a glob() pattern trick.

For instance, replace this

include /path/to/something/nonexisting.conf

with

include /path/to/something/nonexisting[.]conf

The config parser won't complain if such a file does not exist.
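A sketch of the trick in context (paths here are illustrative): the bracketed character makes nginx treat the name as a glob pattern, and a pattern that matches no files is not an error.

```nginx
# Fails at startup if the file is absent:
# include /etc/nginx/optional/extra.conf;

# Parsed as a glob pattern, so a missing file is silently skipped:
include /etc/nginx/optional/extra[.]conf;

# The same mechanism is why the stock wildcard include works even
# when the directory is empty:
include /etc/nginx/conf.d/*.conf;
```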

Re: Which user for the folders?

It depends on which user is specified in the config.
And it also depends a lot on the task.

2018-01-04 1:19 GMT-06:00 Pa1amar <nginx-forum@forum.nginx.org>:

> Good day. I'm just getting acquainted with nginx.
> Please tell me which user is correct to use for the folders:
> nginx or www-data?
>




--
With best regards,
Dmitriy Lyalyuev
https://lyalyuev.info
_______________________________________________
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

Re: how to enable http2 for two server hosted on the same IP

meteor8488:

> Hi All,
>
> If I use
>
> server {
>     listen 443 accept_filter=dataready ssl http2;
> }
> server {
>     listen 443 http2 sndbuf=512k;
> }
>
> I'll get error
> duplicate listen options for 0.0.0.0:443
>
> I know it's caused by http2 in server 2.

Probably you're wrong: the error is caused by specifying sndbuf in the second server.

from https://nginx.org/r/listen:
The listen directive can have several additional parameters
specific to socket-related system calls.
These parameters can be specified in any listen directive, but
only once for a given address:port pair.

"but only once for a given address:port pair" is the point!

multiple options: ssl, http2, spdy, proxy_protocol
single options: setfib, fastopen, backlog, ...
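Applied to the original question, a sketch of a pair of server blocks that should avoid the error (server names are illustrative): the socket-level parameters (accept_filter, sndbuf) appear in only one listen for the address:port pair, while ssl and http2 may be repeated.

```nginx
server {
    # All socket-level parameters for 0.0.0.0:443 go here, once:
    listen 443 accept_filter=dataready sndbuf=512k ssl http2;
    server_name one.example.com;
}

server {
    # Protocol options (ssl, http2) may be repeated in every listen
    # for this address:port; socket-level options must not be:
    listen 443 ssl http2;
    server_name two.example.com;
}
```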

Andreas



Re: nginx latency/performance issues

Hello!

On Wed, Jan 03, 2018 at 09:01:16PM -0500, Ameer Antar wrote:

> Hello All,
> First time subscriber here, as I am new to using nginx. My question is
> about latency/performance issue as compared to our previously
> configured Apache server. Our web application is running on Apache +
> PHP-FPM, and we are planning on publishing a new site which contains
> only static files "published" from within the web application on a set
> interval. Thinking of saving the overhead of apache and php, I've setup
> nginx-light package on ubuntu and configured the server with minimal
> changes from default. Just to see what kind of improvement we have, I
> compared avg response times for the same static javascript file and
> noticed a difference, but the opposite of what I expected: ~128ms from
> apache and ~142ms from nginx. I've also tested with php enabled on
> nginx and seen about the same results.
> There must be something not right. I've looked at a lot of performance
> tips, but no huge difference. The only minor help was switching to
> buffered logging, but the difference is probably close to the margin of
> error. Can anyone help?

Depending on what and how you've measured, the difference between
~128ms and ~142ms might be either non-significant, or explainable by
different settings, or have some other explanation. You may want
to elaborate a bit more on what and how you are measuring. Also,
you may want to measure more details to better understand where
the time is spent.

In any case, both values are larger than 100ms, and this suggests
that you aren't measuring local latency. Likely, most of the
latency is network-related, and switching servers won't help much
here. In particular, if you are measuring latency on real users
within your web application, the difference might be due to no
established connection and/or no cached SSL session to a separate
static site.

--
Maxim Dounin
http://mdounin.ru/

Re: Allow the use of the "env" directive in contexts other than "main"

Hello!

On Thu, Jan 04, 2018 at 01:22:15AM +0000, German Jaber wrote:

> Is there a reason why the "env" directive is only allowed inside the "main"
> context?

The "env" directive controls which environment variables will be
available in the nginx worker processes. Environment variables
apply to the whole worker process and are not differentiated based on
what the worker process is doing at a particular time. As such,
the directive has to be specified at the global level.
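For reference, a minimal sketch of the directive at the global (main) level, with variable names taken from the examples in the nginx documentation:

```nginx
# nginx.conf - main context, outside http {} / server {} / location {}

# Inherit a variable from the environment nginx was started in:
env MALLOC_OPTIONS;

# Or set an explicit value for the worker processes:
env PERL5LIB=/data/site/modules;
```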

> It would simplify many of my Docker deployments if I could do away with sed
> and envsubst and use the "env" directive directly.
>
> If the maintainers approve the inclusion of this feature in Nginx, I would
> like to offer my time to this project by implementing this functionality.

Sorry, from your description it is not clear what you are trying
to do and how the "env" directive can help here. You may want to
elaborate on this.

--
Maxim Dounin
http://mdounin.ru/

Re: Is an IP address valid for server_name?

Thank you, sir. That's the answer I was hoping for.

Ray

Re: [PATCH] Chunked filter: check if ctx is null

Hello, thank you for response!

On Thu, 2018-01-04 at 03:42 +0300, Maxim Dounin wrote:
> Hello!
>
> On Wed, Jan 03, 2018 at 07:53:00PM +0100, Jan Prachař wrote:
>
> To catch cases when a duplicate response is returned after the
> header was already sent we have a dedicated check in the
> ngx_http_send_header() function, see this commit for details:
>
> http://hg.nginx.org/nginx/rev/03ff14058272
>
> Trying to bypass this check is a bad idea. The same applies to
> conditionally sending headers based on the r->headers_sent flag,
> as it will mean that the check will be bypassed. This is what the
> lua module seems to be doing, and it should be fixed to avoid
> doing this.

The lua module checks r->header_sent in the function
ngx_http_lua_send_header_if_needed(), which is called on every
output. See
https://github.com/openresty/lua-nginx-module/commit/235875b5c6afd4961181fa9ead9c167dc865e737

So you suggest that they should have their own flag (like they already
had - ctx->headers_sent) and always call the ngx_http_send_header()
function if this flag is not set?

> The other part of the equation is how and why error_page is called
> after the header as already sent. If you know a scenario where
> error_page can be called with the header already sent, you may
> want focus on reproducing and fixing this. Normally this is
> expected to result in the "header already sent" alerts produced by
> the check discussed above.

On the nginx side it is caused by this:

http://hg.nginx.org/nginx/rev/ad3f342f14ba046c

Writing to the client returns an error, so u->pipe->downstream_error
is 1; then reading from the upstream fails, so u->pipe->upstream_error
is 1. ngx_http_upstream_finalize_request() is then called with
rc=NGX_HTTP_BAD_GATEWAY, where thanks to the above commit the
ngx_http_finalize_request() function is also called with
rc=NGX_HTTP_BAD_GATEWAY, and thus error_page is invoked (if it is
configured for the 502 status).

I think that the ngx_http_finalize_request() function should be called
with rc=NGX_ERROR in this case.

--
Jan Prachar

Re: nginx latency/performance issues

Hello! Thanks for your response. I'm using ApacheBench for the tests, simply hitting the same static javascript file (no php). I was thinking that since I'm using the same location, and as long as the tests are repeatable, remote testing would be OK and give more realistic results.

Both apache and nginx are on the same machine, just using different IP aliases so I can connect to both via port 443. After more detective work, I think I've narrowed the problem down to the aliasing. The first alias, which nginx is on, is slower than the second, where apache is. When I placed nginx on the same IP but a different port than apache, the speed was much better. There must be some IP address priority, as the nginx server is new and has zero traffic on it. This is probably out of scope, but if you have any other thoughts or advice, let me know.

Thanks again for your help on this.

-Ameer

----- Original Message -----
From: Maxim Dounin <mdounin@mdounin.ru>
To: nginx@nginx.org
Subject: Re: nginx latency/performance issues
Date: 1/4/18 11:52:44 AM

>Depending on what and how you've measured, the difference between
>~128ms and ~142ms might be either non-significant, or explainable by
>different settings, or have some other explanation. You may want
>to elaborate a bit more on what and how you are measuring. Also,
>you may want to measure more details to better understand where
>the time is spent.
>
>In any case, both values are larger than 100ms, and this suggests
>that you aren't measuring local latency. Likely, most of the
>latency is network-related, and switching servers won't help much
>here. In particular, if you are measuring latency on real users
>within your web application, the difference might be due to no
>established connection and/or no cached SSL session to a separate
>static site.
>
>--
>Maxim Dounin
>http://mdounin.ru/

Re: how do I run multiple https web sites on a single IP address

I have now fixed the problem; not sure it's the best way, but at least it's working.

In the two HTTPS server blocks you need to put all the certificate information (ssl_certificate, etc.) in both domain2.com and www.domain2.com.

I had only put the certificate information in www.domain2.com, while domain2.com only did the redirect, as in the example config in my initial thread.

I tried to simplify the config to use as few server blocks as possible, but it seems I made things worse because of that.

IPv6 does not work correctly with nginx

Hello,
I'm trying to finish configuring nginx for IPv6.

listen [::]:443 ssl; doesn't work, but listen [fc00:1:1::13]:443 ssl; works.

I need to explicitly specify the IPv6 address, whereas for IPv4 I don't need to.

# nginx -V
nginx version: nginx/1.12.1

server {
    listen 443 ssl;
#    listen [::]:443 ssl;
    listen [fc00:1:1::13]:443 ssl;
    server_name test.mydomain.org;
    root /var/www/html;

# ifconfig vmx0
vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        inet6 fc00:1:1::13 prefixlen 64

Does someone know why?

Thank you



Re: 504 gateway timeouts

> The version that is on the ubuntu servers was 1.10.xx. I just updated
> it to
>
> nginx version: nginx/1.13.8
>
> And I am still having the same issue.
>
> How do I "Try to flush out some output early on so that nginx will
> know that Tomcat is alive."
>
> The nginx and tomcat connection is working fine for all
> requests/responses that take less than 60 seconds.

Maybe you can flush out the HTTP response headers quickly.

Re: nginx latency/performance issues

Hello!

On Thu, Jan 04, 2018 at 06:12:38PM -0500, eFX News Development wrote:

> Hello! Thanks for your response. I'm using apache bench for the
> tests, simply hitting the same static javascript file (no php).
> I was thinking since I'm using the same location and as long as
> the tests are repeatable, using remote testing would be ok and
> give more realistic results.

With ApacheBench on an SSL host you are likely testing your SSL
configuration. Or, rather, the performance of handshakes with the most
secure ciphersuite available in your OpenSSL library. Try
looking into the detailed timing information provided by ApacheBench;
it should show that most of the time is spent in the "Connect"
phase - which in fact means that the time is spent in SSL
handshakes. Also try using keepalive connections with "ab -k" to
see a dramatic difference.

> Both apache and nginx are on the same machine, just using
> different IP aliases so I can connect to both via port 443.
> After more detective work, I think I've narrowed the problem to
> the aliasing. The first alias which nginx is on is slower than
> the second where apache is. When I placed nginx on the same ip,
> but different port than apache, the speed is much better. There
> must be some ip address priority as the nginx server is new and
> has zero traffic on it. This is probably out of scope, but if
> you have any other thoughts or advice, let me know.

First of all, check if your results are statistically significant.
That is, take a look at the "+/-sd" column in the ApacheBench
detailed output. Alternatively, run both tests at least three
times and compare resulting numbers using ministat(1).

--
Maxim Dounin
http://mdounin.ru/