Fixes a regression from 2.4.59 (r1913907).
For a reverse proxy setup with a worker (enablereuse=on) and a
forward/CONNECT ProxyRemote to reach it, an open connection/tunnel
to/through the remote proxy for the same origin server (and using the
same proxy auth) should be reusable. Avoid closing them like r1913534
did.
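A hypothetical configuration where this applies (the host names are
illustrative only):

  # Reverse proxy worker with connection reuse enabled
  ProxyPass "/app" "https://origin.example.com/app" enablereuse=On
  # Requests to the origin go through a forward proxy (CONNECT tunnel for https)
  ProxyRemote "https" "http://forward-proxy.example.net:3128"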
* modules/proxy/proxy_util.c:
Rename the struct to remote_connect_info since it's only used for
connecting through remote CONNECT proxies. Axe the use_http_connect
field, which was always true.
* modules/proxy/proxy_util.c(ap_proxy_connection_reusable):
Remote CONNECT (forward) proxy connections can be reused if the auth
and origin server infos are the same, so conn->forward != NULL is not
a condition to prevent reusability.
* modules/proxy/proxy_util.c(ap_proxy_determine_connection):
Fix the checks around conn->forward reuse and connection cleanup if
that's not possible.
Submitted by: jfclere, ylavic
GH: closes #531
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1925743 13f79535-47bb-0310-9956-ffa450edef68
Multipath TCP (MPTCP), standardized in RFC8684 [1],
is a TCP extension that enables a TCP connection to
use different paths.
Multipath TCP has been used for several use cases.
On smartphones, MPTCP enables seamless handovers between
cellular and Wi-Fi networks while preserving established
connections. This use-case is what pushed Apple to use
MPTCP since 2013 in multiple applications [2]. On dual-stack
hosts, Multipath TCP enables the TCP connection to
automatically use the best performing path, either IPv4
or IPv6. If one path fails, MPTCP automatically uses
the other path.
To benefit from MPTCP, both the client and the server
have to support it. Multipath TCP is a backward-compatible
TCP extension that is enabled by default on recent
Linux distributions (Debian, Ubuntu, Redhat, ...). Multipath
TCP is included in the Linux kernel since version 5.6 [3].
To use it on Linux, an application must explicitly enable
it when creating the socket. No need to change anything
else in the application.
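For illustration only, here is a minimal standalone sketch (not httpd code) of
what enabling MPTCP at socket creation looks like on Linux, falling back to
plain TCP on kernels without support:

  #include <sys/socket.h>
  #include <netinet/in.h>

  #ifndef IPPROTO_MPTCP
  #define IPPROTO_MPTCP 262   /* Linux-specific; missing from older headers */
  #endif

  static int open_stream_socket(void)
  {
      /* Ask for MPTCP explicitly; everything else is regular TCP code. */
      int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
      if (fd < 0) {
          /* Kernel without MPTCP support: fall back to plain TCP. */
          fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
      }
      return fd;
  }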
Adding the possibility to create MPTCP sockets would thus
be a really fine addition to httpd, by allowing clients
to make use of their different interfaces.
This patch introduces the possibility to connect to backend
servers using MPTCP. Note however that these changes are
only available on Linux, as IPPROTO_MPTCP is Linux specific
for the time being.
For proxies, we can connect using MPTCP by passing the
"multipathtcp" parameter:
ProxyPass "/example" "http://backend.example.com" multipathtcp=On
We then store this information in the worker and create sockets
appropriately according to this value.
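As an illustration of the connect side (the helper and parameter names below
are hypothetical, not the actual mod_proxy code, and the sketch assumes
apr_socket_create() passes the protocol number through to the kernel's
socket() call):

  #include <sys/socket.h>
  #include "apr_network_io.h"

  #ifndef IPPROTO_MPTCP
  #define IPPROTO_MPTCP 262   /* Linux-only */
  #endif

  static apr_status_t connect_socket_for_worker(apr_socket_t **sock,
                                                apr_sockaddr_t *addr,
                                                int worker_multipathtcp,
                                                apr_pool_t *p)
  {
      int proto = worker_multipathtcp ? IPPROTO_MPTCP : APR_PROTO_TCP;
      apr_status_t rv = apr_socket_create(sock, addr->family, SOCK_STREAM,
                                          proto, p);
      if (rv != APR_SUCCESS && worker_multipathtcp) {
          /* Kernel (or APR) refused MPTCP: retry with plain TCP. */
          rv = apr_socket_create(sock, addr->family, SOCK_STREAM,
                                 APR_PROTO_TCP, p);
      }
      return rv;
  }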
Link: https://www.rfc-editor.org/rfc/rfc8684.html [1]
Link: https://www.tessares.net/apples-mptcp-story-so-far/ [2]
Link: https://www.mptcp.dev [3]
Add Multipath TCP (MPTCP) support (Core)
Multipath TCP (MPTCP), standardized in RFC8684 [1],
is a TCP extension that enables a TCP connection to
use different paths.
Multipath TCP has been used for several use cases.
On smartphones, MPTCP enables seamless handovers between
cellular and Wi-Fi networks while preserving established
connections. This use-case is what pushed Apple to use
MPTCP since 2013 in multiple applications [2]. On dual-stack
hosts, Multipath TCP enables the TCP connection to
automatically use the best performing path, either IPv4
or IPv6. If one path fails, MPTCP automatically uses
the other path.
To benefit from MPTCP, both the client and the server
have to support it. Multipath TCP is a backward-compatible
TCP extension that is enabled by default on recent
Linux distributions (Debian, Ubuntu, Redhat, ...). Multipath
TCP is included in the Linux kernel since version 5.6 [3].
To use it on Linux, an application must explicitly enable
it when creating the socket. No need to change anything
else in the application.
Adding the possibility to create MPTCP sockets would thus
be a really fine addition to httpd, by allowing clients
to make use of their different interfaces.
This patch introduces the possibility to listen with MPTCP
sockets. Note however that these changes are only available
on Linux, as IPPROTO_MPTCP is Linux specific for the time being.
To do so, we extended the Listen directive to include
a "multipathtcp" option, allowing MPTCP sockets to be created
instead of regular TCP ones:
Listen 80 options=multipathtcp
We then store this information in flags for the listen directive
and create sockets appropriately according to this value.
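For illustration only (not httpd's listener code), the listen side differs from
a plain TCP listener only in the protocol requested at socket creation:

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <string.h>
  #include <unistd.h>

  #ifndef IPPROTO_MPTCP
  #define IPPROTO_MPTCP 262
  #endif

  static int open_mptcp_listener(unsigned short port)
  {
      int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
      if (fd < 0)
          return -1;  /* kernel without MPTCP support */

      struct sockaddr_in sin;
      memset(&sin, 0, sizeof(sin));
      sin.sin_family = AF_INET;
      sin.sin_addr.s_addr = htonl(INADDR_ANY);
      sin.sin_port = htons(port);

      /* From bind() onwards nothing differs from a regular TCP listener. */
      if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
          listen(fd, SOMAXCONN) < 0) {
          close(fd);
          return -1;
      }
      return fd;
  }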
Link: https://www.rfc-editor.org/rfc/rfc8684.html [1]
Link: https://www.tessares.net/apples-mptcp-story-so-far/ [2]
Link: https://www.mptcp.dev [3]
Submitted by: Aperence <anthony.doeraene hotmail.com>
GitHub: closes #476
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1920586 13f79535-47bb-0310-9956-ffa450edef68
With "ProxyPassMatch ^/([^/]+)/(.*)$ https://$1/$2", ap_proxy_get_worker_ex()
should not consider the length of scheme://host part of the given URL because
of the globbing match on the host part.
Fix it by setting worker->s->is_host_matchable when creating a worker with host
substitution and avoiding the min_match check in worker_matches() in this case.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1919617 13f79535-47bb-0310-9956-ffa450edef68
Using "unix:/udspath|scheme:" or "unix:/udspath|scheme://" for a ProxyPass URL
does not work currently, while it works for SetHandler "proxy:unix:...".
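For example, a mapping of the following form (the socket path here is
illustrative) is expected to work like its SetHandler counterpart once fixed:

  ProxyPass "/app" "unix:/var/run/app.sock|http:"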
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1919533 13f79535-47bb-0310-9956-ffa450edef68
in <Location> (incomplete fix in 2.4.62). PR 69160.
When SetHandler "unix:..." is used in a <Location "/path"> block, the path
gets appended (including $DOCUMENT_ROOT somehow) to r->filename, hence the
current checks in fixup_uds_filename() that add "localhost" when it is missing
don't work. Fix them.
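An illustrative configuration of the kind this addresses (paths hypothetical):

  <Location "/app">
      SetHandler "proxy:unix:/var/run/app.sock|http://localhost/"
  </Location>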
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1919532 13f79535-47bb-0310-9956-ffa450edef68
ap_proxy_canon_netloc() called from canon_handler hooks modifies its given
url in place, hence &r->filename[6] passed from ap_proxy_canon_url().
This is not an issue if every canon_handler hook succeeds (or declines)
since r->filename is usually completely rewritten finally, but on failure
it gets truncated.
Avoid this by passing a copy of r->filename from the start; the proxy *url
and r->filename don't need to point to the same data.
* proxy/proxy_util.c(ap_proxy_canon_url):
Pass a copy of r->filename to the canon_handler hooks.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1919023 13f79535-47bb-0310-9956-ffa450edef68
"balancer:" URLs set via SetHandler, also allowing for "unix:"
sockets with BalancerMember(s). PR 69168.
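An illustrative configuration this enables (the balancer name and socket paths
are hypothetical):

  <Proxy "balancer://myapp">
      BalancerMember "unix:/var/run/app1.sock|http://localhost"
      BalancerMember "unix:/var/run/app2.sock|http://localhost"
  </Proxy>
  <Location "/app">
      SetHandler "proxy:balancer://myapp"
  </Location>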
* modules/proxy/proxy_util.h, modules/proxy/proxy_util.c:
Move proxy_interpolate() from mod_proxy.c to ap_proxy_interpolate(),
exported locally only (non public).
Move proxy_fixup() from mod_proxy.c to ap_proxy_canon_url(), exported
locally only too (non public).
Rollback ap_proxy_fixup_uds_filename() to a local fixup_uds_filename()
usable from proxy_util.c only. The public function will be removed in
a following commit.
* modules/proxy/mod_proxy.h:
Note that ap_proxy_fixup_uds_filename() is deprecated.
* modules/proxy/mod_proxy.c:
Just use ap_proxy_canon_url() from proxy_fixup() and proxy_handler()
for SetHandler URLs.
* modules/proxy/mod_proxy_balancer.c:
Do not canonicalize the path from proxy_balancer_canon() anymore but
rather from balancer_fixup() where the balancer URL is rewritten to
the BalancerMember URL.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1919022 13f79535-47bb-0310-9956-ffa450edef68
The hostname part of the URL is not mandated for UDS, though the canon_handler
hooks will require it, so add "localhost" if it's missing (it won't be used
anyway for an AF_UNIX socket).
This can trigger with SetHandler "unix:" URLs which are now also fixed up.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1919015 13f79535-47bb-0310-9956-ffa450edef68
* modules/proxy/proxy_util.c:
Export ap_proxy_fixup_uds_filename() from fix_uds_filename.
Call it from ap_proxy_pre_request() even for rewritten balancer workers.
* modules/proxy/mod_proxy.h:
Declare ap_proxy_fixup_uds_filename()
* modules/proxy/mod_proxy.c:
Fixup UDS filename from r->handler in proxy_handler().
* include/ap_mmn.h:
Bump MMN minor for ap_proxy_fixup_uds_filename()
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1918626 13f79535-47bb-0310-9956-ffa450edef68
apr_socket_connect() on Unixes does copy the passed-in *addr, so limit the
lifetime workaround to Windows and OS/2 only (which don't).
* modules/proxy/proxy_util.c(ap_proxy_determine_address):
#ifdef the relevant code for WIN32/OS2 only, and improve comment.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1918438 13f79535-47bb-0310-9956-ffa450edef68
ap_proxy_connect_backend() will use the first conn->addr[->next] that works, so
the currently alive address can be any of them.
* modules/proxy/proxy_util.c(ap_proxy_determine_address):
Loop for all conn->addr[->next] to determine if addr_alive.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1918429 13f79535-47bb-0310-9956-ffa450edef68
* modules/proxy/proxy_util.c(address_cleanup):
Rename to conn_cleanup() since it also closes the socket, and run
socket_cleanup() first to avoid dangling conn->sock->remote_addr.
* modules/proxy/proxy_util.c(ap_proxy_determine_address):
Compare the new address with the old one and keep the socket alive
if it did not change.
Follow up to r1918410.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1918412 13f79535-47bb-0310-9956-ffa450edef68
Even if they are not part of the API (not in mod_proxy.h) and hence require no
MMN bump, {get,set,increment_,decrement_}busy_count() being AP_PROXY_DECLARE()d
could name-collide with a third-party module's functions.
Rename them using the ap_proxy_ prefix, with an underscore after the verb for
all of them too (for consistency), that is:
ap_proxy_{get,set,increment,decrement}_busy_count()
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1913930 13f79535-47bb-0310-9956-ffa450edef68
Use the correct fwd_pool for allocating the forward_info when the connection
is reusable, as spotted by Rüdiger.
Do not reuse conn->forward if the ->proxy_auth changed.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1913534 13f79535-47bb-0310-9956-ffa450edef68
Factorize duplicated code in the balancer and non-balancer cases by adding
a new worker_matches() helper.
No functional change intended.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1912462 13f79535-47bb-0310-9956-ffa450edef68
The latter requires a pool and returns a non-constant string although it may
return worker shared data.
By computing the worker "UDS" name at init time we can return a constant name
in any case with no need for a pool; that's the new ap_proxy_worker_get_name().
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1912461 13f79535-47bb-0310-9956-ffa450edef68
proxy_connection_create() and ap_proxy_connect_backend() sometimes close the
connection on failure, sometimes not. Always close it.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1912460 13f79535-47bb-0310-9956-ffa450edef68
Define a new proxy_address struct holding the current/latest sockaddr in use
by each proxy worker and conn. Since backend addresses can be updated when
their TTL expires and while connections are being processed, each address is
refcounted and freed only when the last worker (or conn) using it grabs the
new one.
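A rough sketch of the refcounting idea (the names are hypothetical; the real
struct proxy_address is opaque and private to proxy_util.c):

  #include "apr_atomic.h"
  #include "apr_network_io.h"

  typedef struct {
      apr_sockaddr_t *addr;        /* resolved backend address chain */
      volatile apr_uint32_t refs;  /* the worker plus every conn holding it */
      apr_pool_t *pool;            /* subpool owning this address */
  } example_address;

  static void address_get(example_address *a)
  {
      apr_atomic_inc32(&a->refs);
  }

  static void address_put(example_address *a)
  {
      /* apr_atomic_dec32() returns zero once the counter drops to zero,
       * i.e. when the last worker/conn has switched to a newer address. */
      if (apr_atomic_dec32(&a->refs) == 0) {
          apr_pool_destroy(a->pool);
      }
  }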
The lifetime of the addresses is handled at a single place by the new
ap_proxy_determine_address() function. It guarantees to bind the current/latest
backend address to the passed in conn (or do nothing if it's up to date already).
The function is called indirectly by ap_proxy_determine_connection() for the
proxy modules that use it, or directly by mod_proxy_ftp and mod_proxy_hcheck.
It is also called eventually by ap_proxy_connect_backend() when connect()ing to all
the current addresses fails, to check (PROXY_DETERMINE_ADDRESS_CHECK) if some
new addrs are available.
This commit is also a rework of the lifetime of conn->addr, conn->hostname
and conn->forward, using the conn->uds_pool and conn->fwd_pool for the cases
where the backend is connected through a UDS socket and a remote CONNECT proxy
respectively.
* include/ap_mmn.h:
Minor bump for new function/fields.
* modules/proxy/mod_proxy.h (struct proxy_address,
ap_proxy_determine_address()):
Declare ap_proxy_determine_address() and opaque struct proxy_address, and add
new fields to structs proxy_conn_rec/proxy_worker_shared/proxy_worker.
* modules/proxy/mod_proxy.c (set_worker_param):
Parse/set the new worker->address_ttl parameter.
* modules/proxy/proxy_util.c (proxy_util_register_hooks(),
ap_proxy_initialize_worker(),
ap_proxy_connection_reusable(),
ap_proxyerror(), proxyerror_core(),
init_conn_pool(), make_conn_subpool(),
connection_make(), connection_cleanup(),
connection_constructor()):
Initialize *proxy_start_time in proxy_util_register_hooks() as the epoch
from which expiration times are relative (i.e. seconds stored in a uint32_t
for atomic changes).
Make sure worker->s->is_address_reusable and worker->s->disablereuse are
consistent in ap_proxy_initialize_worker(), thus no need to check for both
in ap_proxy_connection_reusable().
New proxyerror_core() helper taking an apr_status_t to log, wrapped by
ap_proxyerror().
New make_conn_subpool() to create worker->cp->{pool,dns} with their own
allocator.
New connection_make() helper to factorize code in connection_cleanup() and
connection_constructor().
* modules/proxy/proxy_util.c (proxy_address_inc(), proxy_address_dec(),
proxy_address_cleanup(), proxy_address_set_expired(),
worker_address_get(), worker_address_set(),
worker_address_resolve(), proxy_addrs_equal(),
ap_proxy_determine_address(),
ap_proxy_determine_connection(),
ap_proxy_connect_backend()):
Implement ap_proxy_determine_address() using the above helpers for atomic changes,
and call it from ap_proxy_determine_connection() and ap_proxy_connect_backend().
* modules/proxy/mod_proxy_ftp.c (proxy_ftp_handler):
Use ap_proxy_determine_address() and use the returned backend->addr.
* modules/proxy/mod_proxy_hcheck.c (hc_determine_connection, hc_get_backend,
hc_init_worker, hc_watchdog_callback):
Use ap_proxy_determine_address() in hc_determine_connection() and call the
latter from hc_get_backend(), replace hc_init_worker() by hc_init_baton()
which now calls hc_get_hcworker() and hc_get_backend() to resolve the first
address at init time.
* modules/proxy/mod_proxy_http.c (proxy_http_handler):
Use backend->addr and ->hostname instead of worker->cp->addr and
worker->s->hostname_ex respectively.
* modules/proxy/mod_proxy_ajp.c (ap_proxy_ajp_request):
Use backend->addr and ->hostname instead of worker->cp->addr and
worker->s->hostname_ex respectively.
Closes#367
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1912459 13f79535-47bb-0310-9956-ffa450edef68
described in RFC 8441. A new directive 'H2WebSockets on|off' has been
added. The feature is not enabled by default.
As also discussed in the manual, this feature should work for setups
using "ProxyPass backend-url upgrade=websocket" without further changes.
Special server modules for WebSockets will most likely have to be adapted,
as the handling of IO events is different with HTTP/2.
HTTP/2 WebSockets are supported on platforms with native pipes. This
excludes Windows.
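A minimal illustrative configuration (the backend URL is hypothetical):

  H2WebSockets on
  ProxyPass "/ws" "http://backend.example.com/ws" upgrade=websocket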
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1910507 13f79535-47bb-0310-9956-ffa450edef68
we can have decoded '%'s in the URI that were sent to us as %25 in the original
URL. Don't error out in this case but just fall through and have them encoded
back to %25 when forwarding to the backend.
PR: 66580
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1909464 13f79535-47bb-0310-9956-ffa450edef68
might be caused by a change on the DNS side. Try another DNS lookup in this case
and, if this results in a successful connection, trigger a refresh of the
worker lookup cache.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1909451 13f79535-47bb-0310-9956-ffa450edef68
we need to avoid a race where worker->cp->addr switches to NULL after we
checked it to be non-NULL but before we assign it to conn->addr in an else
branch, which would leave conn->addr NULL and likely cause a segfault later.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1909400 13f79535-47bb-0310-9956-ffa450edef68
In case that AllowEncodedSlashes is set to NoDecode do not double encode
encoded slashes in the URL sent by the reverse proxy to the backend.
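For instance (the backend name is illustrative), with:

  AllowEncodedSlashes NoDecode
  ProxyPass "/app" "http://backend.example.com/app"

a request for /app/foo%2Fbar is now forwarded with the %2F preserved as-is
instead of being double encoded to %252F.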
* include/ap_mmn.h: Document the addition of ap_proxy_canonenc_ex to the API.
* modules/proxy/mod_proxy.h: Declare ap_proxy_canonenc_ex and define flag
values.
* modules/proxy/proxy_util.c: Implement ap_proxy_canonenc_ex by modifying
ap_proxy_canonenc accordingly and reimplement ap_proxy_canonenc to
use ap_proxy_canonenc_ex with the appropriate flag.
* modules/http2/mod_proxy_http2.c, modules/proxy/mod_proxy_*.c: Set the
correct flag based on the AllowEncodedSlashes configuration and use
ap_proxy_canonenc_ex instead of ap_proxy_canonenc.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1908341 13f79535-47bb-0310-9956-ffa450edef68
some dollar substitution (backreference) happens in the hostname
or port part of the URL.
Address or connection reuse can't work when the authority part of the URL is
dynamic (single origin server[:port] handled/assumed in the reslist). Detect
such cases and unset worker->s->is_address_reusable to disable reuse regardless
of enablereuse/disablereuse.
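An illustrative case (the pattern and domain are hypothetical):

  ProxyPassMatch "^/([^/]+)/(.*)$" "http://$1.internal.example.com/$2" enablereuse=On

Here the backend host depends on the request, so address/connection reuse is
now disabled automatically despite enablereuse=On.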
* modules/proxy/proxy_util.c(ap_proxy_define_worker_ex):
Look for $n substitution in the hostname[:port] when parsing the URL and,
if present, set worker->s->is_address_reusable=0 / worker->s->disablereuse=1.
* modules/proxy/proxy_util.c(ap_proxy_initialize_worker):
Don't overwrite worker->s->is_address_reusable from enablereuse/disablereuse
parameters, and set both consistently.
* docs/manual/mod/mod_proxy.xml:
Add ProxyPassMatch compatibility note about key=value parameters handled with
$n substitutions since 2.4.47.
Document the specificities of enablereuse/disablereuse w.r.t. $n substitutions
in the different parts of the URL.
Axe the note about unparsable URLs when the $n substitution happens in the
port; this has been addressed in 2.4.47 too (and works now).
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1904513 13f79535-47bb-0310-9956-ffa450edef68
If proxy_run_fixups() sets a Host header there will be two of them sent to the
origin server.
Instead, let the hooks know about the Host by setting it in the r->headers_in
passed to proxy_run_fixups(), and use the actual value afterwards.
Note: if proxy_run_fixups() unsets the Host we'll keep ours.
Suggested by: rpluem
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1901485 13f79535-47bb-0310-9956-ffa450edef68
In 2.4.x, the copy of r->headers_in is left in r->headers_in for the whole
function, while the original r->headers_in is restored at the end. This
is simpler and avoids the r->headers_in <=> saved_headers_in dance when
calling a function that modifies r->headers_in in place.
Align with 2.4.x, no functional change.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1901460 13f79535-47bb-0310-9956-ffa450edef68
Let proxy_http_handler() tell ap_proxy_create_hdrbrgd() whether to add or
preserve the Expect header, through the "proxy-100-continue" note.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1901446 13f79535-47bb-0310-9956-ffa450edef68
Stop returning 417 when mod_proxy has to forward an HTTP/1.1 request with both
"Expect: 100-continue" and "force-proxy-request-1.0" set; mod_proxy can instead
handle the 100-continue by itself before forwarding the request, like in the
"Proxy100Continue Off" case.
Note that this does not change the behaviour of httpd receiving an HTTP/1.0
request with an Expect header, ap_check_request_header() will still correctly
return 417 in this case.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1901420 13f79535-47bb-0310-9956-ffa450edef68
- adds new meta bucket types REQUEST, RESPONSE and HEADERS to the API.
- adds a new method for setting standard response headers Date and Server
- adds helper methods for formatting parts of HTTP/1.x, like headers and
end chunks for use in non-core parts of the server, e.g. mod_proxy
- splits the HTTP_IN filter into a "generic HTTP" and "specific HTTP/1.x"
filter. The latter is named HTTP1_BODY_IN.
- Uses HTTP1_BODY_IN only for requests with HTTP version <= 1.1
- Removes the chunked input simulation from mod_http2
- adds body_indeterminate flag to request_rec that indicates that a request
body may be present and needs to be read/discarded. This replaces logic
that assumes that, without Content-Length and Transfer-Encoding, no request
body can exist.
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1899547 13f79535-47bb-0310-9956-ffa450edef68