TCP has an "urgent data" feature that might have been used for this kind of thing, used for Ctrl-C in telnet, etc. It can be used to bypass any pending send buffer and received by the server ahead of any unread data.
Just googling it now and TCP urgent data seems to be a mess.
Reading the original RFC 793 it's clear that the intention was never for this to be OOB data, but to inform the receiver that they should consume as much data as possible and minimally process it / buffer it locally until they have read up to the urgent data.
However, the way it was historically implemented as OOB data seems to be significantly more useful - you could send flow control messaging to be processed immediately even if you knew the receiving side had a lot of data to consume before it'd see an inline message.
It seems nowadays the advice is just to not use urgent data at all.
Unfortunately there can be many buffers between you and the server which "urgent data" doesn't skip, by design. (There were also lots of implementation problems.)
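For reference, this is what urgent data looks like in the plain BSD sockets API - a minimal sketch, assuming sockfd is an already-connected TCP socket:

    #include <sys/socket.h>

    /* Sender: the byte is flagged urgent via the TCP URG pointer. */
    void send_urgent(int sockfd)
    {
        send(sockfd, "!", 1, MSG_OOB);
    }

    /* Receiver: without SO_OOBINLINE, the urgent byte is kept out of
       the normal stream and has to be fetched explicitly. */
    char recv_urgent(int sockfd)
    {
        char c = 0;
        recv(sockfd, &c, 1, MSG_OOB);
        return c;
    }

In practice only a single urgent byte is usable, which is part of why the feature ended up so limited.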
A good write-up explaining how assumptions of network and security design have changed so much over the years. Also, you have to give credit nowadays for not overly sensationalizing 'heebie-jeebies level 6'. I certainly would have continued reusing a connection I assumed was TLS after a cancel, so I was vulnerable to the DoS; but equally, if the next statement was canceled, I would switch to a new connection - no harm, no foul.
In general I love Postgres. There are two problems with PostgreSQL in my book: the protocol (proto3) and no great way to directly query using a different language.
The protocol has no direct in-protocol cancellation like TDS has. TDS does this by being a framed protocol, so it can cancel queries at the application-protocol level. The Postgres protocol, meanwhile, has two variants (text and binary), which can cause fragmentation, and at the query and protocol level it only supports positional parameters, no named parameters.
Once a query is on the server, there's no support for directly acting in a language mode. I don't want to go into SQL mode and create a PL/pgSQL proc, I just want to send PL/pgSQL directly. Can't (really) do that well. Directly returning multiple result sets (e.g. for a matrix: separate rows, columns, and fields) or related queries in a single round trip is technically possible, but hard to do. So frustrating.
I think I can understand why this wasn’t addressed for so long: in the vast majority of cases if your db is exposed on a network level to untrusted sources, then you probably have far bigger problems?
That's the kind of hand-wave that turns into a CVE later. Network exposure is one thing, but weird signal handling in local tooling can still become a cross-session bug or a nasty security footgun on shared infra, terminals, or jump boxes.
If you have shared psql sessions in tmux or on a jump box, one bad cancel can trash someone else's work. 'Just firewall it' is how you end up owned by the intern with shell access.
It's also very tricky to do given the current architecture on the server side, where a single-threaded process handles the connection and uses (for all intents and purposes) synchronous I/O.
In such a scenario, listening (and acting) on cancellation requests on the same connection becomes very hard, so fixing this goes way beyond "just".
From the title I was hoping this would be about hacks on the server application side, like how it aborts and clears the memory for a running query.
Still an interesting read. Just wondering: why can't the TCP connection of the query be used to send a cancellation request? Why does it have to be out of band?
Because Postgres is a very old codebase and was written in a style that assumes there are no threads, and thus there's nothing to listen for a cancellation packet whilst work is getting done. A lot of UNIXes had very poor support for threads for a long time and so this kind of multi-process architecture is common in old codebases.
The TCP URG bit came out of this kind of problem. It triggers a SIGURG signal on UNIX which interrupts the process. Oracle works this way.
These days you'd implement cancellation by having one thread handle inbound messages and another thread do the actual work with shared memory to implement a cooperative cancellation mechanic.
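A minimal sketch of that two-thread shape, assuming a hypothetical row-at-a-time executor; the 'X' cancel marker and function names are illustrative, not Postgres internals:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <unistd.h>

    static atomic_bool cancel_requested;

    /* Reader thread: owns the socket, so it sees a cancel message even
       while a query is executing on the worker thread. */
    static void *reader_thread(void *arg)
    {
        int sockfd = *(int *)arg;
        char c;
        while (read(sockfd, &c, 1) == 1) {
            if (c == 'X')   /* pretend 'X' marks a cancel message */
                atomic_store(&cancel_requested, true);
            /* a real server would parse framed protocol messages here */
        }
        return NULL;
    }

    /* Worker: checks the shared flag at safe points between rows. */
    static void run_query(void)
    {
        while (!atomic_load(&cancel_requested)) {
            /* ... produce the next row ... */
        }
        /* unwind cleanly: release locks, report cancellation */
    }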
But we should in general have sympathy here. Very little software and very few protocols properly implement any form of cancellation. HTTP hardly does for normal requests, and even if it did, how many web servers abort request processing if the connection drops?
From RFC 6093 (https://datatracker.ietf.org/doc/html/rfc6093): “it is strongly recommended that applications do not employ urgent indications. Nevertheless, urgent indications are still retained as a mandatory part of the TCP protocol to support the few legacy applications that employ them. However, it is expected that even these applications will have difficulties in environments with middleboxes.”
> These days you'd implement cancellation by having one thread handle inbound messages and another thread do the actual work with shared memory to implement a cooperative cancellation mechanic.
Doesn't necessarily need a thread per connection. Could be on an epoll/kqueue/io-uring.
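A sketch of that shape on Linux, assuming the queries themselves run elsewhere (e.g. on a worker pool), so the loop stays free to notice cancels:

    #include <sys/epoll.h>
    #include <sys/socket.h>

    void serve(int listenfd)
    {
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listenfd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

        struct epoll_event out[64];
        for (;;) {
            int n = epoll_wait(epfd, out, 64, -1);
            for (int i = 0; i < n; i++) {
                int fd = out[i].data.fd;
                if (fd == listenfd) {
                    int connfd = accept(listenfd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN,
                                               .data.fd = connfd };
                    epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &cev);
                } else {
                    /* parse the next message; if it's a cancel, flag
                       the worker running this connection's query */
                }
            }
        }
    }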
The query would need to periodically re-check a cancellation flag, which has costs and would come with a delay if it's particularly busy.
It isn't really easy to do. A client may send tons of data over the connection, probably data which is calculated by the client as the client's buffer empties. If the server drains those buffers all the time just to check for a cancellation, it has to buffer everything it reads itself, which may have quite bad consequences.
I don't know much about postgres, but as I understand it, it's a pretty standard server application. Read a request from the client, work on the request, send the result, read the next request.
Changing that to poll for a cancellation while working is a big change. Also, the server would need to buffer any pipelined requests while looking for a cancellation request. A second connection is not without wrinkles, but it avoids a lot of network complexity.
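A sketch of what that mid-query check could look like, using MSG_PEEK so pipelined requests stay queued in the kernel buffer; the 'X' cancel marker is a made-up stand-in, not the real protocol:

    #include <poll.h>
    #include <stdbool.h>
    #include <sys/socket.h>

    /* Called between rows: costs a syscall or two per check, and only
       works if a cancel can be recognized from the front of the
       buffered data without consuming anything else. */
    bool cancel_pending(int sockfd)
    {
        struct pollfd pfd = { .fd = sockfd, .events = POLLIN };
        if (poll(&pfd, 1, 0) <= 0)        /* timeout 0: never block */
            return false;
        char tag;
        if (recv(sockfd, &tag, 1, MSG_PEEK) != 1)
            return false;
        return tag == 'X';                /* hypothetical cancel marker */
    }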
It's basically got a thread per connection; while it's working on a query, that thread isn't listening for incoming traffic on the network socket any more.
Because then the cancellation request would get queued behind any other data that's in flight from the client to the server. In the worst case the TCP buffers are full, and the client cannot even send the request until the server processes some of the existing data that's in-flight.
What strikes me most about this is how it illustrates the tension between backward compatibility and security in long-lived systems. The cancel key approach made total sense in the context of early Unix networking assumptions, but those assumptions have quietly eroded over decades. The fact that the cancel token is only 32 bits of entropy and sent in cleartext means it was never really designed for adversarial environments -- it was a convenience feature that became load-bearing infrastructure. I wonder if the Postgres community will eventually move toward a multiplexed protocol layer (similar to what HTTP/2 did for HTTP) rather than trying to bolt security onto the existing out-of-band mechanism.
Doesn't it also make sense in the context of modern networking assumptions?
I've never had to connect to Postgres in an adversarial environment. I've been at work or at home, and I connected to Postgres instances owned by me or my employer. If I tried to connect to my work instance from a coffee shop, the first thing I'd do would be to log in to a VPN. That's your multiplexed protocol layer right there: the security happens at the network layer and your cancel happens at the application layer.
This is a different situation from websites. I connect to websites owned by third parties all the time, and I want my communication there to be encrypted at the application layer.
Zero-trust security, which is becoming increasingly common, is based on removing the internal/external network dichotomy entirely. Everything should be assumed to be reachable from the open internet (so SSO, OIDC everywhere).
It makes me think of IPsec. IPsec was originally intended to be used sort of the same way we use TLS today, but application-independent: when making a connection to a random remote machine, the OS would see if it could spool up an IPsec SA. No changes to a user program would be needed. But while they were faffing about trying to overcomplicate IPsec, SSL came along and stole its lunch money.
This application of IPsec was never used and barely implemented. Today getting it to make ad-hoc connections is a tricky, untested edge case, and IPsec was relegated to dedicated tunnels, where everyone hates it because it is too tricky to get the parameters aligned.
There is definitely a case to be made that it is right and proper for secure connections to be handled in the application (TLS). But sometimes I like to think of how it could have been, where all applications get a secure connection whether they want one or not.
As a useless dangling side thought: an additional piece would be needed for ad-hoc IPsec that as far as I know was never implemented, a way to notify the OS that this connection must be encrypted (a socket option? SO_ENC?). That missing piece is most of the case for encrypted connections being the duty of the application.
I'm fairly certain that this cancellation approach has nothing to do with UNIX networking assumptions, and everything to do with the connection/process model of PostgreSQL.
Creating a connection => starting a process and passing the accepted socket to it (so an in-band cancel would have to go directly to the backend executing the query), plus a single-threaded backend process that isn't reading from the socket while executing a query, so it would get the cancellation request only after the query finishes (or even after all pipelined queries before it finish, which is even worse).
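A sketch of that classic fork-per-connection pattern (not actual Postgres source); once the socket is handed to the child, no other process can see in-band data on it:

    #include <sys/socket.h>
    #include <unistd.h>

    extern void handle_connection(int fd);  /* reads queries, executes them */

    void postmaster_loop(int listenfd)
    {
        for (;;) {
            int connfd = accept(listenfd, NULL, NULL);
            if (fork() == 0) {           /* child: the backend process */
                close(listenfd);
                handle_connection(connfd);  /* while executing a query it
                                               isn't reading the socket */
                _exit(0);
            }
            close(connfd);               /* parent never reads connfd again */
        }
    }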
Offtopic, but it's interesting how large a discrepancy there is between the length of your comment and how much time I'd have to spend explaining background info to a non-programmer to get them to understand why this is funny.
TLS is not async-signal-safe. But having a dedicated thread whose only responsibility is to send cancel tokens via a TLS connection, woken up by a POSIX semaphore, seems like a small, self-contained change that doesn't require any major refactoring.
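A sketch of that idea; sem_post() is on the async-signal-safe list, while send_cancel_over_tls() is a hypothetical stand-in for opening a TLS connection and sending the cancel token:

    #include <pthread.h>
    #include <semaphore.h>
    #include <signal.h>

    static sem_t cancel_sem;

    extern void send_cancel_over_tls(void);  /* hypothetical: full TLS
                                                handshake + cancel message */

    static void on_sigint(int sig)
    {
        (void)sig;
        sem_post(&cancel_sem);   /* the only async-signal-safe step */
    }

    static void *cancel_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            while (sem_wait(&cancel_sem) != 0)
                ;                /* retry on EINTR */
            send_cancel_over_tls();
        }
        return NULL;
    }

    void setup_cancel_handling(void)
    {
        pthread_t tid;
        sem_init(&cancel_sem, 0, 0);
        pthread_create(&tid, NULL, cancel_thread, NULL);
        signal(SIGINT, on_sigint);
    }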
This doesn't really have anything to do with async-signal-safety. You could perfectly well capture the Ctrl+C in psql, stick a notifier byte in a pipe (or use signalfd to begin with) and handle it as a synchronous event in a main loop. You'd still need to establish a new connection purely to bypass buffered data (or use TCP URG, but that seems generally a poor idea).
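For completeness, the self-pipe trick being described, sketched (call setup_sigint_pipe() once at startup; the handler only does an async-signal-safe write):

    #include <poll.h>
    #include <signal.h>
    #include <unistd.h>

    static int pipefd[2];   /* [0] = read end, [1] = write end */

    static void on_sigint(int sig)
    {
        (void)sig;
        char b = 'c';
        write(pipefd[1], &b, 1);   /* write() is async-signal-safe */
    }

    void setup_sigint_pipe(void)
    {
        pipe(pipefd);
        signal(SIGINT, on_sigint);
    }

    void main_loop(int sockfd)
    {
        struct pollfd fds[2] = {
            { .fd = sockfd,    .events = POLLIN },
            { .fd = pipefd[0], .events = POLLIN },
        };
        for (;;) {
            if (poll(fds, 2, -1) <= 0)
                continue;              /* e.g. interrupted by the signal */
            if (fds[1].revents & POLLIN) {
                char b;
                read(pipefd[0], &b, 1);
                /* handle Ctrl+C synchronously, e.g. open a second
                   connection and send the cancel request */
            }
            if (fds[0].revents & POLLIN) {
                /* normal protocol traffic */
            }
        }
    }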
I believe the suggestion is to have a TLS endpoint in the server, which demultiplexes the incoming CancelRequest and signals to the corresponding worker process via shared memory
> It seems nowadays the advice is just to not use urgent data at all.
The downside is that sometimes connections are proxied in ways that lose these unusual packets. Looking at you, Docker...
> how many web servers abort request processing if the connection drops?
I don't think I have ever seen a published web service whose error log wasn't full of broken-pipe messages. So, AFAIK, all of them.
https://learn.microsoft.com/en-us/openspecs/windows_protocol...
At the receiver, a signal handler must be used; it will be invoked with SIGURG when urgent data arrives.
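Concretely, the receiver has to opt in, since the kernel needs to know where to deliver the signal - a sketch:

    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t urgent_seen;

    static void on_urg(int sig) { (void)sig; urgent_seen = 1; }

    void enable_urgent_notification(int connfd)
    {
        signal(SIGURG, on_urg);
        fcntl(connfd, F_SETOWN, getpid());  /* route SIGURG for this
                                               socket to this process */
    }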
> I've never had to connect to Postgres in an adversarial environment.
Heroku's Postgres database service still exposes itself on the public internet.
For the message that carries the cancel key, see BackendKeyData: https://www.postgresql.org/docs/current/protocol-message-for...
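The cancel message itself is tiny; a sketch of building it from the PID and secret key that BackendKeyData delivered earlier, sent on a brand-new connection:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <unistd.h>

    /* All four fields are big-endian Int32s. */
    void send_cancel_request(int newfd, uint32_t be_pid, uint32_t be_key)
    {
        uint32_t msg[4] = {
            htonl(16),        /* message length, including itself */
            htonl(80877102),  /* cancel request code (1234 << 16 | 5678) */
            htonl(be_pid),    /* backend PID from BackendKeyData */
            htonl(be_key),    /* 32-bit secret key from BackendKeyData */
        };
        write(newfd, msg, sizeof msg);
        /* no reply is defined; the server just closes the connection */
    }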
My keyboard has -.