First of all, keep in mind that work_mem is allocated per operation, so total memory use can grow excessive pretty quickly. In general, if you are not having trouble with slow sorts, I would leave work_mem alone until you actually need to change it.
Looking at your query plans, one thing that strikes me is that the buffer hits differ considerably between the two plans, and that even the sequential scans are slower. I suspect the issue has to do with read-ahead caching and having less space for it. What this means is that you are biasing memory toward re-use of indexes and against reading tables from disk.
My understanding is that PostgreSQL will look to shared buffers for a page before reading it from disk, because it doesn't really know whether the OS cache will contain that page. Because the pages then stay in shared buffers, and because that cache is slower than the OS cache, this changes which sorts of queries are fast and which are slow. In fact, reading the plans, aside from work_mem issues, it looks like all of your query information comes from cache; the question is which cache.
work_mem: how much memory can be allocated for a sort or related join operation. This is per operation, not per statement or per backend, so a single complex query can use many times this amount of memory. It isn't clear that you are hitting this limit, but it is worth noting and being aware of. If you increase this too far, you lose memory that might otherwise be available for the read cache and the shared buffers.
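To illustrate (with a hypothetical big_table; the value and the query are just placeholders), you can raise work_mem for a single session rather than server-wide, and EXPLAIN ANALYZE will tell you whether a sort fit in memory or spilled to disk:

```sql
-- Raise work_mem for this session only; other backends keep the default.
SET work_mem = '64MB';

-- The Sort node reports its method: "quicksort" means it fit in work_mem,
-- "external merge  Disk: ..." means it spilled to disk.
EXPLAIN (ANALYZE)
SELECT *
FROM big_table
ORDER BY created_at;

-- Return to the configured default for the rest of the session.
RESET work_mem;
```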
shared_buffers: how much memory to allocate to the actual PostgreSQL page queue. Ideally the interesting set of your database will stay in memory, cached here and in the read buffers. What this setting does is ensure that the most frequently used information across all backends gets cached and not flushed to disk. On Linux this cache is significantly slower than the OS disk cache, but it offers guarantees that the OS disk cache does not, and it is transparent to PostgreSQL. This is pretty clearly where your problem is.
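If you want to see what is actually occupying shared buffers, the contrib extension pg_buffercache can show you. A quick sketch (the LIMIT is arbitrary):

```sql
-- pg_buffercache ships with PostgreSQL as a contrib extension.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count how many 8kB buffers each relation in the current database
-- occupies in shared_buffers right now.
SELECT c.relname,
       count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```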
So what happens when we have a request is that we check the shared buffers first, since PostgreSQL has deep knowledge of that cache, and look for the pages there. If they are not present we ask the OS to read them from the file, and if the OS has cached the result it returns the cached copy (this is faster than the shared buffers, but PostgreSQL can't tell whether the page was cached or came from disk, and disk is much slower, so PostgreSQL typically will not take that chance). Keep in mind this affects random vs sequential page access as well, so you may get better performance with lower shared_buffers settings.
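You can see this split directly in the plans. A sketch against a hypothetical orders table:

```sql
-- With BUFFERS, each plan node reports:
--   shared hit = pages found in PostgreSQL's shared buffers
--   read       = pages requested from the OS, which may have come from
--                the OS page cache or from disk; PostgreSQL can't tell
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM orders
WHERE status = 'shipped';
```

Comparing the hit/read ratios between your two plans should confirm which cache each one is being served from.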
My gut sense is that you probably get better, or at least more consistent, performance in high-concurrency environments with larger shared_buffers settings. Also keep in mind that PostgreSQL grabs this memory and holds it, so if you have other things running on the system, the read buffers will also hold files read by other processes. It's a very large and complex topic. Larger shared_buffers settings provide better guarantees of performance but may deliver less performance in some cases.
I think your best option is to use your favorite OS scheduling tool to run ANALYZE <table> explicitly and pre-emptively at a time of your choosing, say at 3 a.m.
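For example, a crontab entry such as `0 3 * * * psql -d mydb -f nightly_maintenance.sql` (the database name and file path are placeholders) could run a script along these lines:

```sql
-- nightly_maintenance.sql (hypothetical): refresh statistics off-peak so
-- the activity counters are reset before the busy period begins.
ANALYZE orders;
ANALYZE order_items;

-- VACUUM (ANALYZE) additionally reclaims dead rows while it is at it.
VACUUM (ANALYZE) customers;
```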
PostgreSQL's autovacuum and autoanalyze are activity-counter driven. This has the unfortunate effect that the moment the activity counters cross their thresholds is most likely to be exactly when the database is most active, which is when you least want those maintenance tasks to run. By running the task pre-emptively during an off-peak time you reset the activity counters, so the auto versions will rarely find work to do. But if something does go wrong, they are still there to save your bacon.
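To check whether this is working, pg_stat_user_tables shows the accumulated churn and when statistics were last refreshed; a sketch:

```sql
-- n_mod_since_analyze counts rows changed since the last ANALYZE;
-- the two timestamps distinguish manual from automatic refreshes.
SELECT relname,
       n_mod_since_analyze,
       last_analyze,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_mod_since_analyze DESC;
```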
Will the query planner eventually suffer from not having auto analyze running on a table that never gets updates or deletes?
This can't be answered with the information you provide. If that table gets nothing but single-row lookups via a unique index, for example, out-of-date statistics are unlikely to be a problem. If it gets complicated analytics queries, it is likely to be much more of a problem.
Best Answer
In my opinion changing the default of jit to 'on' in v12 was a mistake, so based on that opinion turning it back off again seems pretty natural to me. I would have objected to the change if I had been paying more attention at the time (which doesn't mean I would have prevailed, of course). The change seems to harm at least as many people as it helps. And the people who were looking forward to the feature could surely be counted on to go change the default.
I think you have far too much respect for the defaults in general. They are basically set for the minimal credible machine size. Even the settings that were well chosen in general may not be well chosen for you.
Do you have an example of such a query? Generally, by the time a query is long-running enough that jit would make sense, it is also touching enough data that it is no longer CPU-bound, so whether it uses jit or not is relatively unimportant (or it is missing an index, and fixing that is far more important than jit). This is not universally true, of course, and if you have an example you could possibly use it to target the setting of jit_above_cost. (One problem with the jit_above_* settings is that they are based on total cost estimates, which are generally dominated by IO cost estimates. Choosing to use JIT or not based on estimated IO costs doesn't make sense, but the infrastructure isn't in place to separate out those types of cost estimates.)
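If you do find such a query, a sketch of the knobs involved (the threshold value here is just a placeholder to experiment with):

```sql
-- Turn JIT off for this session only, to compare timings on one query.
SET jit = off;

-- Or leave JIT available but raise the bar so only genuinely expensive
-- plans trigger it; as noted, the comparison is against total estimated
-- cost, which is usually dominated by the IO estimate.
SET jit = on;
SET jit_above_cost = 1000000;  -- the default is 100000

-- To change the default cluster-wide (takes effect on a config reload):
ALTER SYSTEM SET jit = off;
SELECT pg_reload_conf();
```

EXPLAIN (ANALYZE) prints a JIT section when compilation actually fired, which makes before/after comparisons straightforward.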