It looks like Oracle's shared server process model is better than the dedicated server process model. In a shared server configuration, client user processes connect to a dispatcher, and a single dispatcher can support multiple client connections concurrently. Shared server can also be configured for connection pooling and session multiplexing, so all of this should bring a big performance boost. Given these advantages, are there any cases where the dedicated server process model should be used instead?
When dedicated server process model should be used instead of shared process model
oracle
Related Solutions
As it turns out, the ODA is factory-configured with active-backup bonds. I've tested this to work well without any switch-side LACP/EtherChannel configuration, and each bonded connection may be split across two switches. In my tests, no simulated failure or network reconfiguration caused more than a few hundred milliseconds of network outage.
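For reference, an active-backup bond on a Linux host (which is what the ODA runs) looks roughly like the sketch below. The file path, interface names, and addresses are illustrative, not taken from the ODA itself:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (RHEL-style sketch; names illustrative)
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=active-backup miimon=100"
IPADDR=10.0.0.10
NETMASK=255.255.255.0
ONBOOT=yes

# Each slave NIC gets MASTER=bond0 and SLAVE=yes in its own ifcfg file.
# The currently active slave and failover counters can be inspected with:
#   cat /proc/net/bonding/bond0
```

In active-backup mode only one slave carries traffic at a time, which is why no switch-side LACP configuration is needed and the two legs can land on different switches.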
This means that one can set up an isolated, redundant front network for web applications using any pair of layer-two switches, even ones that are not inherently redundant.
To avoid client connections taking the long way into the company network and back through the other switch (and thus making production dependent on that equipment), one can have a private VLAN that only lives on the two edge switches and on an EtherChannel trunk between them.
As such, only the application servers and the database appliance will exist on that virtual network segment.
I don't see a way to control which path the connections from the application servers take to the database listeners, so the link between the two switches will have to be redundant, lest it become a single point of failure. This rules out unmanaged switches without support for VLANs and either LACP or STP.
Using Cisco Catalyst 2960-series switches, I believe a combination of EtherChannel and PortFast would be the better choice for a solid, independent connection between the two. I would also enable PortFast on the ports for all the bonded connections to the ODA and application servers.
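On the 2960s, that combination might look something like the following IOS sketch; the VLAN and interface numbers are made up for illustration:

```shell
! LACP EtherChannel trunk between the two edge switches
interface range GigabitEthernet0/47 - 48
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 100
!
! Edge port toward one leg of an ODA or app-server bond
interface GigabitEthernet0/1
 switchport access vlan 100
 switchport mode access
 spanning-tree portfast
```

PortFast on the edge ports lets a bond's backup leg come up immediately after a failover instead of waiting for spanning tree to converge.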
Since the production network is isolated, one would need separate network connections for management, backup and connectivity to the rest of the company network.
Naturally, in order for this front production network to be fully self-contained, any dependencies on external resources, such as DNS or authentication services, must also be resolved. Ideally, production would be able to continue independently, regardless of any faults, ongoing maintenance, or network outages anywhere else in the data center or company network.
You'd use a hash value as an identifier for rows of data whenever that's "practical": you actually need a simple identifier for all your rows, there is no candidate (natural) key that is practical (too wide, for example), and an ordinary generated identifier (a sequence, for instance) doesn't cut it. A typical reason for that last point is needing the row identifier to be "global": if the same row is created in two distinct databases, it should get the same identifier in both.
(One non-trivial example of such a scheme is Git: each object stored in a Git repository is uniquely identified by a SHA-1 hash, which is quite handy for referring unambiguously to a given commit.)
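To make the Git example concrete, here is a small Python sketch of how Git derives a blob's identifier: a SHA-1 over a type/size header plus the content, so the same bytes always yield the same id in any repository:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes "blob <size>\0<content>" with SHA-1 to form the object id
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Because the identifier is derived purely from the content, two databases (or repositories) that store the same bytes independently will compute the same identifier, which is exactly the "global" property described above.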
If you need something like that in Oracle, you'll indeed have to build it yourself, either by adding the hash as a column to the table and indexing it, or with a function-based index.
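A minimal sketch of the first approach in Oracle SQL, assuming 12c or later for STANDARD_HASH; the table and column names are hypothetical:

```sql
-- Virtual column holding a SHA-1 hash of the business key (names hypothetical)
ALTER TABLE orders ADD (
    row_hash RAW(20)
        GENERATED ALWAYS AS (STANDARD_HASH(order_ref || '|' || customer_ref, 'SHA1'))
        VIRTUAL
);

-- Indexing the virtual column gives, in effect, a function-based index
CREATE INDEX orders_row_hash_ix ON orders (row_hash);

-- Lookups recompute the hash on the input and probe the index
SELECT *
FROM   orders
WHERE  row_hash = STANDARD_HASH(:order_ref || '|' || :customer_ref, 'SHA1');
```

Using a virtual column keeps the hash consistent with the underlying data automatically, rather than relying on application code to maintain it.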
You could go even further by creating your very own index type with application domain indexes if a plain hash isn't good enough – full-text indexing is, I believe, implemented this way.
There is something built into Oracle that is hash-based and doesn't actually need an index for fast row retrieval: hash clusters. You can theoretically retrieve the target row with as little as a single-block I/O, which normal index-plus-table lookups can't match, and even IOTs can't match unless the table is really small. Do read the When to Use Hash Clusters docs, though: hash clusters are quite peculiar, and you need a good key (one or more columns) in the first place to use them.
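A hash cluster sketch, following the pattern in the Oracle docs; the names and sizing below are illustrative and would need tuning for real data volumes:

```sql
-- Rows hash directly to a block based on the cluster key; no index is needed
CREATE CLUSTER trial_cluster (trialno NUMBER(5,0))
    SIZE 1000 HASH IS trialno HASHKEYS 100000;

CREATE TABLE trial (
    trialno   NUMBER(5,0) PRIMARY KEY,
    trialdate DATE
) CLUSTER trial_cluster (trialno);

-- An equality lookup on the cluster key can be satisfied by hashing
-- straight to the block, potentially a single-block read
SELECT * FROM trial WHERE trialno = :n;
```

The catch hinted at above: HASHKEYS and SIZE are fixed at creation time, so a poor estimate of row count or row size leads to collisions and chained blocks, which erode the single-I/O advantage.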
Best Answer
1) Tom Kyte maintains that with shared server, the only operation guaranteed to get faster is connection processing (CONNECTs). For all other statements, shared server is inherently a bit slower, unless connection pooling and multiplexing happen to provide enough benefit to offset that.
2) Dedicated is simpler: it has fewer components. Since much of an administrator's job is investigating strange behaviors, tracing, and analyzing, simpler means better.
3) Most importantly, almost everybody uses dedicated, and in my opinion the best approach with any "enterprise-level" software is to take the path more traveled. A side effect is that when contacting Oracle support with problems, you are more likely to receive the advice to "try again using dedicated server instead" than "well, why don't you try it using shared server instead". So it's one less thing to discuss with those beautiful people.
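For completeness, the two models can coexist: the instance can run shared servers while individual clients request a dedicated process. A sketch, with parameter values, host, and service names made up for illustration:

```sql
-- Instance side: enable shared server (values illustrative)
ALTER SYSTEM SET shared_servers = 5;
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';
```

```shell
# tnsnames.ora: a client alias that forces a dedicated server process
MYDB_DEDICATED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = mydb)
      (SERVER = DEDICATED)
    )
  )
```

This makes it easy to keep most sessions dedicated while experimenting with shared server for a specific, connection-heavy workload.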