The alternative to using database links would be to create different connections in your SSIS package for each database and to use those connections to pull data from the remote database to the SSIS server and then push that data into the target database. It is highly unlikely, however, that this would lead to performance improvements. Most likely, it would mean that your data is spending much more time going across additional network links.
Materialized views would not be a replacement for database links. Materialized views would use database links. They would replace some or all of your SSIS package.
Why do you believe that the database links are the source of your performance problems? Is your SSIS package causing Oracle to do set-based operations to pull data from the remote server to the local server? Or is SSIS doing row-based operations where individual rows are extracted from the remote database and loaded into the local database? Row-based operations are inherently much slower than set-based operations, particularly when data is being sent over a network. SSIS generally makes it very easy to build pretty-looking row-by-row ETL processes. That combination seems far more likely to explain your performance problems than database links.
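To make the distinction concrete, here is a sketch of the two approaches (the table and database-link names are hypothetical):

```sql
-- Set-based: a single INSERT ... SELECT pulls the whole result set
-- across the database link in one statement. Oracle can fetch rows
-- in bulk, so network round trips are minimized.
INSERT INTO local_orders
SELECT order_id, customer_id, order_date
  FROM orders@remote_db
 WHERE order_date >= DATE '2024-01-01';

-- Row-based: what a row-by-row SSIS data flow effectively does.
-- Every iteration is a separate statement and a separate network
-- round trip, which is dramatically slower over a link.
BEGIN
  FOR r IN (SELECT order_id, customer_id, order_date
              FROM orders@remote_db
             WHERE order_date >= DATE '2024-01-01')
  LOOP
    INSERT INTO local_orders (order_id, customer_id, order_date)
    VALUES (r.order_id, r.customer_id, r.order_date);
  END LOOP;
END;
/
```

If your SSIS package is generating something closer to the second pattern, that is the first place to look.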
How does "taking a long time" translate to "failing"? If an SSIS job fails, that implies that an error is thrown. What is that error? Or are you saying that the job is technically successful but that it takes so long that it violates your SLA and, thus, the business considers the job to be a failure?
If you are actually doing set-based operations in your SSIS package, and your package isn't throwing an error but just takes too long to run, then you would probably need to start tuning the individual set-based operations your code performs by looking at things like query plans in the database. It is possible that you're missing indexes, that joins need to happen on the correct server, or that your load needs to be refactored so that data is brought over from the remote server only once rather than many times. We'd need to know more about exactly which statements are taking a long time, however, to be able to help you tune them.
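As one example of "making joins happen on the correct server": Oracle's DRIVING_SITE hint controls which database executes a distributed join. A sketch, with hypothetical table and link names:

```sql
-- Without the hint, Oracle may pull the entire remote orders table
-- across the link to join it locally. The DRIVING_SITE hint asks
-- Oracle to run the join on the remote database instead, so only the
-- (presumably small) local customers table and the final result set
-- travel over the network.
SELECT /*+ DRIVING_SITE(r) */
       r.order_id, r.order_date, c.customer_name
  FROM orders@remote_db r
  JOIN customers c
    ON c.customer_id = r.customer_id;
```

Comparing EXPLAIN PLAN output with and without the hint will show where each operation runs, which is usually the quickest way to see whether a slow distributed query is shipping far more data than it needs to.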
The solution that worked for me was to use Oracle Advanced Security, either through netmgr or by directly editing sqlnet.ora to add the following parameters:
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
Once this is done, the listener must be restarted. For access to a heterogeneous database, the sslmode = require parameter must be added to odbc.ini for PostgreSQL, SSL access must already be configured on the PostgreSQL server, and an entry using hostssl must exist in pg_hba.conf. I have not tried this with SQL Server yet.
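For the PostgreSQL side, the pieces described above might look like this (the DSN name, database name, host address, and authentication method are only examples):

```ini
; odbc.ini -- example DSN for the PostgreSQL ODBC driver,
; forcing SSL on the connection from the gateway
[PGDSN]
Driver     = PostgreSQL
Servername = pghost.example.com
Port       = 5432
Database   = mydb
SSLmode    = require

# pg_hba.conf -- example entry on the PostgreSQL server that accepts
# only SSL connections from the Oracle gateway host
# TYPE     DATABASE  USER  ADDRESS         METHOD
# hostssl  mydb      all   192.0.2.10/32   scram-sha-256
```

After editing pg_hba.conf, the PostgreSQL server must reload its configuration for the new rule to take effect.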
Regards
Based on this document from Oracle Support, the answer appears to be "No". A 19c client (the owner of your DB Link) is only supported/compatible with a server version of 11.2 or greater.
Conversely, a 10.2 database can only support a client as new as 12.1, so any upgrade of your 12.1 database to any newer version will break your db link.