I agree the Oracle docs can be a bit bland; however, they are in general very complete. Once you learn how to locate the piece of information you're looking for, they are often the best resource you can find online.
In your case I would suggest you take a look at the PL/SQL Packages and Types Reference book, where you will find the complete documentation of all standard packages. The UTL_FILE chapter contains a collection of examples, including how to set up your directories to enable access.
Once you have set up your directory object, you can create a file in PL/SQL with something like the following:
SQL> CREATE DIRECTORY tmp_dir AS '/tmp/';

Directory created.

SQL> DECLARE
  2     l_file utl_file.file_type;
  3  BEGIN
  4     l_file := utl_file.fopen('tmp_dir', 'my_file.csv', 'W');
  5     FOR cc IN (SELECT * FROM all_objects WHERE ROWNUM <= 5) LOOP
  6        utl_file.put_line(l_file, cc.object_name || ',' || cc.object_type);
  7     END LOOP;
  8     utl_file.fclose(l_file);
  9  END;
 10  /

PL/SQL procedure successfully completed.
You can then read the file back, letting the NO_DATA_FOUND exception signal end-of-file:

SQL> DECLARE
  2     l_file utl_file.file_type;
  3     l_line VARCHAR2(100);
  4  BEGIN
  5     l_file := utl_file.fopen('tmp_dir', 'my_file.csv', 'R');
  6     LOOP
  7        utl_file.get_line(l_file, l_line);
  8        dbms_output.put_line(l_line);
  9     END LOOP;
 10  EXCEPTION
 11     WHEN no_data_found THEN -- EOF
 12        utl_file.fclose(l_file);
 13  END;
 14  /
C_OBJ#,CLUSTER
I_OBJ#,INDEX
TAB$,TABLE
CLU$,TABLE
C_TS#,CLUSTER
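Note that if the file needs to be accessed from a schema other than the one that created the directory object, that user must be granted privileges on it. A minimal sketch (the user name SCOTT is just a placeholder for your application schema):

-- Allow another user to read and write files through this directory object.
GRANT READ, WRITE ON DIRECTORY tmp_dir TO scott;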
"How bad is it?" depends on the degree to which you are suffering now or could suffer with increased workload in the future.
One major pain point with plan cache pollution is too many single-use plans bloating your plan cache, leading to inefficient cache usage.
Another is a high rate of compilations per second: in an environment with a heavy workload and a lot of activity, there is a real cost to compiling the same statements over and over.
You can see the rate of compilations in perfmon (the SQL Compilations/sec counter under the SQL Statistics object). On the server, this can look like CPU pressure; to your applications, it can look like increased query duration, since each execution waits on a needless compile.
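If you prefer a query over the perfmon GUI, the same counter is exposed through a DMV. A sketch (the instance-name portion of object_name varies, hence the LIKE; the value is cumulative, so sample it twice and take the difference to get a per-second rate):

-- Current raw value of the SQL Compilations/sec counter.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name = 'SQL Compilations/sec';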
You can see the memory impact of the bloat on the plan cache with this query, borrowed from Glenn Berry's diagnostic scripts. How big is your SQLCP (ad hoc/prepared) plan cache clerk?
SELECT TOP(10) [type] AS [Memory Clerk Type],
       SUM(pages_kb)/1024 AS [Memory Usage (MB)]
FROM sys.dm_os_memory_clerks WITH (NOLOCK)
GROUP BY [type]
ORDER BY SUM(pages_kb) DESC OPTION (RECOMPILE);
The query used in the question to identify the number of plans helps as well.
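If you don't have the question's query handy, here is a sketch that counts single-use ad hoc plans and the memory they hold (usecounts = 1 is the usual definition of "single use"):

-- Count and total size of ad hoc plans that have only ever been used once.
SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
  AND usecounts = 1;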
Is This Ever a Good Thing?
There are some cases where this could be good, but they are rare. Basically, if you were suffering from parameter sniffing gone bad (in a nutshell: when the data distribution varies widely from execution to execution depending on the parameters, the plan compiled for one set of parameters may be excellent for that execution but poor for others), compiling fresh each time sidesteps the stale plan. My guess is that any parameter sniffing trouble you'd avoid is likely less costly than the implications of poor plan reuse.
What Can You Do About It?
Optimize For Ad Hoc Workloads can certainly help with the memory implications, since only a small stub of the plan is stored in cache on first execution; the full plan isn't stored until the statement is executed a second time.
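Enabling it is a server-level setting. A sketch (it is an advanced option, so advanced options must be made visible first; test before enabling in production):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;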
Forced Parameterization could also help. It makes SQL Server parameterize literals in ad hoc statements automatically, which can address both the cache bloat and the cost of repeated compiles, though it changes plan behavior for the whole database, so test it first.
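It is a database-level setting; a sketch (YourDatabase is a placeholder):

-- Parameterize literals in ad hoc queries automatically
-- for this database (the default is SIMPLE).
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;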
Fix The Queries
Ideally, you shouldn't have to resort to these options; instead, be more strict in your database development, encourage plan reuse, consider stored procedures for all of their benefits, and try to head off the problem that way. Forced parameterization and optimize for ad hoc workloads are good mitigations, but the best solution is always aimed at the root cause.
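For application code that must stay ad hoc, explicitly parameterizing with sp_executesql gets you plan reuse without any server-wide settings. A sketch (the table and parameter names are made up):

-- One cached plan is reused for every @CustomerId value,
-- instead of one plan per distinct literal in the statement text.
EXEC sp_executesql
     N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId',
     N'@CustomerId int',
     @CustomerId = 42;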
There is an excellent resource here that talks about some of the dangers of plan cache pollution and some things you can do about it; I'd recommend a read. It was written for SQL Server 2012, but the concepts and solutions still apply.
I am fairly confident the same would be true for DDL changes, but did you mean DML? The long and short of it is that Oracle is not going to allow an inconsistent result.