Why does the code need to be more efficient? It seems unlikely that you would be dropping large numbers of tables frequently, or that dropping tables would take enough time to be worth optimizing.

What trade-offs are you willing to make to get that efficiency? And what do you mean by "efficient" in this context: wall clock time? The resources consumed on the server? Something else? The code that you have written is very straightforward and easy to follow. You could complicate things by, for example, submitting multiple jobs that each drop a subset of the tables, so that several sessions work simultaneously. But that would make the code much more complicated: you'd need to assign work to the different jobs and then coordinate their results to verify that all the tables were dropped successfully. It seems most unlikely that this is a trade-off you would really want to make, although it would reduce the wall clock time needed to drop all the tables.
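If you did want to experiment with that trade-off, a rough sketch using `dbms_scheduler` might look like the following. Note that `drop_tables(p_slot, p_slots)` is an assumed helper procedure (say, one that drops every table whose `ora_hash(table_name, p_slots - 1)` equals `p_slot`), and you would still need to poll `user_scheduler_jobs` afterwards to confirm that every slot finished:

```sql
-- Hypothetical sketch: submit one background job per subset of tables.
-- drop_tables(p_slot, p_slots) is an assumed helper, not a built-in.
begin
  for i in 0 .. 3 loop
    dbms_scheduler.create_job(
      job_name   => 'DROP_SLOT_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'begin drop_tables(' || i || ', 4); end;',
      enabled    => true,
      auto_drop  => true );
  end loop;
end;
/
```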
Well, you could have a procedure that checked for various values of `sqlcode` or `sqlerrm`, performed whatever actions you have in mind based on those, and was called from your exception handler with the filename and directory object name. However, I don't think that would be a neat solution.
If the problem is trying to reproduce the (quite old) Oracle Base approach for handling `utl_file` exceptions in multiple procedures, don't do that. It just copies every single exception from the manual and dumps them all into a monumental end-of-procedure exception handler, regardless of context.
When you open a file, you should use an exception handler to ensure that it opened OK, otherwise you can report that you couldn't open file x in folder y. For example (from www.williamrobertson.net/documents/refcursor-to-csv.shtml):
```sql
begin
    l_file :=
        utl_file.fopen
        ( filename     => p_filename
        , location     => p_directory
        , open_mode    => 'w'
        , max_linesize => 4000 );
exception
    when utl_file.invalid_operation then
        raise_application_error(-20007, 'File '''||p_filename||''' could not be opened in directory '||p_directory||' ('||k_dir_path||')', true);
    when utl_file.invalid_path then
        raise_application_error(-20008, 'File location '||p_directory||' ('||k_dir_path||') is invalid.', true);
    when utl_file.invalid_filename then
        raise_application_error(-20009, 'Filename '''||p_filename||''' is invalid.', true);
end;
```
Similarly, when you write to a file you could use exception handlers to ensure that write errors are reported clearly with appropriate details.
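For the write case, a sketch might look like this (it assumes `l_file`, `l_line`, `p_filename` and `p_directory` are already defined in the enclosing procedure, as in the open example):

```sql
-- Hypothetical write wrapper: report write failures with the file details.
begin
    utl_file.put_line(l_file, l_line);
exception
    when utl_file.write_error then
        raise_application_error(-20010,
            'Error writing to file '''||p_filename||''' in directory '||p_directory, true);
end;
```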
Or, centralise all CSV writing operations in a single package like the one I linked to, where you just pass in a refcursor and the file details and it does the rest.
Best Answer
If you've no need to store the date component, then an `interval day(0) to second` is the most practical approach. The biggest advantage comes if you need to calculate how long has elapsed between two times: the subtraction works out of the box, with no extra processing needed. For example:
You also have standard functions to extract the various components of the time (hour, minute, etc.):
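Continuing the hypothetical `shifts` example above, `extract` pulls out the individual fields:

```sql
-- Extract components of an interval (or of an interval expression).
select extract(hour   from shift_end - shift_start) as hrs
,      extract(minute from shift_end - shift_start) as mins
from   shifts;
```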