Use a bind variable anywhere that you might actually change the value that is passed in. It probably doesn't make sense, for example, to use a bind variable in the EXISTS clause in your third example, because it seems extremely unlikely that anyone would ever want to pass in a different constant for the EXISTS clause.
In your second example, it probably does make sense to use a bind variable rather than a hard-coded status, assuming that the application(s) are likely to want to run this query for different id and status values. On the other hand, if the code would only ever pass in a status of 2 for this query, it probably doesn't make sense to use a bind variable. Assuming that status is not evenly distributed, you'd be better off giving the optimizer the information that you're always going to use a status of 2 rather than trying to rely on bind variable peeking to give the optimizer that information.
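Since the original queries aren't shown, the table and column names below are placeholders; the point is just the contrast between binding a value that genuinely varies and hard-coding one that doesn't:

```sql
-- If status genuinely varies between calls, bind both values
SELECT *
  FROM orders
 WHERE order_id = :id
   AND status   = :status;

-- If status is always 2, hard-code it; with a histogram on the
-- status column, the optimizer then knows exactly how selective
-- the predicate is, with no reliance on bind variable peeking
SELECT *
  FROM orders
 WHERE order_id = :id
   AND status   = 2;
```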
If the value is never going to change (i.e. if the status in your second query will always be 2), you can execute the query as many times as you'd like and it will only appear once in v$sql or v$sqlarea. In my case, I'll execute the query 10 times
SQL> select /* jc_literal_test */ count(*)
2 from emp
3 where ename = 'KING';
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
SQL> /
COUNT(*)
----------
1
And from v$sql or v$sqlarea, you'll see only one row for the query that has the hard-coded literal. That row will show that the query was executed 10 times
SQL> ed
Wrote file afiedt.buf
1* select sql_text, executions, loaded_versions, parse_calls from v$sqlarea where sql_text like '%jc_literal_test%'
SQL> /
SQL_TEXT EXECUTIONS LOADED_VERSIONS PARSE_CALLS
---------------------------------------- ---------- --------------- -----------
select * from v$sql where sql_text like 1 1 1
'%jc_literal_test%'
select sql_text, executions, loaded_vers 5 1 5
ions, parse_calls from v$sql where sql_t
ext like '%jc_literal_test%'
select sql_text, executions, loaded_vers 1 1 1
ions, parse_calls from v$sqlarea where s
ql_text like '%jc_literal_test%'
select /* jc_literal_test */ count(*) 10 1 10
from emp where ename = 'KING'
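For contrast (this part is hypothetical, using a different comment marker than the demo above), if the literal itself changed between executions, each distinct literal would be hard parsed into its own entry:

```sql
-- Hypothetical: varying the literal means each variation gets
-- its own parent cursor in the shared pool
SELECT /* jc_literal_test2 */ COUNT(*) FROM emp WHERE ename = 'KING';
SELECT /* jc_literal_test2 */ COUNT(*) FROM emp WHERE ename = 'SCOTT';

-- v$sqlarea would now show two rows, one per literal,
-- each with EXECUTIONS = 1
SELECT sql_text, executions
  FROM v$sqlarea
 WHERE sql_text LIKE '%jc_literal_test2%';
```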
It depends on why you are creating the temporary tables in MySQL.
Frequently, people who create temporary tables in other databases are doing so to work around limitations that don't exist in Oracle, where readers don't block writers and writers don't block readers. In other databases, you commonly copy data from a permanent table to a temporary table so that your process doesn't block some other process that needs the same data. Since Oracle provides multi-version read consistency, however, this isn't necessary (or beneficial) in Oracle: your process can work with the data sitting in the real tables without worrying that it is going to block someone else. If that's the situation you're in, the proper response is simply to remove the temporary tables and process the data from the permanent tables.
Assuming that you really need a temporary copy of the data, you can create global temporary tables. You would create these tables outside of your code just like a permanent table and use them just like a permanent table inside your code. The global temporary table ensures that each session can only see the data that session has inserted. The only difference is that you're not dropping and recreating the structure of the table inside your procedure.
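A sketch of what that might look like (the table and column names are illustrative): the table is created once, outside your procedures, and each session automatically sees only its own rows.

```sql
-- Created once, like any permanent table; never dropped and
-- recreated inside the procedure that uses it.
-- ON COMMIT DELETE ROWS empties the table at commit; use
-- ON COMMIT PRESERVE ROWS to keep rows for the life of the session.
CREATE GLOBAL TEMPORARY TABLE emp_work (
  empno NUMBER,
  ename VARCHAR2(10),
  sal   NUMBER
) ON COMMIT DELETE ROWS;
```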
Another alternative would be to pull the data into a PL/SQL collection that you work with rather than using a temporary table. PL/SQL collections are stored in the server's PGA (one of Oracle's memory structures), so you generally want to limit the size of the collection, particularly if there is a lot of data to process or there could be many sessions processing data simultaneously. You can do something like
DECLARE
  TYPE emp_tbl IS TABLE OF emp%rowtype;
  l_emps emp_tbl;

  CURSOR emp_cur
      IS SELECT *
           FROM emp;
BEGIN
  OPEN emp_cur;
  LOOP
    -- Fetch 10 rows at a time from the cursor into the collection.
    -- You'd realistically want a larger limit, something between 100 and 1000 generally
    FETCH emp_cur
      BULK COLLECT INTO l_emps
      LIMIT 10;
    EXIT WHEN l_emps.COUNT = 0;

    -- An example of manipulating the collection in memory
    FOR i IN 1 .. l_emps.COUNT
    LOOP
      l_emps(i).sal := l_emps(i).sal * 2;
    END LOOP;

    -- And an example of using the collection to update a table
    FORALL i IN 1 .. l_emps.COUNT
      UPDATE emp
         SET sal = l_emps(i).sal
       WHERE empno = l_emps(i).empno;
  END LOOP;
  CLOSE emp_cur;
END;
Of course, in the example above, it would be far simpler (and more efficient) to simply issue an UPDATE statement that would double every employee's salary.
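That single-statement equivalent, shown here just for comparison with the PL/SQL block above:

```sql
-- Doubles every employee's salary in one statement, letting the
-- database do the row-by-row work internally
UPDATE emp
   SET sal = sal * 2;
```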
That's relatively easy. Using the UTL_FILE package in Oracle, you can create a file on the server's filesystem and write the output of any PL/SQL statement to it. You can also use it to read data from a file. http://psoug.org/reference/utl_file.html
You might also want to read up a little on the DIRECTORY object type, since that's a prerequisite for using the UTL_FILE package: http://psoug.org/reference/directories.html
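A sketch of that prerequisite setup (the directory name, filesystem path, and grantee below are all illustrative; creating a directory object requires the CREATE ANY DIRECTORY privilege):

```sql
-- Create a directory object pointing at a path on the database server.
-- The path must exist on the server and be writable by the Oracle
-- software owner's OS account.
CREATE OR REPLACE DIRECTORY data_dir AS '/u01/app/oracle/output';

-- Grant the schema that will run the UTL_FILE code access to it
GRANT READ, WRITE ON DIRECTORY data_dir TO app_user;
```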
As an example, consider a procedure that writes ASCII data to an existing Oracle directory object. The user running the code needs read/write access to that directory object. The p_data parameter is a CLOB that would (in this case) contain all of the ASCII data being written to the p_dir directory under the p_file name (which gets created at runtime). You might need to do some CLOB conversions on your data, depending on the data types in the query results you'd use to populate the CLOB.
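A minimal sketch of such a procedure might look like the following; the procedure name, the chunked DBMS_LOB.READ loop, and the 32767-byte buffer size are my assumptions rather than anything from the original:

```sql
CREATE OR REPLACE PROCEDURE write_clob_to_file (
  p_dir  IN VARCHAR2,  -- name of an existing Oracle directory object
  p_file IN VARCHAR2,  -- file name; created at runtime
  p_data IN CLOB       -- the ASCII data to write
)
AS
  l_file   UTL_FILE.FILE_TYPE;
  l_pos    PLS_INTEGER := 1;
  l_amount PLS_INTEGER;
  l_buffer VARCHAR2(32767);
  l_length PLS_INTEGER := DBMS_LOB.GETLENGTH(p_data);
BEGIN
  l_file := UTL_FILE.FOPEN(p_dir, p_file, 'w', 32767);

  -- Read the CLOB in chunks and write each chunk to the file.
  -- DBMS_LOB.READ updates l_amount with the number of characters
  -- actually read, which may be less than requested on the last pass.
  WHILE l_pos <= l_length
  LOOP
    l_amount := 32767;
    DBMS_LOB.READ(p_data, l_amount, l_pos, l_buffer);
    UTL_FILE.PUT(l_file, l_buffer);
    l_pos := l_pos + l_amount;
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
EXCEPTION
  WHEN OTHERS THEN
    -- Don't leak the file handle if the write fails partway through
    IF UTL_FILE.IS_OPEN(l_file) THEN
      UTL_FILE.FCLOSE(l_file);
    END IF;
    RAISE;
END write_clob_to_file;
```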