In the first post in this category I introduced a way to perform DML inside a SELECT using the following technologies:
- Table Functions
- Autonomous Transactions
In this post I’ll add the following:
- A REF CURSOR as a parameter
- Row by row (aka slow by slow) processing of the REF CURSOR
- Raw parallel query processing to try to overcome the slow-by-slow approach
Let's get to the code, shall we…
```sql
SQL> -- Create the table
SQL> CREATE TABLE demo
  2  ( col1 NUMBER,
  3    col2 NUMBER );

Table created.

SQL> -- create 1000 demo records
SQL> BEGIN
  2    FOR counter IN 1..1000 LOOP
  3      INSERT INTO demo
  4      VALUES(counter,
  5             counter);
  6    END LOOP;
  7  END;
  8  /

PL/SQL procedure successfully completed.

SQL> COMMIT;

Commit complete.

SQL> -- create a return type for the function we will write later
SQL> CREATE OR REPLACE TYPE v_n_t AS TABLE OF NUMBER;
  2  /

Type created.

SQL> -- now for the function
SQL> /*
SQL> Key points
SQL> 1) The parameter is a REF CURSOR - how cool is that?
SQL> 2) We loop through the records one by one aka SLOW BY SLOW...
SQL> 3) The function deletes the rows it queries
SQL> 4) This is madness
SQL> 5) Yes, it is madness but I'm leading up to something
SQL> 6) When will you make your point
SQL> 7) 3 or 4 more posts; I promise
SQL> 8) OK. This better be good...
SQL> */
SQL> CREATE OR REPLACE FUNCTION delete_demo ( p_curs SYS_REFCURSOR )
  2    RETURN v_n_t AS
  3    PRAGMA AUTONOMOUS_TRANSACTION;
  4    v_ret_val v_n_t := v_n_t();
  5    v_number  NUMBER;
  6  BEGIN
  7    LOOP
  8      FETCH p_curs INTO v_number;
  9      EXIT WHEN p_curs%NOTFOUND;
 10      v_ret_val.EXTEND;
 11      DELETE demo
 12       WHERE col1 = v_number;
 13      v_ret_val(v_ret_val.LAST) := SQL%ROWCOUNT;
 14    END LOOP;
 15    COMMIT;
 16    RETURN(v_ret_val);
 17  END;
 18  /

Function created.

SQL> -- One way to overcome the slow by slow processing is with MORE power!
SQL> -- This can be provided by Parallel Query because we are using a SELECT here...
SQL> ALTER SESSION FORCE PARALLEL QUERY;

Session altered.

SQL> SELECT COUNT(*)
  2    FROM TABLE(delete_demo(CURSOR(SELECT col1
  3                                    FROM demo)))
  4  /

  COUNT(*)
----------
      1000

SQL> ALTER SESSION DISABLE PARALLEL QUERY;

Session altered.
```
```sql
SQL> SELECT COUNT(*)
  2    FROM demo
  3  /

  COUNT(*)
----------
         0

SQL> -- Check the parallel query server usage
SQL> -- Please note the number of messages sent and received
SQL> -- because that becomes important in later posts
SQL> SELECT statistic,
  2         value
  3    FROM v$pq_sysstat
  4   WHERE value > 0
  5   ORDER BY value DESC;

STATISTIC           VALUE
------------------ ------
Local Msgs Sent     22000
Local Msgs Recv'd   22000
Server Sessions      4000
DFO Trees            1000
Queries Initiated    1000
Servers Idle           12
Servers Started        12
Servers Highwater       4
```
At this point you may/should be thinking this is a lot of work just to delete records from a table. And you would be right.
It would be far simpler to just set the parallel degree for the table and run the DELETE directly against it.
And if you want to process the rows deleted just add a RETURNING clause to the DELETE statement.
And looping through the rows one by one will take forever!
Those are all valid arguments. And this series of posts is not intended to debunk any of them.
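For reference, the simpler alternative those arguments describe can be sketched in a few lines: a direct DELETE (optionally parallel), with a RETURNING clause to capture the deleted keys in one round trip. This is an illustrative sketch, not code from the original session:

```sql
-- Sketch of the "just delete it directly" alternative.
-- Reuses the v_n_t nested table type created above.
DECLARE
  v_deleted v_n_t;
BEGIN
  DELETE demo
  RETURNING col1 BULK COLLECT INTO v_deleted;  -- capture deleted keys
  DBMS_OUTPUT.PUT_LINE(v_deleted.COUNT || ' rows deleted');
  COMMIT;
END;
/
```

One statement, no row-by-row fetching, and the deleted values are still available for any follow-up processing.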
It’s intended to show Table Functions as an option for processing large amounts of data, especially when extra processing such as running totals must be added to the workload.
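As a taste of that "extra processing" use case, here is a sketch of a table function that streams back a running total of the values it fetches. The function name `running_total` is mine, not from the original post, and it reuses the `v_n_t` type created earlier:

```sql
-- Illustrative sketch: accumulate a running total while
-- fetching from the REF CURSOR, returning one total per row.
CREATE OR REPLACE FUNCTION running_total ( p_curs SYS_REFCURSOR )
  RETURN v_n_t AS
  v_ret_val v_n_t := v_n_t();
  v_number  NUMBER;
  v_total   NUMBER := 0;
BEGIN
  LOOP
    FETCH p_curs INTO v_number;
    EXIT WHEN p_curs%NOTFOUND;
    v_total := v_total + v_number;
    v_ret_val.EXTEND;
    v_ret_val(v_ret_val.LAST) := v_total;
  END LOOP;
  RETURN(v_ret_val);
END;
/

SELECT column_value
  FROM TABLE(running_total(CURSOR(SELECT col1
                                    FROM demo
                                   ORDER BY col1)));
```

The same REF CURSOR pattern used for the DELETE above carries over unchanged; only the per-row work inside the loop differs.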
Over the next few articles I’ll expand on these topics. The next post introduces Parallelism for Table Functions.
Thanks for reading!