Power Of Pipelining

Previous posts in this category introduced a way to perform DML in a SELECT using the following techniques:

  • Table Functions
  • Autonomous Transactions
  • A REF CURSOR as a parameter
  • Row by row (aka slow by slow) processing of the REF CURSOR
  • Parallel processing to overcome the slow by slow processing

This post adds pipelining to the arsenal. Pipelining returns records from the function as soon as they are assembled rather than waiting for the whole function to finish.

This expedites things by allowing calling programs to start processing sooner.
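Before the full demo, here is a minimal sketch of the mechanics (a hypothetical function, not the demo code): a pipelined function declares PIPELINED, sends each row back with PIPE ROW, and ends with a bare RETURN. The collection type `number_table` is an assumption for this sketch.

```sql
-- Hypothetical example: stream the numbers 1..5 back one at a time.
-- Assumes a collection type like:
--   CREATE OR REPLACE TYPE number_table AS TABLE OF NUMBER;
CREATE OR REPLACE FUNCTION five_numbers
                  RETURN number_table
                  PIPELINED AS
BEGIN
  FOR i IN 1 .. 5 LOOP
    PIPE ROW(i); -- the caller can consume this row immediately
  END LOOP;
  RETURN; -- no value; everything was already sent by PIPE ROW
END;
/
```

The caller treats it like any other table function:

```sql
SELECT column_value FROM TABLE(five_numbers);
```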

The demo table and data are shown in the previous entry at Parallel By Range.
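The function below also relies on two supporting types from the earlier posts: a collection type `v_n_t` and a REF CURSOR type `demo_curs.v_demo_curs`. If you are starting fresh, definitions along these lines should work — they are reconstructed from how the function uses them, so treat them as a sketch rather than the exact originals. Note that PARTITION BY RANGE requires a strongly typed REF CURSOR.

```sql
-- A nested table of numbers; the pipelined function returns one of these.
CREATE OR REPLACE TYPE v_n_t AS TABLE OF NUMBER;
/

-- A strongly typed REF CURSOR matching the (col1, col2) SELECT
-- that the demo passes into the function.
CREATE OR REPLACE PACKAGE demo_curs AS
  TYPE v_demo_curs IS REF CURSOR RETURN demo%ROWTYPE;
END demo_curs;
/
```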

Let’s get right to the code, shall we…

CREATE OR REPLACE FUNCTION delete_demo ( p_curs demo_curs.v_demo_curs )
                  RETURN v_n_t
                  PARALLEL_ENABLE ( PARTITION p_curs BY RANGE(col1) )
                  ORDER p_curs BY ( col1 )
                  PIPELINED AS --> identifies this function as pipelined

    PRAGMA AUTONOMOUS_TRANSACTION;
    v_number NUMBER := 0;

    v_last_col1 NUMBER := NULL; -- last col1 value processed
    v_total NUMBER := 0; -- running total of col2 for each col1
    v_col1 NUMBER; -- col1 value queried
    v_col2 NUMBER; -- col2 value queried
    v_rc NUMBER := 0; -- save the rowcount

BEGIN
  --
  -- Important Points:
  --   1) The code does not change much from the non-pipelined version
  --   2) Except the RETURN call does not specify a value because values are sent back by the...
  --   3) ..PIPE ROW calls
  --   4) This still looks like SLOW BY SLOW processing
  --   5) I know and I promised ways to get past that!
  --   6) You mean parallel processing and this funky pipelining thing?
  --   7) Yes!
  --
  LOOP
    -- get a row from cursor
    FETCH p_curs INTO v_col1, v_col2;
    EXIT WHEN p_curs%NOTFOUND;

    -- initialize last col1 if this is first record from cursor
    IF v_last_col1 IS NULL THEN
      v_last_col1 := v_col1;
    END IF;

    -- if same col1 value then add to total
    IF v_last_col1 = v_col1 THEN
      v_total := v_total + v_col2;
    ELSE
      --
      -- not same col1 value so save log and
      -- re-initialize tracking values
      --
      INSERT INTO demo_log
      VALUES(v_last_col1,v_total);
      v_last_col1 := v_col1;
      v_total := v_col2;

    END IF; -- if same col1

    -- delete the record and return the rowcount
    DELETE demo
    WHERE col1 = v_col1;

    v_rc := SQL%ROWCOUNT;
    COMMIT;
    --
    -- return a row now with the PIPE ROW call
    -- no need to wait until all rows have been processed
    -- the calling program (SQL*Plus in this case) can start processing it
    -- now before the function finishes
    PIPE ROW(v_rc);

  END LOOP;

  -- don't forget to log the last total
  IF v_last_col1 IS NOT NULL THEN
    INSERT INTO demo_log
    VALUES(v_last_col1,v_total);
  END IF;

  COMMIT;

  -- note the return is still required but it does not specify a value
  -- because everything was returned by prior PIPE ROW calls
  RETURN;

END;
/

Executing the function does not change.

SQL> ALTER SESSION ENABLE PARALLEL QUERY;
Session altered.

SQL> SELECT COUNT(column_value)
  2    FROM TABLE(delete_demo(CURSOR(SELECT col1,
  3                                         col2
  4                                    FROM demo
  5                                  ORDER BY col2))) -- ordered by col2 to create randomness
  6 /
COUNT(COLUMN_VALUE)
-------------------
                100

SQL> ALTER SESSION DISABLE PARALLEL QUERY;
Session altered.

SQL> --
SQL> -- See what wound up in the demo log table
SQL> --
SQL> SELECT *
  2    FROM demo_log
  3  ORDER BY col1
  4 /

      COL1      TOTAL
---------- ----------
         1         10
         2        110
         3         90
         4        220
         5        250
         6        330
         7        490
         8        440
         9        810
        10        550
10 rows selected.

Those results are the same as from the non-pipelined version of the code at Parallel By Range.

Was the pipeline version faster? I’ll discuss ways to validate that in the next set of posts.

I’ve added a new post showing how to check what PQ server is being used to execute code. Check it out Here.

Thanks for reading.