KSCOPE Answer 1 of 2

At the recent ODTUG KSCOPE conference I used this script to demonstrate 3 concepts:

  1. Records fetched by a Polymorphic Table Function (PTF) can be removed from the output by specifying a replication factor of zero
  2. Values for added columns can be replicated across returned rows
  3. Great care is required to coordinate the replication of fetched rows and added columns

The output of the script was this:

YEAR CITY       COL1 COL2
---- ---------- ---- ----
2019 Seattle    B      10
2019 Seattle    C      20
2019 Seattle    D      30
2017 San Antonio

The PTF fetched three rows, replicated one of them three times (Seattle), removed one (Orlando), and replicated one (San Antonio) once, for a total of four output rows. But the values for the added columns (COL1 and COL2) were not replicated enough times, so the final row had NULL values for them.
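For readers who were not at the session, here is a minimal sketch of the kind of FETCH_ROWS logic that produces output like this. It is not the conference script: the replication factors and added-column values are hard-coded to mirror the rows above, and it assumes the DESCRIBE function returned a DESCRIBE_T with two new columns and row_replication set to TRUE (check the DBMS_TF documentation for your version for the exact replication API).

PROCEDURE fetch_rows IS
  v_rowset DBMS_TF.ROW_SET_T;
  v_rep    DBMS_TF.TAB_NATURALN_T;
  v_col1   DBMS_TF.TAB_VARCHAR2_T;
  v_col2   DBMS_TF.TAB_NUMBER_T;
BEGIN
  DBMS_TF.GET_ROW_SET(v_rowset);

  -- replication factors for the 3 fetched rows: 3 copies of
  -- Seattle, 0 copies of Orlando (removal) and 1 copy of
  -- San Antonio, for 4 output rows in total
  v_rep(1) := 3;
  v_rep(2) := 0;
  v_rep(3) := 1;
  DBMS_TF.ROW_REPLICATION(replication_factor => v_rep);

  -- values for the added columns; only 3 values are supplied
  -- but there are 4 output rows, so the last row (San Antonio)
  -- gets NULLs. This is the coordination problem.
  FOR i IN 1..3 LOOP
    v_col1(i) := CHR(65 + i);  -- 'B', 'C', 'D'
    v_col2(i) := 10 * i;       -- 10, 20, 30
  END LOOP;
  DBMS_TF.PUT_COL(1, v_col1);
  DBMS_TF.PUT_COL(2, v_col2);
END;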

A thoughtful attendee asked if the opposite (more added-column rows than fetched rows) could ever occur. The answer is no, but I did not have time to demonstrate it, so I'm doing that with this post.

I made the following change to populate 100 rows of the added columns no matter how many rows were fetched.

FOR counter IN 1..100 LOOP
  -- add a numeric value for NEW_COL1 in the row
  v_n_t.tab_number(counter) := 10 * counter;
END LOOP;

And then after the loop I added this:

DBMS_OUTPUT.PUT_LINE('Count = ' || v_n_t.tab_number.COUNT);

Now the output is this:

YEAR CITY       COL1 COL2
---- ---------- ---- ----
2019 Seattle    B      10
2019 Seattle    C      20
2019 Seattle    D      30
2017 San Antonio

Count = 100

The number of rows returned was not influenced by the extra values supplied for the added columns.

Thanks for reading!

Table Polymorphism and 1024

This post expands on TABLE POLYMORPHIC functionality by demonstrating how records are distributed across partitions. Two key points:

  1. The maximum number of rows per rowset is 1024
  2. Within the FETCH_ROWS procedure you can process the whole fetched data set at one time if desired.

First some demo data – 250 records each for COL1 values of 1 through 5.

CREATE TABLE table1
( col1 NUMBER );

BEGIN
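  -- 5 distinct COL1 values x 250 rows each = 1250 rows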
  FOR x IN 1..5 LOOP
    FOR y IN 1..250 LOOP
      INSERT INTO table1
      VALUES(x);
    END LOOP;
  END LOOP;
END;

Here’s what the data looks like grouped by COL1.

SELECT col1,
       COUNT(*)
  FROM table1
GROUP BY col1
ORDER BY col1;

COL1 COUNT(*)
---- --------
   1      250
   2      250
   3      250
   4      250
   5      250
5 rows selected.

And the package header.

CREATE OR REPLACE PACKAGE poly_ptf AS
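  -- the standard PTF interface: DESCRIBE is required;
  -- OPEN, FETCH_ROWS and CLOSE are optional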

  FUNCTION describe (p_tbl IN OUT DBMS_TF.TABLE_T )
    RETURN DBMS_TF.DESCRIBE_T;
  PROCEDURE open;
  PROCEDURE fetch_rows;
  PROCEDURE close;

END;

And the package body.

CREATE OR REPLACE PACKAGE BODY poly_ptf AS

  /*----------------------------------------------*/
  PROCEDURE display ( p VARCHAR2 ) IS
  /*----------------------------------------------*/
    v_x NUMBER;
  BEGIN
    DBMS_OUTPUT.PUT_LINE(p);
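    -- keep a call counter in the XStore: state that persists
    -- across the OPEN/FETCH_ROWS/CLOSE calls of one PTF execution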
    IF NOT DBMS_TF.XSTORE_EXISTS('X',
           DBMS_TF.XSTORE_TYPE_NUMBER) THEN
      DBMS_TF.XSTORE_SET('X',1);
    END IF;
    DBMS_TF.XSTORE_GET('X',v_x);
    DBMS_TF.XSTORE_SET('X',v_x + 1);
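    -- TRUE asks TRACE to append the execution id, which is what
    -- distinguishes the per-partition executions in the output below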
    DBMS_TF.TRACE(v_x,TRUE);
  END;

  /*----------------------------------------------*/
  FUNCTION describe (p_tbl IN OUT DBMS_TF.TABLE_T )
  /*----------------------------------------------*/
    RETURN DBMS_TF.DESCRIBE_T IS
  BEGIN
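    -- mark every column of the input table as readable so its
    -- values are available to FETCH_ROWS via GET_ROW_SET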
    FOR counter IN 1..p_tbl.column.count LOOP
      p_tbl.column(counter).for_read := TRUE;
    END LOOP;
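    -- returning NULL leaves the shape of the result unchanged
    -- (no columns added or removed)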
    RETURN NULL;
  END;

  /*----------------------------------------------*/
  PROCEDURE open IS
  /*----------------------------------------------*/
  BEGIN
    display('OPEN');
  END;

  /*----------------------------------------------*/
  PROCEDURE fetch_rows IS
  /*----------------------------------------------*/
    v_rowset DBMS_TF.ROW_SET_T;
    v_ntab DBMS_TF.TAB_NUMBER_T;
  BEGIN

    DBMS_TF.GET_ROW_SET(v_rowset);
    display('FETCH_ROWS');

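    -- TABLE1 has a single column, so the rowset has one entry;
    -- grab its collection of NUMBER values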
    v_ntab := v_rowset(1).tab_number;

    -- Here is where I process the whole fetched
    -- data set at one time by querying its
    -- associated values in tab_number
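    -- (TABLE() accepts v_ntab because DBMS_TF.TAB_NUMBER_T is an
    -- associative array declared in a package spec, which SQL
    -- inside PL/SQL can query in 12.1 and later)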
    FOR x IN ( SELECT column_value cv,
                      COUNT(*) c
                 FROM TABLE(v_ntab)
               GROUP BY column_value ) LOOP

      DBMS_OUTPUT.PUT_LINE('COL1=' || x.cv || ' : ' || x.c);

    END LOOP;

  END;

  /*----------------------------------------------*/
  PROCEDURE close IS
  /*----------------------------------------------*/
  BEGIN
    display('CLOSE');
  END;

END;

And the function.

CREATE OR REPLACE FUNCTION poly_func(p_tbl IN TABLE )
  RETURN TABLE PIPELINED
  TABLE POLYMORPHIC USING poly_ptf;

And the query (PARTITION BY and ORDER BY are allowed in the call because poly_func is a table-semantics polymorphic function):

SELECT *
FROM poly_func(table1 PARTITION BY col1 ORDER BY col1 );

And the results (I've removed the display of the 1250 returned rows to save space and added blank lines to show the separation between partitions in the DBMS_OUTPUT).

OPEN
1 <id=6804000000000000>
FETCH_ROWS
2 <id=6804000000000000>
COL1=1 : 250
CLOSE
3 <id=6804000000000000>

OPEN
1 <id=6904000000000000>
FETCH_ROWS
2 <id=6904000000000000>
COL1=2 : 250
CLOSE
3 <id=6904000000000000>

OPEN
1 <id=6A04000000000000>
FETCH_ROWS
2 <id=6A04000000000000>
COL1=3 : 250
CLOSE
3 <id=6A04000000000000>

OPEN
1 <id=6B04000000000000>
FETCH_ROWS
2 <id=6B04000000000000>
COL1=4 : 250
CLOSE
3 <id=6B04000000000000>

OPEN
1 <id=6C04000000000000>
FETCH_ROWS
2 <id=6C04000000000000>
COL1=5 : 24
FETCH_ROWS
3 <id=6C04000000000000>
COL1=5 : 226
CLOSE
4 <id=6C04000000000000>

The interesting thing is that the last group used two FETCH_ROWS calls. This is because the magic 1024-record limit was hit, as follows:

COL1 = 1 250 rows so total is 250
COL1 = 2 250 rows so total is 500
COL1 = 3 250 rows so total is 750
COL1 = 4 250 rows so total is 1000
COL1 = 5  24 rows so total is 1024

Then the last FETCH_ROWS call handled the remainder:

COL1 = 5 226 rows so total is 1250
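
If you want to see the per-call rowset size directly rather than inferring it from the grouped counts, one option (a sketch, using the GET_ROW_SET overload that also returns row and column counts) is:

PROCEDURE fetch_rows IS
  v_rowset   DBMS_TF.ROW_SET_T;
  v_rowcount PLS_INTEGER;
  v_colcount PLS_INTEGER;
BEGIN
  -- this overload also reports how many rows and columns were fetched
  DBMS_TF.GET_ROW_SET(v_rowset, v_rowcount, v_colcount);
  DBMS_OUTPUT.PUT_LINE('Rows in this rowset = ' || v_rowcount);
END;

With the data above it should report 250 for each of the first four partitions, then 24 and 226 for the two calls in the last one.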

Thus I have to break my statement about 1024 into two parts:

  1. For ROW POLYMORPHISM it is the maximum number of rows in a rowset
  2. For TABLE POLYMORPHISM it is the cumulative row count at which another FETCH_ROWS call occurs

I can't help but think there is more to the story. I'll continue to dig in future posts.

Thanks for reading!