The 2020 Project

Way back in the previous decade, I asked some PL/SQL developers how they might use Polymorphic Table Functions (PTFs) and JSON Dataguides. Almost all of the replies involved translating JSON keys and values into columns and rows on the fly. So, with that and a brand new decade in mind, I’m embarking on a new series called the 2020 Project.

In a nutshell, I’ll be building a PTF to go from this JSON data stored in an Oracle database…

SELECT *
  FROM json_stuff;

JSON_ID    JSON_DATA
------- ---------------------------------
      1 {"KEY1":"VALUE1"}
      2 {"KEY2":"VALUE2","KEY3":"VALUE3"}

…to displaying the keys and values as relational columns and rows, like this:

SELECT key1,
       key2,
       key3
  FROM add_dataguide_func(json_stuff);

KEY1   KEY2   KEY3
------ ------ ------
VALUE1
       VALUE2 VALUE3
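For anyone new to Dataguides: from 12.2 onward, Oracle can summarize the structure of JSON documents with the JSON_DATAGUIDE aggregate function. Here is a minimal sketch against the table above (output reformatted for readability; note the dataguide rounds lengths up to powers of two):

SELECT JSON_DATAGUIDE(json_data) dg
  FROM json_stuff;

DG
--------------------------------------------------
[{"o:path":"$.KEY1","type":"string","o:length":8},
 {"o:path":"$.KEY2","type":"string","o:length":8},
 {"o:path":"$.KEY3","type":"string","o:length":8}]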

Over the next few months I’ll assemble and explain the code to perform the translation, posting it on the Oracle Live SQL site so you can follow along. The introductory code snippets are available here.

NB1: If you just need to get translations done quickly, then have a look at Oracle’s Dataguide indexes and the views created by the DBMS_JSON package.
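For example, here is a sketch of that approach, assuming JSON_DATA carries an IS JSON constraint so that a dataguide-enabled search index can be created (the index name is mine):

CREATE SEARCH INDEX json_stuff_ix ON json_stuff (json_data)
  FOR JSON PARAMETERS ('DATAGUIDE ON');

BEGIN
  -- project the keys discovered by the dataguide as virtual columns
  DBMS_JSON.ADD_VIRTUAL_COLUMNS( 'JSON_STUFF', 'JSON_DATA' );
END;
/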

NB2: If you are looking for an in-depth tour of PTFs and JSON Dataguides, then I think you’ll enjoy this series.

Thanks for reading!

KSCOPE Answer 2 of 2 – Part 3

This post concludes the series on imitating a CONNECT BY SQL query with a PTF. The PTF calculates the CONNECT BY values (dates in this case) and outputs them as a TRACE column.
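For reference, the flavor of query being imitated might look something like this sketch built on SYS_CONNECT_BY_PATH; this is my approximation against the test data further down, not the original KSCOPE query.

SELECT partno,
       SYS_CONNECT_BY_PATH(sdate, '-') trace
  FROM prices
 START WITH sdate = DATE '2019-08-24'
CONNECT BY PRIOR partno = partno
       AND PRIOR edate  = sdate;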

First I add the TRACE column in the DESCRIBE function.

  v_new_col DBMS_TF.COLUMN_METADATA_T;
  v_new_cols DBMS_TF.COLUMNS_NEW_T;
...
  v_new_col := DBMS_TF.COLUMN_METADATA_T( 
         type    => DBMS_TF.TYPE_VARCHAR2,
         name    => 'TRACE',
         max_len => 100 );
  v_new_cols := DBMS_TF.COLUMNS_NEW_T( 1 => v_new_col);
...
  RETURN DBMS_TF.DESCRIBE_T ( new_columns => v_new_cols );
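For context, here is a sketch of how those fragments might sit in a complete DESCRIBE function; the signature follows the standard DBMS_TF pattern, and the for_read loop comes from Part 2 of this series.

FUNCTION describe ( p_tbl IN OUT DBMS_TF.TABLE_T )
  RETURN DBMS_TF.DESCRIBE_T IS

  v_new_col  DBMS_TF.COLUMN_METADATA_T;
  v_new_cols DBMS_TF.COLUMNS_NEW_T;

BEGIN

  -- mark every input column for reading (see Part 2)
  FOR counter IN 1..p_tbl.column.count LOOP
    p_tbl.column(counter).for_read := TRUE;
  END LOOP;

  -- define the new TRACE output column
  v_new_col := DBMS_TF.COLUMN_METADATA_T(
         type    => DBMS_TF.TYPE_VARCHAR2,
         name    => 'TRACE',
         max_len => 100 );
  v_new_cols := DBMS_TF.COLUMNS_NEW_T( 1 => v_new_col );

  RETURN DBMS_TF.DESCRIBE_T( new_columns => v_new_cols );

END describe;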

Then in the FETCH_ROWS procedure I declare supporting variables for the TRACE column…

  v_dt DATE;
  v_trace VARCHAR2(500);
  v_trace_cols DBMS_TF.TAB_VARCHAR2_T;

…and populate it.

FOR row_counter IN 1..v_rowcount LOOP

  -- get the date value
  v_dt := v_rowset(col_counter).tab_date(row_counter);
  v_trace := v_trace || v_dt || '-';

  v_trace_cols(row_counter) := v_trace;

END LOOP; -- every row

Finally, the collection is written to the new TRACE column (column ID 1, the first and only new column defined in DESCRIBE).

DBMS_TF.PUT_COL( columnid => 1,
                 collection => v_trace_cols );
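Putting this post and Part 2 together, a sketch of the whole FETCH_ROWS procedure might look like this. The SDATE-only filter is my reading of the output below, and for the trace to carry across partitions, as the results suggest it does, v_trace would need to live at package level rather than being declared locally.

PROCEDURE fetch_rows IS

  v_rowset     DBMS_TF.ROW_SET_T;
  v_rowcount   PLS_INTEGER;
  v_colcount   PLS_INTEGER;
  v_dt         DATE;
  v_trace      VARCHAR2(500);  -- likely package level in practice (see above)
  v_trace_cols DBMS_TF.TAB_VARCHAR2_T;

BEGIN

  -- the explicit fetch from Part 2
  DBMS_TF.GET_ROW_SET( rowset    => v_rowset,
                       row_count => v_rowcount,
                       col_count => v_colcount );

  -- for every fetched column...
  FOR col_counter IN 1..v_colcount LOOP

    -- ...that is the SDATE column...
    IF v_rowset(col_counter).description.name = '"SDATE"' THEN

      -- ...append each date to the running trace
      FOR row_counter IN 1..v_rowcount LOOP
        v_dt := v_rowset(col_counter).tab_date(row_counter);
        v_trace := v_trace || v_dt || '-';
        v_trace_cols(row_counter) := v_trace;
      END LOOP;

    END IF;

  END LOOP;

  -- write the accumulated values to the new TRACE column
  DBMS_TF.PUT_COL( columnid   => 1,
                   collection => v_trace_cols );

END fetch_rows;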

Here is the simple test data set I use.

SELECT partno,
       sdate,
       edate
  FROM prices;

PARTNO SDATE     EDATE
------ --------- ---------
     1 24-AUG-19 27-AUG-19
     1 27-AUG-19 30-AUG-19
     2 24-AUG-19 27-AUG-19
     2 27-AUG-19 30-AUG-19
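Because the call below uses PARTITION BY and ORDER BY, the PTF has to be declared with table semantics. Here is a sketch of the declaration, with ptf_pkg standing in for whatever package holds the DESCRIBE and FETCH_ROWS implementations:

CREATE OR REPLACE FUNCTION ptf_func ( p_tbl TABLE )
  RETURN TABLE
  PIPELINED TABLE POLYMORPHIC USING ptf_pkg;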

And here is the result with the PTF.

SELECT partno,
       trace
  FROM ptf_func( prices
                 PARTITION BY partno
                 ORDER     BY sdate );

PARTNO TRACE
------ ----------------------------------------
     1 24-AUG-19-
     1 24-AUG-19-27-AUG-19-
     2 24-AUG-19-27-AUG-19-24-AUG-19-
     2 24-AUG-19-27-AUG-19-24-AUG-19-27-AUG-19-

That seems pretty straightforward, but only because I make some assumptions:

  • The date ranges are already exclusive in the table
      • I don’t believe this code will handle overlaps
  • There are fewer than 1024 rows in the table
      • See other posts in this PTF series about the 1024 rows feature

Thanks for reading.

KSCOPE Answer 2 of 2 – Part 2

The next step toward performing CONNECT BY queries in a Polymorphic Table Function is to add the explicit fetch. First, add this to the DESCRIBE function to ensure all columns are fetched.

FOR counter IN 1..p_tbl.column.count LOOP
  p_tbl.column(counter).for_read := TRUE;
END LOOP;

Then add an explicit FETCH to the FETCH_ROWS procedure.

DBMS_TF.GET_ROW_SET( rowset => v_rowset,
                     row_count => v_rowcount,
                     col_count => v_colcount);
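This call, and the stub that follows, assume local declarations along these lines (a sketch using DBMS_TF types):

  v_rowset   DBMS_TF.ROW_SET_T;
  v_rowcount PLS_INTEGER;
  v_colcount PLS_INTEGER;
  v_dt       DATE;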

And add a code stub to interrogate the rows as they are fetched.

-- for every fetched column...
FOR col_counter IN 1..v_colcount LOOP

  -- if the column is SDATE or EDATE then...
  IF v_rowset(col_counter).description.name IN ('"EDATE"','"SDATE"') THEN

    -- for every row in the row set...
    FOR row_counter IN 1..v_rowcount LOOP

      -- get the date value
      v_dt := v_rowset(col_counter).tab_date(row_counter);

    END LOOP; -- every row

  END IF; -- column is SDATE or EDATE

END LOOP; -- every column

I’m wary of how well this whole effort will perform, so I’m going to run another performance test similar to the one in the previous post. Here are the results.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.07       0.07          0        201          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch    20004      0.77       2.92          0      99786          0      300000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    20010      0.85       3.00          0      99987          0      300000

Here are some interesting points when comparing this run to the previous test, which did not use the explicit fetch:

  • CPU went down from 2.33 to 0.77
  • Elapsed went up from 2.81 to 3.00
  • Query went up slightly, from 200-0-99785 to 201-0-99786

What caused the reduction in CPU?

I’ll dig into that in the next few articles in this series.