Wednesday, July 12, 2017

Profiling a Java + JDBC Application


First, there's NO Java coding or Java source code needed to profile a Java program this way.  NetBeans added this a while back, but I only found it recently: the ability to attach to any Java program and profile the SQL going across JDBC. The dev team's blog on it is here:


SQLcl is our Java library for scripting SQL and SQL scripts that has been in SQLDev since day 0 back in '05/06.  We factored that out and wrapped a command line around it.  This makes it easier to test for features, regressions, performance, and more, as well as giving a new command line with extended features.  This library is also what performs the Grid Infrastructure installs these days, and it's embedded in Oracle REST Data Services. It's quite proven and tested. This is all Java based, using plain JDBC to talk to the database.  It's no different from any Java based application, which means anything done to profile it is applicable to any Java program: say SQLDev, ORDS, custom JDBC, any Java program.


This new feature in NetBeans is very simple to use and there's no need to have the sources of the Java code.  From the Profile menu -> Attach to External Process

Then set the Profile to SQL Queries

Click Attach, which shows a list of running java processes.  This is what SQLcl will look like.

Running the Program

Now once JDBC traffic starts being issued, it's captured with timings and occurrence counts for each statement, along with the Java stack for where the call originated. Next up is the hardest part: what does all this data mean? When is fast fast enough?

What to change?

Below is what an APEX install looks like on my laptop in the middle of the process.  There's a lot of data to look at. The slowest statement is the dbms registry validation. Is that bad, and can it be sped up? Probably not. The most called is the check for DBMS_OUTPUT. Can that be reduced? Also probably not.

This is when knowledge of the code and intended actions is critical.  For me, getting the SQLcl APEX install from 19m down to 7m was fast enough. That was done with zero changes to the APEX install scripts, just from watching the traffic going to the database and analyzing it.

Change #1: SQLcl was name resolving every create or replace <PLSQL OBJECT> then checking for errors on that object.  Much faster is to simply check count(1) from user_errors without the name resolution.  When there are no errors in user_errors, there's no need to name resolve, so that entire path was shortened.  It's visible in this stack with the 1,106 times "select count(1) cnt from user_errors" was called.
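Here's a minimal sketch of that shortened path. The names are hypothetical, not SQLcl's actual code; the point is that the cheap count query gates the expensive name-resolution work.

```java
import java.util.function.IntSupplier;

// Sketch of the shortened error-check path (hypothetical names;
// the real SQLcl implementation differs).
public class ErrorCheck {
    // The inexpensive query from the profile above.
    static final String COUNT_SQL = "select count(1) cnt from user_errors";

    // Only fall back to the expensive name-resolution path when
    // user_errors actually has rows.
    static boolean needsNameResolution(IntSupplier userErrorCount) {
        return userErrorCount.getAsInt() > 0;
    }

    public static void main(String[] args) {
        // Stand-in for a JDBC call that executes COUNT_SQL.
        System.out.println(needsNameResolution(() -> 0)); // clean schema
        System.out.println(needsNameResolution(() -> 3)); // errors present
    }
}
```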

Change #2: DBMS_OUTPUT was being checked after any/all commands to the database. That was reduced to only calls that could produce some output. For example, alter session doesn't need to be checked. That change reduced the number of db calls being issued at all. The fastest call is the one you don't make.
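The idea can be sketched as a simple filter: only statements that could produce DBMS_OUTPUT get the follow-up fetch. This is a hypothetical helper, not SQLcl's actual code.

```java
public class OutputCheck {
    // Hypothetical filter: only statement types that can write to
    // DBMS_OUTPUT warrant the follow-up round trip to fetch it.
    static boolean mayProduceOutput(String sql) {
        String s = sql.trim().toLowerCase();
        // PL/SQL blocks and calls can emit output; alter session and
        // plain DDL cannot, so skip the extra db call for those.
        return s.startsWith("begin") || s.startsWith("declare") || s.startsWith("call ");
    }

    public static void main(String[] args) {
        System.out.println(mayProduceOutput("begin dbms_output.put_line('x'); end;")); // checked
        System.out.println(mayProduceOutput("alter session set sql_trace = true"));    // skipped
    }
}
```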

and on and on.

Nothing is more important than the knowledge of the intended outcome.

Tuesday, July 11, 2017

SQLcl 17.2

New Versioning Scheme

Starting with this release the numbering scheme has changed.  All releases will now be YEAR<period>Quarter<period>build numbers.

So the new SQLcl is  

Breaking that down. 
  • 17   - Year
  • 2    - Quarter
  • 0    - Patch number
  • 184  - Julian day of the year
  • 0917 - Hour and minute the build was done

New Features

Securing literals was introduced here: so this is not new.  What is new is control over when it's done.  Previously, SQLcl secured all literals for anything that was issued. Now there's a control for when/how deep to check.

The default is that any anonymous block less than 10 lines will be scrubbed automatically.  This will catch the majority of uses.  To ratchet up what is checked, "set secureliterals ON" will secure every block completely.  There is a performance impact to this if there are very large blocks, such as in the APEX installation, which has some blocks over 1k in size.

The opposite is also there to disable this feature: set secureliterals OFF

Here's an example of what happens. The 'abcxyz' is removed and turned into a bind :SqlDevBind1ZInit1

SQL> declare
  2    l_local varchar2(20);
  3  begin
  4    l_local := 'abcxyz';
  5    dbms_output.put_line(l_local || chr(10));
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select sql_text from v$sql where sql_text like '%abcxyz%';
SqlDevBind1Z_1 VARCHAR2(32767):=:SqlDevBind1ZInit1;  
       l_local varchar2(20); 
       l_local := 'abcxyz';   
       dbms_output.put_line(l_local || chr(TO_NUMBER( SqlDevBind1Z_1))); 

New Performance

So I spent the better part of 2 weeks in the NetBeans profiler and the outcome was well worth the time.  ALL these numbers are from my laptop so mileage will vary.  APEX is probably one of the largest / most complicated sets of SQL / PLSQL scripts to install into a database, so I used that as my baseline.  The SQLcl version I started from took 19m27.352s to install APEX.  For comparison, I ran the same install with SQL*Plus, which took almost 10 full minutes less at 9m59.789s.  SOOOO clearly there was an issue here.

The key thing is knowing WHAT your application should be doing and how it should be doing it.  There were a number of things that SQLcl was being overly aggressive about, such as securing literals, which was introduced here. Then there were calls that were repetitive and could simply be removed.  The next boost came from being more lax on DBMS_OUTPUT.  SQLcl was checking for things like DDL that clearly can't have output, so there's no need to check.

The end result, with secure literals turned off, is that the same install now takes 7m17.635s on my machine.

Thursday, June 29, 2017

Parameterizing JMeter for testing APEX

A while ago we needed to stress a system by using the APEX Brookstrut demo application.  The obvious choice for this was JMeter.  How to set up JMeter to record web traffic by becoming a web proxy is a well-known and well-documented process.  Anyone that hasn't seen it, check this PDF and you can see how easy it is. There were a couple of issues to get around. First, importing the application again and again may get a different Application ID with each import. Next, the hostname and port may change depending on the environment. Then there's using the same session ID in APEX to avoid generating a new session more than needed.

That led to this setup.


The first 3 parts are for tracking the cookies and any HTTP headers, but most important is the User Defined section. In here, I defined variables for the webserver (aka ORDS) hostname/port and protocol, as well as the APEX application ID and the Home Page for the test.

The next step down is to define the HTTP Request Defaults. This is where the user variables start to come into play. The servername/port/protocol all reference the variables from above.

Initial Request

The next part of the test is to hit the home page of the application with a bogus session ID. This kicks APEX into creating a real session ID.

Now to get that session ID for later I put in the JSR 223 Post Processor seen in the tree right below the test.

The actual javascript for the test is in this Gist:

The javascript is extracting the p_instance, p_flow_id,... which you see at the bottom of the script.  These are then placed into the dictionary for all subsequent requests to reference.
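A rough Java equivalent of that extraction, assuming the session comes back as a hidden p_instance input in the page (a simplified shape; the real Gist works against the live APEX response):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SessionExtract {
    // Pull the APEX session ID out of a page response. The HTML shape
    // here is an assumption for illustration, not the exact APEX markup.
    static String extractInstance(String html) {
        Matcher m = Pattern.compile("name=\"p_instance\" value=\"(\\d+)\"").matcher(html);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String html = "<input type=\"hidden\" name=\"p_instance\" value=\"1234567890\"/>";
        // In JMeter, the JSR 223 post processor would then do the
        // equivalent of vars.put("p_instance", ...) for later requests.
        System.out.println(extractInstance(html));
    }
}
```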

Now the only task left is to go to the recorded test and replace all the recorded parameters, which are hardcoded to a specific flowid, sessionid, etc., with the variables from the dictionary.  For example, this shows the ${p_flow_id} and ${p_instance}

Now there's a recorded parameterized test that can be changed to point to any installation of an application quite easily.

Thursday, June 01, 2017

Oracle REST Data Services and Docker


1) check out
2) Download ORDS , SQLcl; optionally APEX
3) Build w/DB connection details
docker build -t krisrice/ords:3.0.10  --build-arg DBHOST= --build-arg DBSERVICE=orcl --build-arg DBPORT=1521 --build-arg DBPASSWD=oracle  .
4) Run the image
docker run -d -p 8888:8888 -p 8443:8443 --name=ords krisrice/ords:3.0.10
5) Access https://localhost:8443/ords/apex

GitHub Project

I started a spot in GitHub for me to adjust a docker build as I need it.  There will be an official Oracle one published alongside all the existing ones that are out on

This is where I'll be learning what ORDS needs out of a docker image.

The Build Structure

This Dockerfile uses docker build arguments for passing in any information needed, such as the db connect details to install ORDS and APEX. These are passed one by one with --build-arg X=Y. The initial ones include:
DBHOST    : IP Address of the database host
DBSERVICE : DB Service name to connect
DBPORT    : DB Port to connect
DBPASSWD  : SYS password
PORT  : HTTP Port for ORDS (Default: 8888)
SPORT : HTTPS Port for ORDS (Default: 8443)
APEXI : path to the apex images folder INSIDE the docker image
These will expand to allow things like custom SSL cert loading, migrating apex REST -> ords REST, conn pool sizes, and more. So best to check the README in the project as I'll keep that up to date.

The build process itself is a fairly easy to follow bash script. There are a couple of notable things in the build:

  1. It randomizes the passwords for the ORDS DB accounts used. It alters the db users, then burns the passwords into the ORDS configuration.
  2. It creates a docroot for serving up any files located at /opt/oracle/ords/doc_root

Optional Software

IF APEX is downloaded, it will install/upgrade APEX in the target database. This installs APEX, sets up the APEX REST config, and points /i/ to the /opt/oracle/apex/images folder so it's ready to go.

IF SQLcl is downloaded, it is installed into the image.

Running the Image

There are a couple of gotchas I've found running the image.

  1. Could be my setup, but the EXPOSE option in the Dockerfile doesn't work for me to get ORDS on localhost:8888.  This means the -p flag is needed to map the port inside the docker image to the host machine.
  2. To run the image in the background simply give a -d 
  3. To get the logs of an image running in the background, docker has a nice docker logs <container name>. The trouble is the container name is randomly generated unless --name=<NAME> is used.
This is how I start the image.

docker run -d -p 8888:8888 -p 8443:8443 --name=ords krisrice/ords:3.0.10

The build

IF APEX is part of the build and is installing/upgrading, it will take longer than this.  Here's about 1 minute from nothing to up and running with ORDS and APEX in this docker image.

Thursday, April 13, 2017

ORDS Standalone and URI Rewrites

My last post, How to add an NCSA style Access Log to ORDS Standalone, explained what the ORDS standalone is and that it is based on Eclipse Jetty.  Jetty offers far more than ORDS exposes in its standalone.  There's a long list of all the features and configuration options in the documentation.

A recent question came up about doing URL rewrites.  Jetty offers this as well.  To take advantage of it, the same jetty-http.xml file from my last post just needs a few more lines of XML added.

This example will be just a simple one that rewrites /catalog to /ords/klrice/metadata-catalog/.  A better usage would be to have / redirect into an APEX application or some home page of the application.

The full list of options is in the Jetty documentation.

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "">
<Configure id="Server" class="org.eclipse.jetty.server.Server">

<!-- REWRITE -->
  <Get id="oldhandler" name="handler"/>
  <Set name="handler">
    <New id="Rewrite" class="org.eclipse.jetty.rewrite.handler.RewriteHandler">
      <Set name="handler"><Ref refid="oldhandler"/></Set>
      <Set name="rewriteRequestURI"><Property name="rewrite.rewriteRequestURI" default="true"/></Set>
      <Set name="rewritePathInfo"><Property name="rewrite.rewritePathInfo" default="false"/></Set>
      <Set name="originalPathAttribute"><Property name="rewrite.originalPathAttribute" default="requestedPath"/></Set>
    </New>
  </Set>
  <Ref refid="Rewrite">
<!-- REWRITE of /catalog ---- /ords/klrice/metadata-catalog/ -->
    <Call name="addRule">
      <Arg>
        <New class="org.eclipse.jetty.rewrite.handler.RewritePatternRule">
          <Set name="pattern">/catalog</Set>
          <Set name="replacement">/ords/klrice/metadata-catalog/</Set>
        </New>
      </Arg>
    </Call>
  </Ref>
  <Ref id="Handlers">
    <Call name="addHandler">
      <Arg>
        <New id="RequestLog" class="org.eclipse.jetty.server.handler.RequestLogHandler">
          <Set name="requestLog">
            <New id="RequestLogImpl" class="org.eclipse.jetty.server.NCSARequestLog">
              <Set name="filename"><Property name="jetty.logs" default="/tmp/"/>ords-access-yyyy_mm_dd.log</Set>
              <Set name="filenameDateFormat">yyyy_MM_dd</Set>
              <Set name="retainDays">90</Set>
              <Set name="append">true</Set>
              <Set name="extended">false</Set>
              <Set name="logCookies">false</Set>
              <Set name="LogTimeZone">GMT</Set>
            </New>
          </Set>
        </New>
      </Arg>
    </Call>
  </Ref>
</Configure>

The result is as expected.  This could be used for anything from shorter REST endpoints, like this catalog example, to nicer entry points into APEX applications.

The example for an APEX URL would be:

<!-- REWRITE of /awesomeapp ---- /ords/f?p=105:2:::::: -->

<Call name="addRule">
  <Arg>
    <New class="org.eclipse.jetty.rewrite.handler.RewritePatternRule">
      <Set name="pattern">/awesomeapp</Set>
      <Set name="replacement">/ords/f?p=105:2::::::/</Set>
    </New>
  </Arg>
</Call>
