Thursday, July 30, 2015

PL/SQL: The Good Parts

Somewhat inspired by Douglas Crockford's JavaScript: The Good Parts presentation (and book), I have created a presentation called PL/SQL: The Good Parts.

This is not intended as an introduction to PL/SQL, but rather an overview of features you should consider if you want to maximize your enjoyment of programming in the Oracle Database. Also, this is not an exhaustive list of all features; it's simply the ones I personally use frequently.



Note that unlike JavaScript, I don't consider PL/SQL to have too many "bad parts" or bizarre quirks. There's a lot of good stuff, hence the title! :-)

Click here to view the presentation.



Thursday, July 23, 2015

ORA-22926 when using getClobVal to convert XMLType to CLOB

I ran into a problem the other day when moving some code from one database to another (both XE 11g). The code in question needs to convert an XMLType to a CLOB to do some (hacky) string manipulation on it, and then turn it back into an XMLType.

Here's the original code:

  l_xml := apex_web_service.make_request(...);
 
  -- little hack to remove bad empty namespace from result
  l_clob := l_xml.getClobVal();
  l_clob := replace (l_clob, '<mytag xmlns="">', '<mytag>');
  -- and then strip out bogus namespace to make parsing easier...
  l_clob := replace (l_clob, ' xmlns="http://tempuri.org/"', '');
 
  l_xml := xmltype (l_clob);


The above had been working before, but now failed with "ORA-22926: specified trim length is greater than current LOB value's length".

Googling turned up a forum post which pointed out that getClobVal is deprecated, and suggested using XMLSerialize instead.

XMLSerialize can only be used in a SQL statement, not as a PL/SQL expression, so I had to rewrite the conversion as follows:

select xmlserialize(document l_xml as clob)
into l_clob
from dual;
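
For context, here is roughly how the full workaround looks after the rewrite (a sketch based on the original snippet above; the make_request arguments are omitted as before):

  l_xml := apex_web_service.make_request(...);

  -- convert to CLOB via SQL, since xmlserialize cannot be used as a PL/SQL expression
  select xmlserialize(document l_xml as clob)
  into l_clob
  from dual;

  -- same string hacks as before
  l_clob := replace (l_clob, '<mytag xmlns="">', '<mytag>');
  l_clob := replace (l_clob, ' xmlns="http://tempuri.org/"', '');

  l_xml := xmltype (l_clob);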


And the error went away... (It seems some people have had problems even with the xmlserialize function, but that's quite an old post, so that particular bug could have been fixed a long time ago.)

Thursday, July 16, 2015

Installing Oracle XE, ORDS and Apex on CentOS - Part Four: Stress testing

This is part four in a series of blog posts about how to install Oracle 11g Express Edition (XE) with Oracle Application Express (Apex) on a CentOS Linux server, with Apex served by Oracle REST Data Services (ORDS) running on top of Tomcat and Apache.

After setting up a small CentOS server with Oracle Express Edition (XE), I wanted to stress test it to see how much load such a barebones installation can handle. In other words, how many users can you serve using this minimal, license-free setup?

The test application

I set up a test application in Apex with a single page that has both some static and dynamic (PL/SQL) regions, as well as a query and some processes. I set the page authentication to "No authentication", as the various testing tools need to be able to access the page without logging in (otherwise we would have to create a more complex test script that handles the login).


Just checking the page in a web browser, we can see that it is delivered fairly quickly from the server, in less than 250ms, and that static resources are cached. (Not all automated load testers respect cache settings, though. More about that later.)



First tests - BlazeMeter.com


For testing, I started with BlazeMeter.com, as they have a free plan that can stress test a site with up to 50 concurrent users. This is probably more concurrent users than most business applications built for small and medium companies will ever have or need, so the free test works well for our purposes. (This obviously depends on your definition of a small or medium company, but if you have that many users or employees, then why are you using Oracle XE? :-)

Setting up a stress test using BlazeMeter is easy, although the user interface is a bit cluttered with advanced options that can all be ignored for simple tests. Because we only want to load test a single page, we create a so-called "URL Test", give it a name and specify the URL. Then select an appropriate location to run the test from, the number of users (50 is the maximum for the free account) and the duration (the default is 20 minutes, which may be a bit long for simple tests; 5 or 10 minutes is enough for a quick test).






Running the first test

Having set up the test page in Apex and the test itself in BlazeMeter, I started the test and watched it... crash and burn! :-( This is what the test results looked like:



As you can see from the chart above, more than a third of the requests resulted in errors returned from the web server, and the average response time is almost 10 seconds! Not good! I checked the Tomcat logs (at /usr/share/tomcat7/latest/logs if you have followed the setup in this series of blog posts) and found lots of this message:

java.sql.SQLException: Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException: All connections in the Universal Connection Pool are in use

Turns out the default ORDS connection pool size is too small. I found this article which gives some advice. I changed the following in the ORDS configuration (/u01/ords/config/ords/conf/apex.xml if you have used the config location suggested in my previous blog posts) and restarted Tomcat:
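
Based on the jdbc.MaxLimit value of 60 mentioned later in this post, the fix amounts to raising the connection pool entries in apex.xml to something like this (the exact values are a judgment call):

  <entry key="jdbc.InitialLimit">10</entry>
  <entry key="jdbc.MaxLimit">60</entry>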



Re-running the first test


Re-running the test gave the following result:



Much better! No errors, and the response time is pretty good for this minimal server. Remember, it only has 1 CPU and 1GB of memory, running Oracle XE, Tomcat and Apache. And yet it quite happily serves 50 concurrent users (an average of 3.5 requests per second) without any problems.

Scaling up


One of the nice things about DigitalOcean (and other cloud server providers) is that it is easy to scale the server up (and down) as needed. I decided to re-run the same test using a server with 2 CPUs and 4 GB of memory (although Oracle XE is limited to 1 CPU and 1 GB of memory, the extra capacity should free up resources for the web server and other OS processes).



The test results on this somewhat bigger server show a flatter response time curve, compared to the 1GB server, which had a couple of random peaks. The average response time is less than 700ms, compared to 950ms for the smaller server. We are still limited to 50 users (an average of 3.5 requests per second), since this is the maximum that BlazeMeter's free plan allows.



More testing - LoadImpact.com


I did further tests with LoadImpact.com, because they have a free plan with up to 100 concurrent users (or VUs, "Virtual Users", in their terminology). This is the test result with 100 concurrent users, run against the minimal server, which had been scaled back down to just 1 CPU and 1 GB of memory.



As the chart indicates, the response time remains more or less flat regardless of the number of concurrent users.

Never mind that the chart indicates that each page view took several seconds to complete. The FAQ page of LoadImpact states that "simulated clients in a test will never cache anything (except for cookies). This means that in a test, every client that loads a page from your site will behave like a new visitor to the site and thus be quite 'heavy' on the server." This means that actual page load times (when client caching is enabled) will be significantly better than the tests indicate, since static files don't have to be downloaded every time.

In other words, the flat response time curve means that the application scales very well, and could probably support many more users. Quite impressive, really, for a setup that costs just USD 10 per month! :-)

Even more tests - loader.io


Finally, I tested with loader.io, who offer a free plan with up to 10,000 (!) clients in 1-minute tests. Running a number of different tests (and also bumping up the jdbc.MaxLimit setting from 60 to 100), I found that the "breaking point" for the server when using my test page was around 12-15 requests per second, which still gave sub-second response times. If pushed any further, the response times would quickly climb to several seconds.



I also ran some tests against a very barebones Apex page that shows nothing but a static HTML region.



When stress testing against this page, the server could handle up to around 25 requests per second while still maintaining sub-second response times. So obviously scalability depends on what you put on your pages.



Conclusions

Real-world performance will depend on a lot of factors. The test page I used was fairly typical for a business application in that it has a report, some dynamic PL/SQL content, and also does an insert into a table (so it's not just read-only). In a typical application, there will be pages that are more complex than this, and pages that are simpler than this, so it should average out.

So let's assume that the server can handle 15 requests per second with acceptable response times. What does that translate to in real-world terms? Here are some quick calculations (I'm assuming most Apex business apps will be used during an 8-hour working day):

  • 450 users doing 1,000 page views each in an 8-hour period per day (450*1000/8/60/60 = 15 requests per second)
  • 1,500 users doing 250 page views each in an 8-hour period per day (1500*250/8/60/60 = 13 requests per second)
  • 10,000 users doing 45 page views each in an 8-hour period per day (10000*45/8/60/60 = 15 requests per second)
  • 25,000 users doing 50 page views each in a 24-hour period per day (25000*50/24/60/60 = 14 requests per second)

Now remember, the above is based on a single server with 1 CPU and 1GB of RAM, running Oracle Express Edition (XE), Apache and Tomcat, and costing USD 10 per month! If this is not good value for money, I don't know what is...



Monday, July 13, 2015

Installing Oracle XE, ORDS and Apex on CentOS - Part Three: Additional configuration

This is part three in a series of blog posts about how to install Oracle 11g Express Edition (XE) with Oracle Application Express (Apex) on a CentOS Linux server, with Apex served by Oracle REST Data Services (ORDS) running on top of Tomcat and Apache.

By now, you should have a working installation of CentOS with Oracle XE, Apex, ORDS, Tomcat and Apache. There is still some additional configuration and optimization we should do, which we'll take a look at in this third installment of the blog post series.

This part is a "grab bag" of various tips. None of these steps are really required for a working installation, but most of them are strongly recommended for better security and performance.

Disabling root login

So far we have been using the root user to login to the server and perform administrative tasks. For various reasons it is better to disable logins for the root user and instead create an alternative user that can elevate its privileges to root level only when necessary.

This is explained in detail in this article.

Cleaning up Oracle XE

Let's clean up and disable/remove stuff we don't need from Oracle XE:
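
For example, something along these lines (run as SYS; adjust the list to what you actually use):

  -- drop the HR demo schema if you don't need it
  drop user hr cascade;
  -- disable the XDB HTTP and FTP listeners; Apex is served through ORDS instead
  exec dbms_xdb.sethttpport(0);
  exec dbms_xdb.setftpport(0);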


Optimizing Oracle XE

Let's change some default session, process and memory settings in Oracle XE:
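
For example (run as SYS, then restart the database for the spfile changes to take effect; treat the values as a starting point, not gospel):

  alter system set processes = 200 scope = spfile;
  alter system set sessions = 300 scope = spfile;
  -- XE is capped at 1GB of memory, so make sure it is allowed to use all of it
  alter system set memory_target = 1G scope = spfile;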


Setting up Oracle XE network ACLs


If you plan to do any network calls from the database, for example to invoke web services, you need to explicitly open the database Network Access Control List (ACL) to enable outgoing traffic to certain hosts/IP addresses and ports.

This is explained in detail in this article. Here is a sample script that can be used as a starting point:
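
Something like this, using the 11g dbms_network_acl_admin package (the ACL name, schema and host below are placeholders to replace with your own):

  begin
    dbms_network_acl_admin.create_acl (
      acl         => 'outgoing_traffic.xml',
      description => 'Allow outgoing network traffic',
      principal   => 'MY_SCHEMA',
      is_grant    => true,
      privilege   => 'connect');
    dbms_network_acl_admin.add_privilege (
      acl       => 'outgoing_traffic.xml',
      principal => 'MY_SCHEMA',
      is_grant  => true,
      privilege => 'resolve');
    dbms_network_acl_admin.assign_acl (
      acl        => 'outgoing_traffic.xml',
      host       => 'www.example.com',
      lower_port => 80,
      upper_port => 80);
    commit;
  end;
  /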


Closing XDB port (8080) in the firewall


Remember to close port 8080 in the firewall, as it is not needed in the default software stack we have set up. Revisit the firewall.sh script we created earlier and make sure that the line that opens port 8080 has been commented out or deleted. Re-run the script if necessary to make sure the change takes effect.
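
Assuming firewall.sh uses plain iptables rules (as is typical on CentOS), the commented-out line would look something like this:

  # port 8080 (XDB) is no longer needed -- Apex is served via Apache on ports 80/443
  # iptables -A INPUT -p tcp --dport 8080 -j ACCEPT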


Optimizing ORDS


The default connection pool settings in the ORDS configuration are too small. You'll have to experiment to see what the best settings are for your workload, but the following settings seem to work well. Edit the /u01/ords/config/ords/conf/apex.xml file and put in the below JDBC settings. Restart Tomcat for the changes to take effect.
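
For example (tune the numbers to your own workload; jdbc.MaxLimit, the maximum number of pooled connections, is the important one):

  <entry key="jdbc.InitialLimit">10</entry>
  <entry key="jdbc.MinLimit">5</entry>
  <entry key="jdbc.MaxLimit">60</entry>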



Cleaning up Tomcat


Let's remove the sample apps from the default Tomcat install, as we won't be needing any of them:
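
For example (using the Tomcat location from earlier in this series; keep anything you actually use):

  cd /usr/share/tomcat7/latest/webapps
  rm -rf docs examples host-manager manager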



Cleaning up Apache


Let's remove the default Apache home page and replace it with our own. Do this by creating an index.html page on your local computer and then copying it to the correct folder on the server:
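
For example (the user and server names are hypothetical; /var/www/html is the default Apache document root on CentOS):

  scp index.html youruser@yourserver.com:/var/www/html/index.html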



Optimizing Apache


Well, not so much optimizing Apache itself as optimizing delivery of website content, by making sure that the content is compressed and cached, which speeds up performance.

Actually, we already did this as part of the Apache installation (in part two of this blog post series). If you look back at the apex.conf file we created, it included configuration for the modules mod_deflate (for compression) and mod_expires (for caching).

You can verify that compression is enabled by looking at the headers returned from the server via Chrome's network tab (look for the Content-Encoding header):



Also, if you enable a two-line display of each request, Chrome will show the uncompressed file size as well as the compressed file size, so you can see how much bandwidth was saved.



You can verify that caching is enabled (for static files such as images and JavaScript) by looking at the Expires header, which tells the browser to keep a copy of the file and not download it again until it has expired:



I usually put an expiration of 7 days on static files. This means that typical business applications that are used daily or several times per week will always have a copy of the static files already downloaded, which speeds up things considerably. And if you do change the files (as part of an application upgrade), then it will take a maximum of one week before all your users have the latest files. (Tip: Users can always force a new download of cached files by holding down the Shift key and reloading the page in the browser.)

Chrome shows files retrieved from the cache as "(from cache)" and no time (0 ms) is spent downloading the file.



Adding robots.txt


Add a robots.txt file to the root of your website to stop search engines from hitting the site and wasting its resources and bandwidth. This assumes that you mostly have private (internal business) Apex applications that require logins; since a search engine will not be able to log in, we might as well tell it not to even bother trying.


Note that the robots.txt file in no way blocks access, so don't try to use it to "secure" content. Here is a sample robots.txt that advises search engines to avoid certain folders:
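
The folder names below assume the conventional Apex/ORDS paths (/ords for the REST gateway, /i for static files); adjust them to your own setup:

  User-agent: *
  Disallow: /ords/
  Disallow: /i/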

If, on the other hand, you have applications with public pages that you would like to have indexed by search engines, then by all means change or skip the robots.txt file altogether. (Or look into the use of sitemaps to direct the search engine towards the important pages of your website.)


Setting up SSL certificates


Any website that requires authentication should be running via HTTPS to encrypt the communication between the client and the server. This means you need to install an SSL certificate.

For development environments, you could use a self-signed certificate, which is free. The disadvantage is that browsers will complain (some more than others) about the self-signed certificate being unsafe because it is "untrusted", but as long as you know who installed the certificate (you!), it's fine and you can ignore the warnings.

Installing mod_ssl and configuring a self-signed certificate is explained in this article. Here is a short version:
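
Something along these lines (the key and certificate file names are placeholders; point the SSLCertificateFile and SSLCertificateKeyFile directives in ssl.conf at wherever you put them):

  # install mod_ssl and generate a self-signed certificate valid for one year
  yum install mod_ssl
  openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/pki/tls/private/server.key \
    -out /etc/pki/tls/certs/server.crt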



Modify the default configuration to disable weak/broken ciphers and protocols, and also to set up the port forwarding of the SSL port (443) to Tomcat/ORDS:
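
In /etc/httpd/conf.d/ssl.conf, that means something like the following, plus the same proxy/forwarding rules used for port 80 in part two, repeated inside the SSL virtual host:

  SSLProtocol all -SSLv2 -SSLv3
  SSLHonorCipherOrder on
  SSLCipherSuite HIGH:!aNULL:!MD5:!RC4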



For production environments you need an SSL certificate issued by a Certificate Authority (CA). There are many CAs and lots of companies offering SSL certificates. I won't go into the details of buying a certificate here, but pick a cheap one -- there is really not much technical difference between the cheap certificates and the very expensive ones.

After you have set up the SSL certificate on your website, you should test that it works correctly by running the SSL test at ssllabs.com. You should aim for nothing less than a "Grade A" result. If you are using a self-signed certificate, the test will complain about the certificate not being trusted, but you still get the SSL settings of your site verified.



Setting up backups


There are two parts to this: backing up the server itself, and backing up the database. The latter is the more important to me (the operating system can always be reinstalled quite quickly, as this series of blog posts shows, but the database of course contains the real value -- your data!).

A cloud server provider such as DigitalOcean offers some built-in mechanisms for backups and snapshots, as explained here.

Also, you could use the rsync command to set up your own backup of the OS files.
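
For example, as a nightly cron job (the backup host and paths are hypothetical):

  rsync -avz --delete /u01 /var/www backup@backuphost:/backups/myserver/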

For backing up the database (schemas), see this blog post I wrote a couple of years ago about using PL/SQL to do Data Pump exports and transfer the files to Amazon's Simple Storage Service (S3).


Monitoring server uptime


Having spent all this effort to set up your website, you want to make sure it is up and available to your users. You should set up one or more automated uptime monitors, using a free service such as Pingdom or UptimeRobot. These services will hit your server from various locations around the world at regular intervals, and send you an email or SMS if the site is unavailable. They will also keep statistics of the average response times and the uptime percentage, which is useful to analyse your server performance over time.




Scaling up (or down) the server


At some point you may want to scale up your server by adding more CPUs, disk space, memory and/or bandwidth. If you are using a cloud service such as DigitalOcean, this is very easy to do. Simply shut down the server, select a new server size from the control panel, and restart the server. This is explained here, and the process will be similar for other virtual server environments.

Note that if you scale down the server and thereby reduce the amount of available memory, you might get an ORA-00845: MEMORY_TARGET not supported on this system error from Oracle and you might need to adjust the operating system memory settings as described in this article. The short version is as follows:
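
Something like this (ORA-00845 means /dev/shm is smaller than MEMORY_TARGET, so give it a size at least as large as that setting):

  # resize /dev/shm on the fly...
  mount -o remount,size=1G /dev/shm
  # ...and make the change permanent by updating the tmpfs line in /etc/fstab:
  # tmpfs  /dev/shm  tmpfs  defaults,size=1G  0 0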


Keeping the server updated

You should keep your server up-to-date with the latest OS patches. You can manually run yum list updates to see what's available, and yum update to download and install all relevant operating system updates. (On DigitalOcean, you will get a warning that says "grubby fatal error: unable to find a suitable template", which has to do with the way Linux kernels are managed on DigitalOcean. You can ignore the warning. If you need to update the kernel, this is done via the server control panel, as described here.)

To set up automatic updates, install the package yum-cron and configure it (described in more detail here).
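
The installation itself is just this (CentOS 6 style service commands; adjust if you are on a systemd-based release):

  yum install yum-cron
  chkconfig yum-cron on
  service yum-cron start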



Next steps

 
That concludes this collection of additional configuration. The next (and final) blog post in this series will deal with performance testing the new server to verify that it can handle an appropriate number of users.


Tuesday, July 7, 2015

Longer names coming to Oracle?

Have you ever, like me, been silently finger-counting the number of characters in a table name, column name or procedure name that you are about to create in the Oracle Database, to make sure it is short enough? It appears those days will soon (?) be at an end...

The current limit on names (identifiers such as table names, column names, procedure names, parameter names, etc.) in Oracle SQL and PL/SQL is 30 characters. Here is what you see when you do a describe of the user_tab_columns dictionary view in Oracle 11g:
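
The interesting part of the output is the length of the name columns (abbreviated here):

  SQL> describe user_tab_columns
   Name                 Null?    Type
   -------------------- -------- ----------------
   TABLE_NAME           NOT NULL VARCHAR2(30)
   COLUMN_NAME          NOT NULL VARCHAR2(30)
   ...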



But what do we see if we do the same on an Oracle 12c database? The database on apex.oracle.com was upgraded to 12c some time ago, so let's try the same there:
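
The same describe now shows (again abbreviated):

  SQL> describe user_tab_columns
   Name                 Null?    Type
   -------------------- -------- ----------------
   TABLE_NAME           NOT NULL VARCHAR2(128)
   COLUMN_NAME          NOT NULL VARCHAR2(128)
   ...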


Interesting...! Looks like the new limit will be 128 characters. All the relevant views such as user_arguments, user_identifiers and user_objects have been likewise altered.

Mind you, as of version 12.1.0.2.0, it's still not possible to actually create objects with long names:
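
A quick test produces something like this:

  SQL> create table this_table_name_is_way_longer_than_thirty_characters (id number);
  create table this_table_name_is_way_longer_than_thirty_characters (id number)
               *
  ERROR at line 1:
  ORA-00972: identifier is too long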



But it appears that the preparations have been done to allow longer names, so I'm guessing this will become possible in the next version (12.2?).

The world moves forward... it will be great to stop worrying about short names (but let's not go crazy and turn our PL/SQL into the equivalent of this silliness!).