Monday, January 30, 2012

The Art of Self-preservation

This post is aimed at all those poor souls who find themselves facing Oracle ADF coming from a PL/SQL background (Forms, APEX or plain old database development).
Normal First Reactions
Despair. Most of us have felt it in the beginning, and it is considered perfectly normal. The feeling of constant despair will diminish slightly over time, but relapses will be frequent.

You will feel as if you have stepped 20-odd years back in time; yes, it has to be this hard to make even the simplest CRUD application.

You will have to clear some fairly high hurdles; yes, you have to learn (or at least have some knowledge of) Java, JSF, Expression Language, Groovy, CSS, HTML, JavaScript and, last but absolutely not least, ADF itself.

Start in the Right End
You (or more likely your superior) bought the sales pitch, and thought you had arrived at a mature, complete development framework specifically suited to leverage your existing Oracle investments. What you got was the biggest jigsaw puzzle ever, with all the corners missing. To even attempt fiddling around with the pieces, you have to have some basic skills.

ADF is a Java framework (although Java developers would call it XML-based). Java is core. You have to learn the stuff. You will not get far without Java. It is that simple. After all, the kids all learn it at school these days, how hard can it be? The answer is, as always: 10% talent, 90% practice. It will take time to get good at it.

As a starting point, I recommend Head First Java. Not only is it educational, it will also help you to realize just how bad all other introductory programming books actually are.

The Next Step
As Java frameworks go, ADF is pretty big. Full JEE stack with lots of bells and whistles, and an abundance of XML files. The good news is that you can achieve some pretty sophisticated behavior by declaration; the bad news is that you have to know where to turn the knobs. There are a lot of knobs.

First you have to learn the grand picture of the framework:
  • The model, aka the meat, or business components
  • The other model. Err? This is the binding layer, or view backend model or some such
  • The controller thingy with task flows
  • The view, as in GUI, and not to be confused with the view object in the first model. Argh...
Grant Ronald will help you on the way with his book Quick Start Guide to Oracle Fusion Development: Oracle JDeveloper and Oracle ADF. Do not let the lengthy title put you off, Mr. Ronald understands where you are coming from, and will gently nudge you in the right direction.

Ready Yet?
Well, that depends. Do you want to create tutorial applications, or the real thing? You should not balk at tutorials, that is not what I am saying. It is just that real business users have never had any clue how best to work with <insert technology/framework here>; they just have needs (for which you are of course grateful, as they give you work).

Users do not want plain CRUD anymore; they want an extremely functional and visually appealing working environment delivered at lightning speed. Business users will push you to the edge: they are both educated within their domain and most likely used to some seriously fast client/server applications.

Nervous?

Relax...
Stay sane just a bit longer. Take a deep breath, there will eventually be a light at the end of the tunnel. Let us hope it is not an oncoming train ;-)

Shay Shmeltzer has a more elaborate (and infinitely better) approach on how to get started with ADF in this blog post.

To be continued...

Thursday, January 26, 2012

Updated Statement of Direction

...Of sorts. Actually, I did not even know I had one until now. But here goes:
My long absence from this blog is down to all the right reasons. Family. Life. You know, the things that matter. And some reasons slightly off. Like ADF. Yup, Oracle Application Development Framework. Just writing it out loud like that is probably going to make Dimitri's blog feed processor choke all over again, and get this blog instantly blacklisted from anything slightly PL/SQL related for all time.

The Old Road
My fascination with APEX has far from diminished, and I hope to revisit extremelyproductivefundevelopingsmoothlookingdatacentricapplications-land in the future. But for now, the "other" framework is more than enough.

The New Direction
Being a consultant will make you do strange things from time to time. This is one of those. Am I a Java programmer? No. Do I know JSF? No. Am I a Groovy programmer? No. Am I at least proficient in Expression Language? No. How about ADF skills? Well, some. I guess. Luckily, Oracle says I only need a week, or at least a day, to become an expert. Hmm. I had to pass on OOW last year, due to a project using ADF... Catch-22 all over again.

I cannot say I am that sorry, actually. Learning new stuff (i.e. banging my head against a wall), and really trying to understand it (i.e. repeatedly), is part of my nature.

It was a professional change long overdue.

The Blog
I will continue to post my progress with ADF (or lack thereof) here, so feel free to remove this blog from your favorite feed reader... Oh, and some APEX stuff will probably still make it here from time to time.

You can go wash your eyes now :D

Saturday, October 22, 2011

Install, Rinse, Repeat

Every time I start installing Oracle on Linux I always end up with the same dark thoughts: Why does it have to be so hard?! And the worst part is: It isn't! But let me be precise about that last statement: It isn't hard if you know how! But in the beginning, you don't, so it is... You get the picture.

On my last endeavor to create a shiny new VirtualBox machine, I decided to take notes. Here is what I aimed at:

Yes, both Apache httpd and Apache Tomcat present. I needed a versatile development VM, with a known (to me) web server and a known (again, to me) web container working with APEX, the APEX Listener and an Oracle 11g XE database.

There will be typos, and there will most probably (of course) be things I have forgotten to write down. On the whole, I hope it will save you some grey hairs (or in my case, loss of grey hairs :-))

Beware, the iframe controls will not be very responsive before the document has completed loading, and the document is quite large. Go for the direct access if it gets too troublesome...

Direct access to the Google doc is here: https://docs.google.com/document/pub?id=19TZ5Yqa-W6edWvPhWtH9d0D3iPEKFkGwckncUDKCaIY

Here goes:

[Embedded Google doc with the installation notes; use the direct link above if it does not load.]

Friday, October 21, 2011

Updated Hotkeys for APEX 4.1

I finally got around to updating the hotkeys userscript for Application Express 4.1. There were some changes in the APEX Builder html that made an update necessary.
Cool image by Josef Stuefer, available under CC BY 2.0
I have to say that after I started using shortcut keys in the Builder, I never looked back. These days I get confused when developing in a browser where the script is not installed.

The script can be downloaded here: http://userscripts.org/scripts/show/115495

Just click the big green Install-button, and you can start using it. The script is tested with Google Chrome, but should work equally well with Firefox.

The keys are the same old:

  • F8: Run Page
  • F9: Apply Changes
  • Alt + F9: Clicks the highlighted button
  • Shift + F9: Cancel
  • Alt + PageDown: Next
  • Alt + PageUp: Previous
  • Ctrl + Shift + S: Shared Components
  • Ctrl + Shift + E: Edit Page
  • Ctrl + Shift + Z: Toggle return to page on/off


If you disagree with the key mappings, then change them! The source is there for the changing :-)

If there are any grievous errors, then do not hesitate to call...

The old hotkeys userscript for APEX 4.0 is still available here: http://userscripts.org/scripts/show/81058

Thursday, October 20, 2011

Multiple File Upload with jQuery and APEX Listener

Did you ever want to do this?


Envious of the fancy multiple image upload functionality I saw on Google+, I set out to do the same. This is more a log of observations than a full recipe. There is no APEX involved here, only the listener. To follow along, you have to have an APEX Listener v1.1 (or higher) up and running against an Oracle 10g (or higher) database. This will not work with Internet Explorer or Opera (no multiple file support); I have only tested in Google Chrome.

jQuery to the Rescue
jQuery has a lot (capital L type of lot) of plugins; one really nice one I stumbled upon was Sebastian Tschan's jQuery-file-upload. Demo is here, download is here, documentation is here.

Setup Database Objects
The images have to go somewhere, so I created an images table in the HR schema:
   --drop table images cascade constraints;
   --drop sequence images_pk_seq;

   create table images (
      image_id        number,
      content_type    varchar2(255),
      filename        varchar2(4000),
      image           blob,
      constraint img_pk primary key (image_id) using index)
   /

   create sequence images_pk_seq;

   create or replace trigger trg_images_bi 
   before insert on images 
   referencing new as new old as old for each row
   declare
      l_new_id number;
   begin
      if :new.image_id is null then
         select images_pk_seq.nextval into l_new_id from dual;
         :new.image_id := l_new_id;
      end if;
   exception
   when others then
      raise;
   end trg_images_bi;
   /

   alter trigger trg_images_bi enable;

   grant all on hr.images to APEX_PUBLIC_USER;
You have to grant at least insert and delete to the user defined in the listener (normally APEX_PUBLIC_USER).
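
If you would rather not grant all as above, a narrower grant along these lines should be enough for the three handlers in this post (just a sketch, assuming the listener connects as APEX_PUBLIC_USER):

   -- Minimal privileges for the GET/POST/DELETE handlers below;
   -- run as HR (or a DBA). Assumes the listener's database user is APEX_PUBLIC_USER.
   grant select, insert, delete on hr.images to apex_public_user;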

Setup Imagehandler Resource Template
This is where the actual data manipulation is going to happen. You can import all resource templates in this example by importing this file: resources.zip.

Log into listenerAdmin (http://myhost/apex/listenerAdmin).
  • Click Resource Template
  • Click Add Resource Template
  • Click Add Handler, and choose method POST
  • The Post handler will handle all uploads and inserts
  • Click Add Parameter, and set the following: status, X-APEX-STATUS-CODE, Header, Out, Integer
  • Click Add Parameter, and set the following: location, X-APEX-FORWARD, Header, Out, String
  • Click Add Parameter, and set the following: file, filename, Header, In, String (This should actually map to the filename, and be inserted with the rest of the data. For some reason, this does not work, and I'm too lazy to find out why...)
  • Add the data as seen below, and click Save



   declare
      l_image_id number;
   begin
      insert into hr.images (content_type, image, filename) values (:contentType, :body, :file) returning image_id into l_image_id;
      commit;
      /* This status will actually not be seen, as we have to redirect to get control of response */
      :status := 201;
      :location := 'imagehandler/response/'||l_image_id;
   end;
  • Click the imagehandler resource template
  • Click Add Handler
  • Choose method DELETE
  • Click Add Parameter, and set the following: imageid, <blank>, URI, IN, Integer
  • Click Add Parameter, and set the following: status, X-APEX-STATUS-CODE, Header, OUT, Integer
  • Add the data as seen below, click Save

   begin
      delete from hr.images where image_id = :imageid; 
      commit;
      :status := 200;
   end;
  • Click the imagehandler resource template
  • Click Add Parameter, and set the following: imageid, <blank>, URI, IN, Integer
  • Click the GET tab, add the code as seen below, and click Save

   select content_type, image
   from   hr.images
   where  image_id = :imageid

Setup Imagehandler Response Resource Template
I have yet to find a way to get a PL/SQL Block resource to respond with a JSON array, only single JSON attributes. It is a shame really; giving control of the response to the developer would have helped a lot. As it stands now, the client will not get a proper response (201 - Created), and we have to add an extra resource template. If there is a way, please do tell; the docs give nada here.

This jQuery plugin requires a JSON array, and in order to produce that, we have to cheat.
  • Click Add Resource Template
  • Click Add Parameter, and set the following: imageid, <blank>, URI, IN, Integer
  • Add the code as seen below, and click Save

   select 'application/json','[{"name":"'||:imageid||'.jpg","size":'||dbms_lob.getlength(img.image)||',"url":"\/apex\/imagehandler\/'||image_id||'","thumbnail_url":"\/apex\/imagehandler\/'||image_id||'","delete_url":"\/apex\/imagehandler/'||image_id||'","delete_type":"DELETE"}]'
   from hr.images img
   where img.image_id = :imageid
Looks a bit Greek, I know. You can look at the server response for more details, but basically you are returning the file name, download URL, thumbnail URL and delete URI/method in a JSON array.
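
For illustration, with a hypothetical image_id of 42 and an image of 51234 bytes, the query above would return a response body along these lines (all on one line):

   [{"name":"42.jpg","size":51234,"url":"\/apex\/imagehandler\/42","thumbnail_url":"\/apex\/imagehandler\/42","delete_url":"\/apex\/imagehandler/42","delete_type":"DELETE"}]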

Setup jQuery Plugin
Nothing to it really: download and unzip the archive somewhere reachable on a web server or app server. In this example I unzipped and renamed the folder so that I could reach example/index.html at http://myhost/js/multiupload/example/index.html.

You have to make one change in the example/index.html to make it work: Point the form action to your image handler:
<form action="/apex/imagehandler/0" method="POST" enctype="multipart/form-data"> 
Notice that we have to supply a value for {imageid} at the end (here 0); parameters are not optional (at least not in this version of the listener), so if it is left out, the request will not match the resource template's URI pattern.

Ready for a test run! 

Some notes on working with the Application Express Listener
  • The documentation is a bit lacking (to say the least)
  • Debugging is hard; a 500 server error does not say much
This is a very simplistic example; there is no security involved, and it is not embedded into an APEX page. Anyhow, as a quick example, it (barely) serves the purpose.


Monday, November 8, 2010

Refresh Report Region and Pagination

When using the refresh action to refresh reports in Oracle APEX 4.0, you will inevitably encounter the problem of lost pagination. The refresh action really does refresh, regardless of which page of the report the user was currently viewing.

It Just Won't Tell Me
APEX knows the current pagination of a report. If you go to another page and back, or just press F5, you will see that APEX knows which page of your report you were on. Where can I get that user data? I have looked at the documented APIs, and the closest thing I can see is apex_application.g_flow_current_min_row. The problem is that it is only available at render time, where it maps to the pg_min_row request value. Otherwise, I only get the value 1 (first row).
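
For example, you can capture it at render time with a before-header computation on a hypothetical hidden item (a sketch only; it does not help once the user has paginated through a partial page refresh):

   -- Computation of type PL/SQL Function Body on a hypothetical hidden item,
   -- e.g. P1_CURRENT_MIN_ROW, computed before header. Outside of rendering
   -- (e.g. after an AJAX pagination) this global just reports 1 (the first row).
   return apex_application.g_flow_current_min_row;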

It knows, but won't tell me!

The Dirty Approach
If there is a will, there is a way. I looked into the APEX tables, especially wwv_flow_data. If you look, you will see a record containing the user session instance and the report region id. The item value will be a colon-separated string starting with the current min row, followed by the current max rows (probably). That looks useful, but you need to keep your head about you here. The value column is a CLOB, and in a pretty special format, so if you attempt to use that data, be prepared to do a rewrite the next time you patch your APEX installation.
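
If you want to peek for yourself, something along these lines will show the stored values for your session (a sketch only; it requires select on the internal table, uses the APEX 4.0 schema name, and :session_id stands in for your APEX session id):

   -- Exploratory sketch only: wwv_flow_data is internal and undocumented,
   -- so both the schema name and the columns may change between versions.
   select item_id, item_value
   from   apex_040000.wwv_flow_data
   where  flow_instance = :session_id;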

If you look at your pagination scheme, you will notice that the pagination link executes a JavaScript function called $a_report_Split (basically a wrapper for the old (but rewritten) $a_report). So if I could use wwv_flow_data to get the second parameter of $a_report_Split right (<pMin>_<pMax>_<pFetched>), then I would be good to go..? For a while, anyway. $a_report_Split is not a documented API JavaScript function, and may also be subject to change in upcoming releases.

The Dirty Solution
Accessing wwv_flow_data directly is not something you want to allow in a normal APEX installation. Creating a very single-minded function in a privileged schema (granted select on wwv_flow_data) and granting execute on that function to my application schema was the best I could come up with.
   create or replace function <privileged schema user>.get_current_pagination (
      p_region_id    in       number
   ) return varchar2 is

      l_ret          varchar2(255);
      l_session_id   number := v('SESSION'); -- setting directly prevents unauthorized use
      l_value_tab    apex_application_global.vc_arr2;

      cursor c_region (
         b_session_id   in   number
      ,  b_region_id    in   number
      ) is
         select item_value
         from   apex_040000.wwv_flow_data
         where  flow_instance = b_session_id
         and    item_id = b_region_id;
   begin
      for r in c_region(l_session_id, p_region_id)
      loop
         l_value_tab := apex_util.string_to_table(r.item_value, ':');
      end loop;

      if l_value_tab.exists(1)
      then
         l_ret := l_value_tab(1) || '_' || l_value_tab(2) || '_' || l_value_tab(2);
      end if;

      return l_ret;
   end;
   /

   grant execute on <privileged schema user>.get_current_pagination to <application schema user>;
That taken care of, the next thing is to get that value and use it in a call to $a_report_Split.

Follow these steps in your APEX builder:
  • Create two hidden page items called P1_REPORT_REGION_ID and P1_REPORT_SPLIT_PARAMS
  • Create a dynamic action to be called when you would like your report to be refreshed
  • Create three true actions:
  • First action is to set P1_REPORT_REGION_ID with the id of the report region:
    Action: Set Value, Type: JavaScript Expression, JavaScript Expression: $('#<report region static id>').closest('div').attr('id').replace(/[^0-9]/g,'')
    (If you set this item value as a computation before header in APEX, you can skip this action)

  • Second action is to set P1_REPORT_SPLIT_PARAMS with values corresponding to the current report page:
    Action: Set Value, Type: PL/SQL Function Body, PL/SQL Function Body: return <privileged schema user>.get_current_pagination(:p1_report_region_id);

  • Third action is to call the $a_report_Split javascript function:
    Action: Execute Javascript Code, Code: $a_report_Split($v('P1_REPORT_REGION_ID'),$v('P1_REPORT_SPLIT_PARAMS'));


    That should be it.

    Naturally, I cannot create a demo page on apex.oracle.com for this, as access to wwv_flow_data is somewhat scarce in that environment (and rightfully so).

    Why Didn't He Just...
    If you know of a better way to do this (and better than refreshing the whole page :-)), then please let me know! And if it is possible through documented APIs, that would be grand.

    If this is the closest I get, and any of the APEX team is reading this, then please consider this an enhancement request :-P

    For future reference: Oracle Application Express 4.0.1.00.03

    Friday, November 5, 2010

    Paranoia 101 - or protecting your investment

    In corporate application development, a versioning and backup strategy is pretty mandatory. But what about all those small personal projects? The only thing you can be really certain of when using a laptop/desktop for your pet projects, is that eventually your machine is going to die. How much work gets lost when it does?

    I use SVN for version control as a rule. I also use the APEX export and splitter utility to capture my local Oracle APEX applications into subversion repositories. Sometimes I even remember to put some of my more important repositories on Amazon S3 for safe storage. The APEX export is (very) loosely based on a blog post by John Scott back in 2006.

    A word of caution: This is just my naive approach, use it at your own risk! Be sure to test that a restore actually works; that is just as important as the actual backing up :-)

    The Start of an "Automated Approach"
    While the manual steps described above will get you there, it is also a bit more work than I appreciate. In this post I will describe a more automated approach to ease your local APEX application backups (and anything else you put in your svn repository as well). A bit of pain to set it up, but once there, it is all plain sailing.

    I do my development mainly in Windows, so all OS specifics described below are set in that context. This approach is also viable for your favorite *nix OS; a tweaking of the batch scripts should do it.

    Subversion
    You need a command line tool to manipulate Subversion repositories; download Subversion from http://subversion.apache.org/packages.html. I currently use the one from CollabNet. It should not matter much which one you choose. There are a number of online resources on how to install svn and create repositories (yes, even for Windows...).

    Add the Subversion bin directory to your PATH environment variable. After installation you should be able to execute svn commands from a cmd window:
    svn help
    svnadmin help

    APEX Export Utility
    This comes with the APEX installation files, and has the ability to extract single APEX applications (or whole instances) from a command line. An APEX splitter tool is also included, which together with a subversion repository enables you to track all the tiny changes in your applications.

    In Oracle APEX 4.0 it accepts more modern Oracle JDBC libraries; in APEX 3.2 you had to locate classes12.jar (which predates even the JDBC libraries that ship with Oracle XE). You can download all JDBC libraries here: http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html

    Since technetwork is constantly shifting about: If the link does not work, try to locate the Software Download Index, and search for JDBC.

    Amazon S3
    You need to set up an Amazon account before you can use Amazon Simple Storage Service (S3). If you have ever ordered books from Amazon, you probably already have an account, but it may be necessary to confirm payment methods.

    Storage prices are not that high (at the moment). My reasoning is that I will suffer more if I don't back up my stuff to safe storage than a tiny monthly bill could ever make me.

    In order for this to work, you have to obtain an Access Key:
    • Log on to http://aws.amazon.com/account/
    • Go to Security Credentials
    • Go to Access Credentials
    • If you do not have any previously generated access keys, then click Create a new Access Key
    • Copy both Access Key ID, and Secret Access Key

    Be sure to keep this information to yourself.

    Create a bucket where you would like your backups to be stored, and take a note of the name.

    The thing with Amazon S3 is its interfaces (REST-style HTTP or SOAP) and storage objects organized in buckets. This basically means that you will require a third party tool (or do some serious programming yourself) to easily use the service.

    JetS3t - Synchronize
    I settled for JetS3t. I will not vouch for this tool in any way. I do not know who develops it, I do not know if it sends my credentials to other people or companies. I basically just hope and pray.

    When credentials can potentially be exploited in a very direct manner to get at your most precious data, or even to conduct illegal activities on your behalf, it is time to draw a deep breath. Maybe uploading your files manually wasn't such a bad idea anyway? As I said, I just hope and pray. I hope "someone" will scan the source code, and "someone" will scream if anomalies appear. I hope.

    I use a part of JetS3t called Synchronize. It has a very easy CLI that suits my need for an automated backup process. Installation is just unzipping, but you have to tweak the properties a bit for it to work.

    These are the changes I made in jets3t\configs\jets3t.properties-file:
    s3service.default-bucket-location=EU (well, because it's the easiest for me)

    And these are the changes in jets3t\configs\synchronize.properties
    accesskey=<your accesskey>
    secretkey=<your secret accesskey>

    You can also point Synchronize to an encrypted file containing your access keys using the --credentials switch.

    7-Zip
    To zip my svn repository dumps, I use 7-Zip. A very easy, straightforward open source zipping tool with a command line interface. Make sure you can reach 7z from a command prompt before proceeding, else you have to modify the batch scripts below and add the path.

    Java
    Make sure you have Java 1.5 or higher installed on your system, and that you can run java -version from a command prompt.

    Batch scripts
    This is where it gets interesting.

    I start off by creating a directory where my backup scripts will reside. It will also serve as a staging area before files are uploaded to Amazon S3. This is the current structure I have:
    • backup
      • Apex (I need this directory for temporary check out/in)
      • oracle (this is where I copy the export/splitter utilities from <apex install dir>/apex/utilities/oracle)
        • apex
          • APEXExport.class
          • APEXExportSplitter.class
      • classes12.jar (jdbc drivers)

    I then place four backup scripts directly into the backup directory.

    backup_apex_app.bat
    This script is the one responsible for checking out the APEX files from svn, extracting the current version of the APEX applications from the database, and checking them into svn again. It takes a parameter (%1%) containing the APEX application id. All other necessary parameters will be set in master.bat below.
    REM ******** Start Backup APEX apps ************
    set APEX_APP_ID=%1%
    
    REM ** svn checkout
    svn checkout file:///%APEX_REPOS_PATH% %APEX_SPLITTER_DIR%
    
    REM ** Export APEX applications
    set CLASSPATH=%CLASSPATH%;%APEX_BACKUP_DIR%\classes12.jar;
    java oracle.apex.APEXExport -db %DB_JDBC_CONN_STR% -user %DB_USER% -password %DB_USER_PWD% -applicationid %APEX_APP_ID%
    java oracle.apex.APEXExportSplitter f%APEX_APP_ID%.sql
    
    REM ** Copy files to svn directory
    xcopy %APEX_BACKUP_DIR%\f%APEX_APP_ID%.sql %APEX_SPLITTER_DIR% /Y
    xcopy %APEX_BACKUP_DIR%\f%APEX_APP_ID% %APEX_SPLITTER_DIR%\f%APEX_APP_ID% /S /E /Y
    
    REM ** Remove superfluous files
    rmdir %APEX_BACKUP_DIR%\f%APEX_APP_ID%\ /s /q
    del %APEX_BACKUP_DIR%\f%APEX_APP_ID%.sql
    
    REM ** Add and check in files to repository
    cd %APEX_SPLITTER_DIR%
    svn add * --force
    svn ci -m "Automated backup"
    cd %APEX_BACKUP_DIR%
    

    backup_svn_repos.bat
    This is the script responsible for dumping the svn repository and zipping the dump file. I suppose this could have been done with HOTCOPY as well. Old habits and such... You can include filters here to extract only the project you are interested in. All necessary parameters will be set from the master.bat script below.
    REM ******** Start dumping and zipping repository ************
    
    REM ** Removing old dump file
    del %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.zip
    
    REM ** Dumping svn repository
    svnadmin dump %APEX_REPOS% > %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.dump
    
    REM ** zipping repository dump
    7z.exe a %APEX_REPOS_DUMPFILE%.zip %APEX_REPOS_DUMPFILE%.dump
    
    REM ** Removing old dump file
    del %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.dump
    

    backup_svn_to_s3.bat
    This script is responsible for sending the zipped repository dump to Amazon S3.
    REM ******** Backup SVN repository to S3 ************
    
    REM ** Synchronizing svn repository dumpfile
    %JETS3T_PATH%\bin\synchronize.bat UP %BUCKET_NAME%/%BUCKET_PATH% %APEX_BACKUP_DIR%\%APEX_REPOS_DUMPFILE%.zip
    

    master.bat
    This is the script which ties it all together. If you want to test the separate scripts, just comment out (rem) the ones you don't want to call. Chances are there are going to be some issues with the parameters on the first run, so testing in small stages is key when debugging (it produces quite a bit of output during a normal run).

    In this script it is necessary for you to adjust all parameters to your own environment. Example values are included in the sample below.

    rem *********** Setting environment parameters *******************
    
    rem ** backup directories
    set APEX_BACKUP_DIR=<your backup dir, like c:\development\backup>
    set APEX_SPLITTER_DIR=%APEX_BACKUP_DIR%\apex
    
    rem ** path to apex application directory inside svn repository
    set APEX_REPOS=<the actual path of your svn repository, like c:\SVN\myapprepos>
    set APEX_REPOS_PATH=<Full path to your APEX folder inside your svn repository, note the forward slashes, like c:/SVN/myapprepos/trunk/apex>
    set APEX_REPOS_DUMPFILE=<name of your repository dump file, like svn_repos, without post-fix/file type>
    
    rem ** connect string format host:port:sid
    set DB_JDBC_CONN_STR=<jdbc connect string, like localhost:1521:orcl3>
    set DB_USER=<username, like scott>
    set DB_USER_PWD=<password, like tiger>
    
    rem ** S3
    set JETS3T_PATH=<full path to your jets3t unzip dir, like C:\development\backup\jets3t>
    set BUCKET_NAME=<name of the Amazon S3 bucket you created earlier, like mysafebackup.backup.svn>
    set BUCKET_PATH=<name of path inside bucket, like svn_repos>
    
    rem ************ Calling batch files ************************
    call backup_apex_app.bat <application id>
    call backup_apex_app.bat <other application id>
    call backup_svn_repos.bat
    call backup_svn_to_s3.bat
    
    pause
    

    Good to Go!?!
    The only thing remaining for fully automated backups is to schedule a regular execution of master.bat through the Windows Task Scheduler.

    I know I felt a whole deal better after I started using this backup strategy :-)