Joined: 25 Jun 2016
|Posted: Thu Aug 24, 2017 1:44 pm Post subject: PostgreSQL - Using With VDO - Overview
|We are using the PostgreSQL v9.5.3-1 x64 server on Windows 10, with the x64 ODBC driver offered during installation. The download is available here: https://www.enterprisedb.com/downloads/postgres-postgresql-downloads#windows
We programmatically create the ODBC DSN entry using all defaults, instead of forcing the user to do it manually from the Windows Control Panel applet. Admin privileges are required because the registry entries are written directly. The database name, server hostname, port number, and user name and password are required: in short, all of the connection credentials.
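For anyone curious what "directly written registry entries" amounts to, a system DSN follows the standard ODBC.INI layout. The fragment below is a hypothetical sketch only: the driver name, DLL path, and values must match your actual psqlODBC installation.

```
Windows Registry Editor Version 5.00

; Hypothetical sketch of a system DSN named "PostgreSQL30".
; The Driver path must point at your installed psqlODBC DLL.
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\PostgreSQL30]
"Driver"="C:\\Program Files\\psqlODBC\\0905\\bin\\psqlodbc35w.dll"
"Servername"="localhost"
"Port"="5432"
"Database"="myDBname"

; Register the DSN name so it appears in the ODBC applet.
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources]
"PostgreSQL30"="PostgreSQL Unicode(x64)"
```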
We create the "empty" database in code using external calls to psql and some SQL scripts.
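The script fed to psql is not shown here, but a minimal sketch looks something like this (the database and owner names are placeholders; psql is assumed to be on the PATH):

```sql
-- Hypothetical sketch of the create script, run via e.g.:
--   psql -h localhost -p 5432 -U myDbOwnerName -d postgres -f create_db.sql
-- Must be run as a role with CREATEDB rights.
CREATE DATABASE "myDBname" OWNER "myDbOwnerName" ENCODING 'UTF8';
```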
We have about 20 tables defined for our database at this time. I wrote a little program that generates 99.9% of the database interface code VDO needs to operate the strongly typed VirtualRecordsets and VirtualRecords, by querying the schema for table names, column names, and data types. The queries that enable this are PostgreSQL-specific; the rest of the code is fairly generic. This took two part-time days and saved me a few weeks of tedious, boring work, which I refuse to do now that I'm semi-retired. This is what computers are for.
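My exact schema queries aren't reproduced here, but a minimal sketch of the kind of PostgreSQL query involved, using the standard information_schema, would be:

```sql
-- Enumerate table names, column names, and data types
-- for user tables in the public schema.
SELECT c.table_name, c.column_name, c.data_type, c.is_nullable
FROM information_schema.columns AS c
JOIN information_schema.tables AS t
  ON t.table_schema = c.table_schema
 AND t.table_name   = c.table_name
WHERE c.table_schema = 'public'
  AND t.table_type   = 'BASE TABLE'
ORDER BY c.table_name, c.ordinal_position;
```

A code generator walks the result set and emits one strongly typed class per table.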
I can make some of this code available for the support community if there is any interest.
The VirtualConnection has these non-default properties:
this.virtualConnection1.ConnectionString = "DSN=PostgreSQL30;Database=myDBname;";
this.virtualConnection1.CursorLocation = ADODB.CursorLocationEnum.adUseClient;
this.virtualConnection1.Password = "myDbOwnerPwd";
this.virtualConnection1.UserID = "myDbOwnerName";
The PostgreSQL v9.5.0300 ODBC driver REQUIRES client-side cursors to work properly.
I stress-tested the VDO/VT4 PostgreSQL system to see what it could and couldn't do. I added 5 million records to one of my tables. If you entertain large datasets like this, forget about 32-bit.
We consider this a "worst case" scenario in terms of speed and efficiency. Records were added at a rate of 100 to 400 per second.
You should bracket your database record adds, deletes, and updates with transaction control. If you don't, each statement commits individually (autocommit), so a program crash mid-operation leaves a partially applied batch, and the number of records returned to a subsequent recordset query may not be what you expect. Good luck with that.
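In SQL terms the bracketing looks like this (table and column names are made up for illustration); from code, ADO exposes the same thing through BeginTrans/CommitTrans/RollbackTrans on the connection object:

```sql
BEGIN;  -- start an explicit transaction
INSERT INTO my_table (id, name) VALUES (1, 'first');
INSERT INTO my_table (id, name) VALUES (2, 'second');
-- ...thousands more statements...
COMMIT; -- all rows become visible atomically;
        -- a crash before this point applies none of them
```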
Keep this in perspective --- these are figures for 5 MILLION records in a recordset with client-side cursors. The test machine is a gaming laptop: 2.4-3.4 GHz, 16 GB memory.
Our client-side cursors required 2 GB of memory to support this recordset. The recordset query executed and the VirtualTree loaded in 40 seconds, with all 5 million records accessible. An extra 2 GB was allocated to support the reverse sort triggered by clicking the VT header column. It takes 11 seconds to travel directly from tree top to tree bottom (Ctrl-Home, Ctrl-End), or to "page" the scroll up or down the recordset. It takes 90 seconds to perform the reverse sort and display it the first time; additional reverse sorts take 60 seconds, and reversing back to ascending sort takes 10 seconds. Obviously some caching is going on here.
You can probably scale these timing figures proportionally for smaller recordsets. For normal daily usage, I expect UI delays to be imperceptible.
Just FYI. Hope this helps someone out there.
-- Mike R.