8. DBA Issues


8.1 Performance - make it go faster!

From: alan@po.den.mmc.com (Alan Popiel), regarding a performance problem with a 30,000 row indexed table:

30,000 is not really very many rows; your client should not be having problems with such a table size unless the rows themselves are huge, or the machine itself is small and weak. However, the following should help:

  1. Run "UPDATE STATISTICS FOR TABLE tablename". This is the easiest, and it may help performance. [see 6.10 for why]
  2. DROP INDEX idxname; CREATE INDEX idxname ON tablename ( cola, colb, ... ); Dropping and recreating the index will improve index contiguity, subject to operating system / file system limitations. Medium cost to do; must be DBA or have index privilege on "tablename".
  3. ALTER INDEX idxname TO CLUSTER; This will recreate the table and order the rows by index value. It forces logical contiguity, and improves physical contiguity, again subject to OS / FS limitations. This has the highest cost to do, but should provide the most benefit. However, you can only have one cluster index per table, and reordering the table rows may impact some OTHER query using some other index. You must have DBA privilege or alter table privilege on "tablename" to do this.

    Jerry M. Denman (jerry@sherwood.com) adds: Be aware that you must have enough space to create a duplicate copy of the table when running the cluster index. It copies the table to a temporary table when ordering the rows and then drops the original and renames the temporary table to the same name as the original. Also, if the table has a large number of insert/delete statements, then the benefits of clustering are soon gone.


8.2 Should I use Online DSA mirroring?

Informix 6.0 Online introduces mirroring, which allows replication of data on multiple disks. If one disk dies Online can continue to run using a remaining mirrored disk.

Some operating systems also provide mirroring, and so does some hardware, so which is best?

Joe Lumbley succinctly states:

Hardware mirroring is usually best, as it's faster. Next comes operating system mirroring (HP-UX, in this case). Last comes OnLine mirroring. OnLine mirroring contains a little bit of logic regarding how to handle chunks that are down, but with any luck the HP-UX or hardware mirroring will never let OnLine see that situation anyway.

Jonathan Leffler explains further:

I simply haven't yet heard a convincing explanation of why our software can do it better than the O/S can.

It does, of course, depend on the mirroring support from the O/S. On a multi-CPU machine where the O/S I/Os are handled by a single CPU under the native mirroring system, DSA could have an advantage if it has multiple threads handling the writes in parallel. But I'm not convinced that O/S mirroring is that bad. It also depends on the intelligence or otherwise of the disk controllers. Etc.

Unless the O/S has screwed up badly, I don't think that the Informix mirroring provides much (if any) advantage.

I don't have any concrete evidence either way, and it is very difficult to determine experimentally. I know that there were at one time plans to spend a day or two assessing the effect of LVMs (logical volume managers) on the performance of OnLine. I also know that it didn't happen -- I hope in part because I pointed out to the person who was asked to do the test that controlling the parameters of the test was going to be difficult. It would have needed considerably more than a day or two simply to work out what to test and how, independently of the time taken to create and load suitable data sets (mainly large ones) under multiple different configurations: differing amounts of RAID-ness, different numbers of controllers, different places where the mirroring occurs, different numbers of CPUs, different numbers of AIO threads, striping, etc.

So, yes, I think maybe you are being led astray by listening to Informix marketing talk.

One of the claimed advantages for DSA disk handling is that it can selectively read from either the primary or the secondary of a mirrored pair -- so can the O/S mirrored systems, and here is evidence that at least one does precisely that:

From: johnbr@atl.hp.com (John Bria)

HP's Logical Volume Manager will allow you to "stripe" non-array disks by creating extents on a rotating basis across multiple drives. This may/may not be advantageous as you develop a fragmentation strategy.

If you use HP-UX mirroring, reads will be routed to the mirror copy if the primary is busy. Under heavy disk loads, this is very advantageous.


8.3 Questions about NFS mounted databases

Because people regularly ask about using the Network File System (NFS), here's an edited version of James Holthaus' summary:
  1. Can we run applications (a C program, shell scripts, perl scripts, etc) that are on the remote computer? What kind of performance can be expected?
  2. What are the software requirements to get NFS to work? We have TCP/IP, but not ODT.
  3. How well does Informix work with NFS? Do we need to get I-Net or some such, or can we just mount the remote file system and pretend it is local?
  4. How difficult to administer is NFS?
  5. On the whole, what is your opinion of NFS?

8.4 How can I run ontape/tbtape from a cron job?

echo '\n0' | tbtape -s 1 | head -100

The head cuts things off when tbtape gets into the infinite "insert a tape and press any key to continue" thing. I have found that normal output is well less than 100 lines. This trick also seems to kill off the loose tbtape process too - but I'm not sure why. Of course, YMMV.

Martin Andrews (andrewm@ccfadm.eeg.ccf.org)

Note that recent releases of Informix engines have renamed the "tb*" utilities to "on*"
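
[With the renamed utilities the same trick can be wrapped in a script for cron; this is only a sketch, and the paths, server name and 100-line assumption below are examples - Ed]

    #!/bin/sh
    # sketch: unattended level-0 archive via ontape (paths/names are examples)
    INFORMIXDIR=/usr/informix;   export INFORMIXDIR
    ONCONFIG=onconfig;           export ONCONFIG
    INFORMIXSERVER=my_server;    export INFORMIXSERVER
    PATH=$INFORMIXDIR/bin:$PATH; export PATH

    echo '\n0' | ontape -s 2>&1 | head -100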


8.5 Calculating extent sizes

A sweet couple of shells to perform this task appear in APPENDIX K Calculating extent sizes

8.6 How can I list a table's dbspace?

Here are some queries which might be of use:
   
   SELECT TRUNC(partnum/16777216) dbspace,
          COUNT(*) tables, SUM(nrows) tot_rows,
          SUM(nrows*rowsize) bytes
     FROM systables
    WHERE tabtype = 'T'
    GROUP BY 1
    ORDER BY 1;
If you add the 'dbspaces' table to your database and load it with dbspace names taken from tbstat -D output as I have done, then you can use:
   
   SELECT dbs_name[1,12] dbspace,
          COUNT(*) tables, SUM(nrows) tot_rows,
          SUM(nrows*rowsize) bytes
     FROM systables, dbspaces 
    WHERE tabtype = 'T'
      AND dbs_no = trunc(partnum/16777216)  
    GROUP BY 1
    ORDER BY 1;
Sample output:
   
   dbspace           tables         tot_rows            bytes 
   
   mcs_aaaaa             28               51             3715
   mcs_catalog           22             2695           114810
   mcs_eeeee             25              224            45446
   mcs_fffff             32             1412           201445
   mcs_mmmmm             35              165           262599
   mcs_wwwww             28              449            79385
   
   ("bytes" is data bytes, and does not include indexes and other overhead.)
I separated the mcs system catalog files from the data tables by doing: create database mcs in mcs_catalog;

I created the data tables in the other five dbspaces by doing:

   create table whatever ( ... ) in mcs_xxxxx;
Other than the trick to get the dbspace number (thanks to Joe Glidden and others), this is all pretty straightforward stuff. However, I hope my posting it may save someone some time.

alan@po.den.mmc.com (Alan Popiel)


8.7 How should I use the "informix" user & group?

cpilot@teleport.com (Richard Shannon) writes:
  1. Do you recommend using the informix account to manage the database, i.e. granting/revoking permissions, adding/dropping tables, dbload, etc.?
  2. Would it be better to use a separate administrative account for this? Why?
  3. What are the pros/cons/issues when using Informix as owner and group of all database applications and having all users being members of that group?

We use the informix account ONLY for dba tasks, and not for all of those. You should have an informix account and informix group, both of these distinct from all other accounts. No one but user informix in group informix. If you're using SE, then informix doesn't even have to have a password, as you can do all you need to from the root account. In OnLine, informix has to do some things from the command line.

We have an "application" account which owns all the non-system tables. It sets permissions on those tables, and owns all the source, data, and executable directories and code. This keeps the functions of the data administrator and the engine administrator separate. We have a distinct dba function (and a person to do it.) Even if you don't yet, you can benefit from planning to be big if you EVER MIGHT get big.

There are grave security issues at stake when you start deviating from these guidelines. Having users in the informix group gives them the power to do things inside the INFORMIXDIR that you don't want. There are no pros worth the risks of having all users members of group informix.

We wrote a set of scripts which handle the tasks of setting, showing, revoking permissions for a list of tables for a given user. They are just loops which echo "revoke all on table tabname for user username" (or whatever) to isql. You can also manually diddle the systabauth table (as informix) to get user permissions set, but that's pretty manly.

Clem Akins (cwakins@leia.alloys.rmc.com)
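
[A minimal sketch of the kind of loop Clem describes; the database, table and user names are invented, and dbaccess stands in for isql - Ed]

    #!/bin/sh
    # revoke all privileges on a list of tables from one user (a sketch)
    DB=appdb
    USER=jbloggs
    for TAB in customer orders invoice
    do
        echo "revoke all on $TAB from $USER;"
    done | dbaccess $DB -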


8.8 How can I measure CPU time?

Ignacio Bisso (ignacio@sunatxx.gob.pe):

Usually, in Unix, the command used to measure CPU time for a process is:

$ /bin/time test.4ge
real        9.0
user        1.6
sys         1.4
In this case, 1.6 + 1.4 is the total CPU time in seconds consumed by the program test.4ge. The 'real' time is not important.

However, in an Informix environment, the 'time' command is hiding something very important: the CPU time of the sqlturbo process. We know that in some situations (i.e. when we use a lot of stored procedures) the sqlturbo CPU time may be greater than the application CPU time.

In order to have a reliable CPU time we have to add the sqlturbo time to the application's time. Moreover, if we use a RUN statement in the application, we have to add the CPU time of the commands triggered by the RUN statement.

Some OSes (SCO Unix, Unix SVR4) have a command called 'timex'. This command can obtain the CPU time of a process and its children. Before we use the 'timex' command we have to turn accounting on.

To turn on the accounting, login as root and run:

 OS         COMMAND
 --------   -------------------------------------
 SCO Unix   $ /usr/lib/acct/accton /usr/adm/pacct
 Unix SVR4  $ /usr/lib/acct/accton /var/adm/pacct
 Other OSes $ man acct
When the accounting is on, you can execute the command 'timex'. Use the -p option (in order to obtain the child processes' CPU time).
$ timex -p fglgo test.4gi
                                                         ******** 
COMMAND                      START    END          REAL  * CPU  * CHARS  BLOCKS
NAME       USER     TTYNAME  TIME     TIME       (SECS)  *(SECS)*TRNSFD   R/W
fglgo      ignacio  ttyp0    17:35:05 17:35:05     0.59  * 3.02 * 26256     9
#sqlturbo  ignacio  ttyp0    17:35:05 17:35:05     0.56  * 4.26 * 32248    12
                                                         ********
'timex' output has two rows: one for the application and one for the sqlturbo. If you use a RUN statement in the application, then the 'timex' output will have a third row for the command triggered by the RUN statement.

In order to obtain the total CPU time used, you must add the times in the CPU column.

Total_CPU_time = fglgo_time + sqlturbo_time = 3.02 secs + 4.26 secs ==>

Total_CPU_time = 7.28 secs

I use 'timex' when I want to compare the execution time of two versions of the same program. If you want to optimize a 4GL program (maybe using an insert cursor, or putting in some stored procedures) 'timex' will tell you which version is better. Moreover, when I want to minimize the traffic on the pipe, I use the CHARS TRNSFD column to see how much traffic passes between the application and the sqlturbo.


8.9 Using ISQL/DBACCESS to optimize SELECTs

From rizzo@fourgee.demon.co.uk Sun Apr 2 06:38:28 1995

When you are using ISQL or DBACCESS to design/optimize your SELECTs, make sure that when you run the select you choose OUTPUT/APPEND FILE /dev/null. Running ISQL/DBACCESS straight to the screen produces misleading times because ISQL/DBACCESS only selects a screenful at a time.

I use OUTPUT/APPEND /dev/null since this produces the minimal amount of overhead (no writing to disk since it all disappears down the plughole).
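
[The menu option has a direct SQL equivalent, so the same trick can be used in a script; the table name here is hypothetical - Ed]

    OUTPUT TO "/dev/null" WITHOUT HEADINGS
    SELECT * FROM big_table;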


8.10 dbexported & imported - now runs like a dog!

From: Tom Hogarty (hogarty.pad@sni.de)

QUESTION:

After a full dbexport/dbimport Online's performance has degraded *significantly* - why? (Version 4.10.UE1)

ANSWER:

As an OnLine database grows, the number of extents will increase (fragmentation) and consequently the amount of time required by OnLine to access data increases. In later OnLine versions (I think 6.0+) this can be turned to some advantage (parallel queries) but in your version it just makes the system run sloooooowwww!! The problem is exaggerated when the system catalog tables (systables, syscolumns...) are affected.

The solution to fragmentation is to dbexport/dbimport the database and remember to control the <dbname>.sql file that dbexport produces so that your newly created/dbimported database has adequate next sizes defined for, in particular, the system catalog tables.

You can define where this control file goes by calling dbex/import using the -f <filename> option. If you're exporting to disk the file appears in your <dbname>.exp directory. Edit this file so that before any create table commands you...

       ALTER TABLE systables   MODIFY NEXT SIZE <nnn> ;
       ALTER TABLE syscolumns  MODIFY NEXT SIZE <nnn> ;
       ALTER TABLE systabauth...
Ascertain the required value for <nnn> by using the tbcheck -pT command for any of these sys... tables that appear in the tbcheck -ce report.
       tbcheck -ce ( reports on tables having more than 8 extents )
       tbcheck -pT <dbname>:<owner>.<tabname>  to get Tablespace usage.
Dbexport/import on your version, however, have a drawback: the full schema is not exported. What's missing is locking information and extent information. This is easily overcome; in fact I have a script somewhere that automatically inserts this info into a <dbname>.sql file. I need to make it pretty and test it and then I will post it to the group. [see APPENDIX K Calculating extent sizes - Ed]

Running update statistics is always the first recommendation for improving performance. You could also check whether any previous row-level locking is still intact, run tbcheck -ce, and RTFM (the manuals for 4.x may not be brilliant but, in general, the later ones are). At this point I should take the chance to recommend the excellent and concise...

Informix OnLine Performance Tuning, by Elizabeth Suto. Prentice Hall, ISBN 0-13-124322-5.

8.11 How do I find which dbspace a database resides in?

Disclaimer: Of course, with fragmentation and CREATE TABLE...IN DBSPACE... statements this may not be completely true, but anyway:

Chuck Ludwigsen (cludwigs@hotmail.com) writes:

Try this query against the sysmaster database:

select b.dbsname, a.name
from sysdbspaces a, systabnames b
where a.dbsnum= partdbsnum(b.partnum)
and b.tabname="systables" and b.dbsname="yourdbname"

Just substitute for "yourdbname".

spal@scotch.den.csci.csc.com (Sujit Pal) writes:

On 7.x instances, try:

	SELECT DBINFO("DBSPACE", partnum)
	FROM systables
	WHERE tabname = "systables"

On 5.x instances, try:

	SELECT HEX(partnum) FROM systables
	WHERE tabname = "systables"

In the first 2 characters of the output (after the 0x) you will see the chunk number. Do a tbstat -d to get the dbspace name.


8.12 How do I find out if the network is causing performance problems?

Chao Y. Din (cdin@csc.com) writes:

We did experience a lot of performance problems in the past. Almost all of them turned out to be network problems: either network hardware problems or network configuration problems. One of our performance problems was resolved by installing a second CPU on a Sparc 20 server. One way to CONFIRM your network problem is to enter the following two commands on the server side:

arp -a
netstat -r

If you don't get an instantaneous response, you MUST have a network problem. It sounds simple but it took us a long time to learn the above RULE. When the result from arp -a pauses for a while and then displays an IP address, you can be sure that is the troublesome node (I don't mean that particular node itself has problems.) When netstat -r pauses, you may have a router problem (we left a 224.0.0.0 multicast setting in one of the startup scripts for performance reasons). You should also check out the "snoop" command.

If you are serious about performance tuning, please listen to me this time: DOWNLOAD SymBEL FROM THE SUN WWW SITE. It is free and suggests ways to improve server performance. You will never regret having this tool.


8.13 What undocumented Online things are there?

djw@smooth1.demon.co.uk (David Williams) writes:

DISCLAIMER: THESE MAY WELL CRASH YOUR ENGINE AND LEAVE YOU WITH CORRUPTION DISCLAIMER: THESE MAY WELL CRASH YOUR ENGINE AND LEAVE YOU WITH CORRUPTION DISCLAIMER: THESE MAY WELL CRASH YOUR ENGINE AND LEAVE YOU WITH CORRUPTION

Got your attention!! These are not things to try unless you have a lot of experience and confidence with Online and can be sure you can restore your data if something fails...Don't say I didn't warn you..

PC_POOLSIZE - An ONCONFIG parameter which sets the size of the stored procedure cache which is otherwise not configurable. Warning: Changing it can result in "Assert Failed: Internal Error - Segmentation Violation."

PC_HASHSIZE - Related to PC_POOLSIZE and must be a prime number

On 13th September 2000 kagel@bloomberg.net (Art S. Kagel) wrote:-

PC_HASHSIZE is related to PC_POOLSIZE, the former is the number of hash buckets and the latter is the number of entries permitted in each bucket.

Similar entries for DS_HASHSIZE & DS_POOLSIZE should be added to control the size of the data distribution cache if needed.

End of Art's response

QSTATS - An ONCONFIG parameter which can be set to 1 to enable queue statistics. Used to enable onstat -g qst.

WSTATS - An ONCONFIG parameter which can be set to 1 to enable wait statistics. Used to enable onstat -g wst.

In Online 7.2x, run onmode -i 208 so that when an application encounters error -208, Online will panic and shut down, leaving any shared memory dumps you requested.

Run online with oninit -v to get more debug messages produced as online starts up.

Environment variables to set before starting Online:-

KAIOOFF=1 - Disable KAIO.

KAIOON=1 - Enables KAIO on some platforms.

ASFDEBUG=1 - Stop the engine from shutting down if it panics. Instead it 'freezes' and onstat can still be used for analysis.

AIOTRACE=1 - Writes additional message to online.log

jguzman@transre.com (Juan R. Guzman) writes:-

Another one is if you are using DR and want to see what's going on between servers set DR_TRACE=1 and pipe oninit to a file.

pbonting@worldaccess.nl (Paul Bonting) writes:-

charlie.muntz@vallley.net (Charlie Muntz) wrote:
<Online 7.1 on a SPARC/1000; Solaris 2.4
<Trying to create an index on an integer field in a fragmented table 
<(fragged on the int); If the table is approx 20K rows, engine wants about 
<50 TBLSPACES during index creation;
<If the table is 200K rows, the engine wants about 400 TBLSPACES;
<We need 15 million rows;
<Engine initializes with TBLSPACES set at 500, but not with TBLSPACES set at 
<600.  I have not tried any larger values.
<The manual says nothing about (temporary) need for lots of TBLSPACES when
<creating an index on a fragged table.
<Any suggestions?

We're testing Online 7.1 on a Stratus FTX 2.3 platform. The sizes of our tables range between 1.5 and 30 million rows.

We encountered the same problem in the case of a fragmented table and a similar problem with an unfragmented table (create index cannot allocate shared memory).

For both problems we use the following workaround. We set the undocumented environment-variable NOSORTINDEX to true. (export NOSORTINDEX=true)

Both problems (index creation on fragmented and unfragmented tables) have been reported to our Informix support contact. We were informed that both problems have informix bugnumbers.


kagel@bloomberg.net (Art S. Kagel) writes:

If you are still having buffer wait problems you can try increasing BUFFERS, and then you can use the undocumented ONCONFIG parameter LRUPRIORITY to alter how clients select LRU queues to reduce contention.

My only instructions from Informix on using these is that their use MAY help relieve LRU and buffer contention on very busy systems with a number of active sessions that approximate the number of LRUs and MAY also help with large numbers of users. I was told to "PLAY" with them and see what happens based on the descriptions.

These values can be bitwise OR'd to create values in the range 0-31 which can control all or part of the LRU selection and wait policies. A value of 0x11 would cause each session to initially always select the same LRU queue, which MAY eliminate LRU contention when the number of sessions is not significantly larger than LRUS. If sessions < LRUS it will in effect make each LRU private for a session or two. The even values (2,4,8) will cause a session to remain in wait on the selected LRU and not rehash to try to find a less hotly contended one.

On 16th October 2000 aef@mfs.misys.co.uk (Tony Flaherty) wrote:-

I was told by an engineer from Informix a day or two ago that an SHMBASE of 0x0 results in unreadable shared memory dumps for HP-UX users, as all of the internal pointers are screwed up in the dump file.

On 27th October 2000 bogdan.neagu@alcatel.com (Bogdan Neagu) wrote:-

Try onstat -g dis

8.14 How do I enable KAIO?

KAIO (Kernel Asynchronous I/O) is an Online 7.x feature and will be enabled
on most platforms if:

a) Your platform supports it (check your release notes under
   $INFORMIXDIR/release somewhere)

b) You are using raw partitions.

On certain platforms you have to perform additional steps:-

HP-UX 

On 21st Dec 1997 ecstahl@aol.com (Eric Stahl) wrote:-

The release notes should cover...

1- Informix down
2- Install the special device driver:
sam; kernel cfg; drivers; select asyncdsk; actions; add driver to kernel;
create a new kernel; move kernel into place and continue shutdown [reboot]
now...
3- Create the async special device file:
mknod /dev/async c 101 1
4- Temporarily set the KAIO environment variable. It will need to be in a
startup script:
export KAIOON=1
5- Start Informix
6- Check for KAIO threads:
onstat -g ath

(Thanks to K.Gotthardt@em.uni-frankfurt.de (Klaus Gotthardt) for this) :-

AIX 4.1.5

  /usr/sbin/mkdev -l aio0

You have to do this every time the computer boots or simply place
an entry in the /etc/inittab file, like this:

kaio:2:wait:/usr/sbin/mkdev -l aio0

8.15 How do I do a warm restore of a dbspace using OnBar?

danny@uk.ibm.com (Daniel Williams) writes:-

Carlos (and anyone else who's interested)

Here's the procedure for performing a warm restore of a DBSpace using ON-Bar. It works - I have used it extensively in testing.

  1. Corrupt the raw or cooked file space - for testing only, obviously not in real life. (I used dd - note you have to use an input file to dd that is at least 10k in size to ensure you overwrite enough of the raw space.)
  2. export ARCHIVE_TEST=true
  3. Run onstat -d to determine the chunk number(s) that correspond to the DBSpace you wish to mark as down.
  4. Run onmode -o <chunk number> for each chunk that is to be marked as down.
  5. Run onmode -O to override down dbspaces blocking checkpoints - answer YES to the message "This will render any dbspaces which have incurred disabling IO errors unusable and require them to be restored from an archive. Do you wish to continue (y/n)?"
  6. onbar -r <dbspace name>

8.16 Which queries are not parallelized?

ennis@ssax.com (Bill Ennis) writes:-

Wow, what a coincidence! I just read a page on this in the Performance Tuning Training manual.

Queries NOT parallelized:

8.17 How do I check my backups?

Informix has a program called archecker which is available upon request. It was put together by Advanced Support and will (hopefully) be made generally available.

kagel@bloomberg.com (Art S. Kagel) writes:-

Contact Informix's Advanced Technology Group, or have French sales do so. They can get it for you. The best solution, however, is to upgrade to versions 5.08 or 7.14 and higher. The bug, which apparently had existed for years in all earlier versions, caused scattered pages not to be archived if there was sufficiently heavy update activity on the engine during the backup. This was finally fixed in 5.08+ and 7.14+.

For logical log tapes: On 11th Jun 1997 J.Clutterbuck wrote a logical log tape validator for Online 5.x. This is written in perl5 and available as APPENDIX P Logical Log Tape Validator for Online 5.x

8.18 How do I find out how much of my logical logs are in use?

On Dec 3 mdstock@informix.com (Mark D. Stock) wrote:

You can get this information from SMI as follows:

        SELECT  uniqid, (used/size*100)
        FROM    sysmaster:syslogs
        WHERE   uniqid >=       (
                                SELECT  MIN(tx_loguniq)
                                FROM    sysmaster:systrans
                                WHERE   tx_loguniq > 0
                                )
        UNION
        SELECT  uniqid, 0.00
        FROM    sysmaster:syslogs
        WHERE   uniqid <        (
                                SELECT  MIN(tx_loguniq)
                                FROM    sysmaster:systrans
                                WHERE   tx_loguniq > 0
                                )

8.19 Do I need a storage manager to run OnBar?

On 11th Nov 1998 jayallen@auragen.com (Jay Allen) wrote:-

I just double-checked this in INFORMIX UNLEASHED and it says that the NT version of OWS (or whatever the hell it's called now) comes bundled with a storage manager. onbar on NT uses this storage manager as its default XBSA program.

On 8th Jan 1999 djw@smooth1.demon.co.uk (David Williams) wrote:-

Note that Online 7.30.UC5 comes with ISM (Informix Storage Manager) which can handle simple tape devices (i.e. not jukeboxes).

On 27th Feb 1998 clem@informix.com (Clem Akins) wrote :-

The Informix Dynamic Server v 7.3 (no longer called OnLine) will have a version of Legato's NetWorker Server product bundled. This product has an interface called Informix Storage Manager (ISM) that works either via a GUI or a command line.

ISM comes with several restrictions, designed to keep it from competing with the full Legato product.

From the experience I've had with ISM (not extensive) the product looks like a really good archive system. It is easy to use and well suited for installations that want an out-of-the-box solution. Applications that are more sophisticated will require a full-featured suite of products, such as Legato or OmniBack as well as the hardware to support them.

If you already have Legato installed, then *do not* install the ISM. It is a subset of Legato, and will overwrite some important configuration and backup history files. ISM contains an interface onto the Legato product and still uses onbar for the actual tape management, via the XBSA interface.

On 27th Feb 1998 dbaresrc@xmission.xmission.com (Carlton Doe) wrote :-

I have been trying to use Veritas' NetBackup and it has been a complete failure. Veritas has been trying to figure out why but . . . . .

In the 7.3 releases (NT and Unix), Legato Light will be bundled and called the Informix Storage Manager (ISM). Legato, as a product, has the highest amount of integration with Informix, their light product notwithstanding. Following behind are ADSM and OmniBack.

Informix also does not "certify" OnBar products, which may be why Veritas' product is not working correctly. They only "certify" that the data delivered to the XBSA layer is as it should be and that they (Informix) correctly process the required return messages from the Storage Management product.

What the SM product does once it receives the data is out of Informix's hands. They cannot and do not test the SM product as they (Informix) do not have the SM's source code. In other words: Buyer Beware.

You need to test like crazy including the basic stuff like "onbar -bw; oninit -i; onbar -rw" and see if the instance comes back.

8.20 How do I track which sessions are doing the most I/O?

On 22nd Dec 1997 jeffl@etcscan.com (Jeff Lanham) wrote:-

I got this one from Informix Support. It's slightly more helpful.


      SELECT p.sid, username, tty, seqscans, dsksorts, total_sorts
        FROM syssesprof p, syssessions s
       WHERE p.sid = s.sid;
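
[A variation that ranks sessions by raw I/O calls may be more direct for this question. This is a sketch - the syssesprof counter columns are quoted from memory, so check them against your sysmaster schema - Ed]

    SELECT p.sid, s.username, p.isreads, p.iswrites,
           p.isreads + p.iswrites total_io
      FROM syssesprof p, syssessions s
     WHERE p.sid = s.sid
     ORDER BY 5 DESC;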

8.21 How do I set up the High Performance Loader (HPL)?

On 23rd Jan 1998 lester@advancedatatools.com (Lester Knutsen) wrote:-

A couple of things: you must run ipload on an X Windows system and have the DISPLAY environment variable set. Also, if you are running it as another user than the one you logged in as, you need to issue the "xhost +" command to allow access to the display. (e.g. logging in as lester, then su to informix and trying to run ipload will fail unless I have executed the xhost command)
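
[A minimal sequence, where the display name and logins are examples - Ed]

    xhost +                            # run as the user who owns the X session
    su - informix
    DISPLAY=mypc:0.0; export DISPLAY   # point at your X display
    ipload &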

8.22 How do I access NT databases from UNIX?

On 13th Oct 1998 helmut.leininger@bull.de (Helmut Leininger) wrote:-

Between Unix systems I would set up .rhosts or hosts.equiv. But how can I do something equivalent on NT systems?

I found the solution myself. You have to put a hosts.equiv file into c:\winnt\system32\drivers\etc.
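
[For example, c:\winnt\system32\drivers\etc\hosts.equiv might simply list one trusted host per line; these names are invented - Ed]

    unixhost1
    unixhost2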

8.23 How do I use SMI tables to locate logical logs?

On 19th Feb 1998 aroustv@towers.com (Vardan Aroustamian) wrote:-

 
select  number, uniqid, physchunk(physloc) physloc, chknum, name
from syslogfil l, syschktab c, sysdbstab d
where    physchunk(physloc) = c.chknum and
        c.dbsnum = d.dbsnum
order by 1;

It works in OnLine 7.2x.

8.24 How do I use SMI tables to find table space usage?

On 3rd Mar 1998 Martin.Berns@Materna.De (Martin Berns) wrote:-

The following select (against sysmaster) should do what you want:

select s.name dbspace, n.dbsname database,
       sum(ti_nptotal) total, sum(ti_npused) used,
       sum(ti_npdata) data, sum(ti_npused) - sum(ti_npdata) idx
  from systabinfo i, systabnames n, sysdbspaces s
 where i.ti_partnum = n.partnum and partdbsnum(i.ti_partnum) = s.dbsnum
 group by 1,2
 order by 1,2;

8.25 Why does oncheck warn me about more than 8 extents?

On 10th Mar 1998 satriguy@aol.com (SaTriGuy) wrote:-

If you are on anything since 7.11, I wouldn't worry too much about the "less than 8 extent" rule. In version 5, we used an extent table in memory for each tablespace which was only 8 entries large. For any extent past the eighth, we had to examine the systables table for that database directly. This is why we made such a big deal about the "eight extents".

With 7.11+, we dynamically allocate memory so that all of the extents are in memory. Thus the "fewer than eight extents" rule is not nearly so important.

However, if you are really going to have this many tables on a 7.1x or 7.2x system, you might want to increase your in memory dictionary somewhat. This is managed by the DD_HASHSIZE and DD_HASHMAX onconfig variables. These are undocumented so you will need to contact tech support to get information on how to use them.

8.26 Why does the High Performance Loader give no errors?

On 20th Jan 1998 jparker@epsilon.com (Jack Parker) wrote:-

Check the violations tables. If you did not name them specifically they will have a name of table_vio and table_dia in your database - these should have the rows (in the vio table) and the reason (in the dia table). You can join the two tables using the tuple_id and look up the reason from the HPL manual.
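
[For instance, for a table called mytable with the default violation-table names, something like this pairs each rejected row with its diagnosis; informix_tupleid is the join column - Ed]

    SELECT v.*, d.objname, d.objtype
      FROM mytable_vio v, mytable_dia d
     WHERE v.informix_tupleid = d.informix_tupleid;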

8.27 Why does fragment elimination using dates fail?

On 1st May 1998 richard_thomas@yes.optus.com.au (Richard Thomas) wrote:-

There is a limitation referred to in TFM that warns against using 2-digit century in fragment expressions - and I realise you're not doing that. But the manual (IMHO) does not go far enough to warn you that the fragment expression seems to be stored with the $DBDATE that's effective at the time of the ALTER FRAGMENT execution. Subsequent queries that use a different $DBDATE run into all sorts of problems.

We have a number of fragmentation strategies that rely on date-ranges (eg 5 days to a DBspace etc) and used to have all manner of grief with performance and elimination etc.

I found the solution to this is to manipulate the fragment expressions using the Informix internal representation of the date (i.e. 1998-05-01 = 35915). Whilst it reduces the 'readability' of the dbschema and sysfragments info, all our fragment elimination problems immediately disappeared.

So try:

FRAGMENT BY EXPRESSION
    inv_dte = 35519 IN dbs01,
    inv_dte = 35526 IN dbs02

as a work-around and see how you go.

A simple way to perform the translation from DATE to INTEGER is to do this

SELECT date_field, TRUNC(DATE(date_field)) AS int_value
  FROM ...

8.28 When dbimporting why do I get string to date errors?

On 5th July 1998 ccremid@saadev.saa.noaa.gov (Tino) wrote:-

Check the DBDATE environment variable used at the time of dbexport and see if its value matches the one that you are using at the time of dbimport.
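
[i.e., as a sketch, where the DBDATE value and path are examples only - Ed]

    DBDATE=Y4MD-; export DBDATE     # whatever was in force for the dbexport
    dbimport -i /backups mydbname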

8.29 How can I speed up sorts and index builds?

On 11th May 1998 kagel@bloomberg.com (Art S. Kagel) wrote:-

You have stumbled on a little known fact. Sorting is faster to filesystem space than to temp table space. And since you do not care about the safety of those sort-work files that's OK.

Even better still, for large sorts like index builds and UPDATE STATISTICS, set the environment variable PSORT_DBTEMP to as many different filesystems as possible (at least 3, keeping in mind that the smallest will limit the size of the sort that is possible). Also, contrary to the documentation, setting PSORT_NPROCS to a value between 20 and 40 will also speed the sorting as long as you can afford to allocate so many resources to that one task. You might see the time drop to 15 or 20 minutes.
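
[A sketch of the environment setup; the paths are examples, and PSORT_DBTEMP takes a colon-separated list - Ed]

    PSORT_DBTEMP=/fs1/sorttmp:/fs2/sorttmp:/fs3/sorttmp; export PSORT_DBTEMP
    PSORT_NPROCS=20;                                     export PSORT_NPROCS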

8.30 How can I handle duplicates when loading data?

On 14th May 1998 rajam@worldnet.att.net (Idiot) wrote:-

You did not mention what version of Informix you are using. Assuming you are using 7.22, do the following:


SET INDEXES idx_name DISABLED;
START VIOLATIONS TABLE FOR tab_name;
SET INDEXES idx_name ENABLED;
    (or, even better, SET INDEXES idx_name FILTERING WITHOUT ERROR;)
load the data

All the duplicate records will go to the violations and diagnostic table. You can debug the violations table to correct the offending records and load them into the main table.

You will have to have:

DBA privilege, or

be the owner of the target table and have resource privilege, or

have Alter privilege on the target table and have resource privilege.

Read the documentation for START VIOLATIONS TABLE in the syntax guide.

8.31 How do I dbexport when CDR is enabled?

On 20th May 1998 sch@pdx.informix.com (Stephen Hemminger) wrote:-

Some of this is fixed in 7.3.

The trick is to bring server up with the environment variable CDRBLOCKOUT set (any value will do). Then do the dbexport, then restart server without environment set.


example:
% onmode -yk			# shutdown server
% CDRBLOCKOUT=on oninit -iy	# start server
% dbexport
% onmode -yk
% oninit -iy

Note: any transactions that happen while server is running with CDRBLOCKOUT will not be replicated!

8.32 Why can the DBA not grant permissions on this view?

On 23rd Jun 1998 davek@summitdata.com (David Kosenko) wrote:-

Peter Lancashire offered:


+No doubt this is simple but I can't see it.
+
+The following was all done as the DBA user.
....
+I granted privileges on all tables in the database like this:
+grant select on <tablename> to query with grant option;
+
+Then I created a view for user "query" as below. The view includes
+several tables and an expression. The view works OK.
+create view query.ptreatments (...) as select ...;
+
+Then I attempted this and it failed:
+grant select on query.ptreatments to ukkiy;
+#                                       ^
+#  302: No GRANT option or illegal option on multi-table view.
+#
+
+User ukkiy has select privilege on all the tables used by the view,
+although I do not think that is relevant.

You, even as DBA, do not have GRANT privs on the view - you gave them away when you created the view as owned by user query. Run the GRANT as user query, and it should work ok. While you are at it, as user query GRANT ALL ON ptreatments TO DBA;

8.33 Can I run ontape to a disk file?

On 29th Oct 1998 clem@informix.com (Clem Akins) wrote:-

Ontape to a file *IS* supported by Informix. It has been for a while now. You can quote June Tong's internal TechInfo (not the one available to customers via the web site, but an internal one) entry #6125 to any Informix Technical Support engineer who still thinks otherwise. (Thanks, June!)

However, the ontape program expects a tape drive, and behaves accordingly. It is *your* responsibility to do all the things necessary to satisfy the program's expectations and requirements. (Things like simulating an operator pressing return, switching files when the tape size limit is reached, handling rewind device expectations, log file output and overflow, etc.) You can find some example shell scripts which perform these functions at the web site of the International Informix Users Group at http://www.iiug.org//

The question of how large the output file can become without an error is highly dependent on O/S version as well as the version of ontape. In Informix Dynamic Server versions <7.2 there was a definite 2GB limit on the filesize that ontape would support. In versions >=7.2 the limit comes from the O/S, and may still be 2GB. It is not too difficult to write a program that would shuffle disk files once they reach the maximum size, convincing ontape that an operator is changing tapes. Of course, the challenge comes in labeling and saving these files well enough to supply them to ontape in the right order at restore time.

As with any critical system, you should *thoroughly* test the archive and restore functions, including your shop's procedures for saving disk layout information, documenting the process well enough for someone besides yourself to handle a disaster, and ensuring that management understands that you have done your job as a DBA and deserve a raise. (The flip side is that if you haven't, and disaster strikes, you'll be looking for a job.)
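
[As a bare-bones illustration of the idea - this assumes TAPEDEV in the ONCONFIG points at the file named below and that TAPESIZE is big enough that no "tape change" is ever requested; none of the tape-swap handling described above is attempted - Ed]

    #!/bin/sh
    # sketch: level-0 ontape archive straight to a disk file
    > /archive/level0.dat                  # ontape expects its "tape" to exist
    echo '\n' | ontape -s -L 0 | head -100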

8.34 How can I list all tables in a dbspace?

On 19th Oct 1998 tschaefe@mindspring.com (Tim Schaefer) wrote:-

This is one I use for XPS. Your challenge, should you decide to accept it, is to remove the dbslice layer of the problem, or simply use it with XPS as it is. I know your intent is probably for 7.x, but I present this not just for you, but for others out there who may be using XPS. To be sure, this solution serves only a minority of you out there. But the future is coming, especially now with our new friends from RedBrick.

XPS works at one extra layer beyond that which exists for the 7.x engine. DBslices are logical groupings of dbspaces across nodes.


Dbslice
   +
   +-dbspace
   +-dbspace
         +-table
         +-table
   +-dbspace
Dbslice
   +
   +-dbspace
         +-table
         +-table
   +-dbspace
         +-table
         +-table
   +-dbspace
   +-dbspace

I would challenge Informix to show table level information like this in the IECC for XPS. Many of you out there don't realize this, but there are no less than 3 IECC programs, probably more. One for 7.x, one for XPS that points to UNIX, and one for XPS that works only with NT. :-)

The code presented would allow a DBA the total picture, not stopping like it currently does at the dbspace. Some of the most important priorities a DBA has are in understanding where things are, how much space is available, and how much space is used. Currently only slices and spaces are shown in the IECC, but table information is also necessary.



# BEGIN

#!/bin/sh
################################################################################
# begin doc
#
#     Program: XDBtree
#
#      Author: Tim Schaefer
#              Data Design Technologies, Inc.
#              www.datad.com
#
#       Login: tschaefe@mindspring.com
#
#     Created: May 1998	
#
# Description: XDBtree reports on tables in dbspaces.
#              The report is based on your ONCONFIG setting.  
#
#       Usage: XDBtree
#
# end doc
################################################################################
# sysdbslices
# 
# Column name          Type                                    Nulls
# dbslice_num          smallint                                yes
# name                 char(18)                                yes
# ndbspaces            smallint                                yes
# is_rootslice         integer                                 yes
# is_mirrored          integer                                 yes
# is_blobslice         integer                                 yes
# is_temp              integer                                 yes
#
################################################################################
# syscmdbspaces
# 
# Column name          Type                                    Nulls
# 
# dbsnum               smallint                                yes
# name                 char(18)                                yes
# fchunk               smallint                                yes
# nchunks              smallint                                yes
# home_cosvr           smallint                                yes
# current_cosvr        smallint                                yes
# dbslice_num          smallint                                yes
# dbslice_ordinal      smallint                                yes
# is_root              integer                                 yes
# is_mirrored          integer                                 yes
# is_blobspace         integer                                 yes
# is_temp              integer                                 yes
#
################################################################################

get_db_info()
{
date

dbaccess sysmaster  2>/dev/null <<+

set isolation to dirty read;

unload to /tmp/systree.dat
select
       sysdbslices.name,
       syscmdbspaces.name,
       syscmdbspaces.dbslice_num,
       syscmdbspaces.dbsnum,
       syscmdbspaces.fchunk,
       sysextents.dbsname  ,
       sysextents.tabname  ,
       sysextents.start_chunk   ,
       sysextents.start_offset  ,
       sysextents.size
  from syscmdbspaces, sysdbslices, sysextents
 where sysdbslices.dbslice_num = syscmdbspaces.dbslice_num
   and sysextents.start_chunk  = syscmdbspaces.fchunk
 order by 
       sysdbslices.name, 
       syscmdbspaces.name,
       sysextents.start_chunk,
       sysextents.start_offset,
       sysextents.tabname 
+
}
################################################################################

produce_rpt()
{

awk -F"|" ' BEGIN {
      dbslice_name=""  ;
      dbspace_name=""  ;
       dbslice_num=""  ;
       dbspace_num=""  ;
            fchunk=""  ;
           dbsname=""  ;
           tabname=""  ;
       start_chunk=""  ;
      start_offset=""  ;
              size=""  ;
     ldbslice_name=""  ;
     ldbspace_name=""  ;
      ldbslice_num=""  ;
      ldbspace_num=""  ;
           lfchunk=""  ;
          ldbsname=""  ;
          ltabname=""  ;
      lstart_chunk=""  ;
     lstart_offset=""  ;
             lsize=""  ;
         size_cntr=0   ;
                  }
{
      dbslice_name=$1  ;
      dbspace_name=$2  ;
       dbslice_num=$3  ;
       dbspace_num=$4  ;
            fchunk=$5  ;
           dbsname=$6  ;
           tabname=$7  ;
       start_chunk=$8  ;
      start_offset=$9  ;
              size=$10 ;

{ if ( tabname       == "TBLSpace" )   { { tabname = "" } } }
{ if ( ldbslice_num  == dbslice_num )  { { dbslice_num = "" } } }
{ if ( ldbslice_name == dbslice_name ) { { dbslice_name = "" } } }
{ if ( ldbspace_num  == dbspace_num )  { { dbspace_num = "" } } }
{ if ( lstart_chunk  == start_chunk )  { { start_chunk = "" } } }
{ if ( ldbspace_name == dbspace_name ) { { dbspace_name = "" } } }
{ if ( dbspace_name  == dbsname )      { { dbsname = "" } } }
{ if ( ldbsname      == dbsname )      { { dbsname = "" } } }
{ if ( ltabname      == tabname )      { { tabname = "" } } }

{ printf( "%3s %-18s %3s %3s %-18s %-18s %-18s %10s %10s\n", dbslice_num, dbslice_name, dbspace_num, start_chunk, dbspace_name, dbsname, tabname, start_offset, size  ) }
# { printf( "%3s %-18s %3s %3s %s %s %-20s %10s %10s\n", dbslice_num, dbslice_name, dbspace_num, start_chunk, dbspace_name, dbsname, tabname, start_offset, size  ) }

          last_chk=$1 ;
     ldbslice_name=$1 ;
     ldbspace_name=$2 ;
      ldbslice_num=$3 ;
      ldbspace_num=$4 ;
           lfchunk=$5 ;
          ldbsname=$6 ;
          ltabname=$7 ;
      lstart_chunk=$8 ;
     lstart_offset=$9 ;
             lsize=$10;

}

' /tmp/systree.dat
}

################################################################################

>/tmp/systree.dat
get_db_info
produce_rpt
# >/tmp/systree.dat

# END

8.35 Is there anything faster than dbexport?

On 6th May 1998 richard_thomas@yes.optus.com.au (Richard Thomas) wrote:-

I'm presuming that ontape/onbar etc aren't going to do it for you. There is an alternative, but it requires a little bit of work on your behalf. I hate dbexport - if your database is bigger than about 15GB, it is hopelessly inefficient. It also doesn't let you move tables around and play with extent sizes etc prior to execution (in v5 you couldn't even use -ss). Here's what I did when we migrated from v5 to v7 a while back:

Generate a full dbschema and edit it to suit your new DBspaces, extents etc. Ensure logging is turned off and run it on your new instance. Then on the old instance, execute the following:


UNLOAD TO "table.data" DELIMITER "|"
SELECT tabname, ncols, ncols * nrows
  FROM systables
  WHERE tabid > 99
ORDER BY 3 DESC;

The order by is to get the tables in rough order from largest to smallest. Then use awk or perl to read this file and generate 4 unload scripts in a round-robin fashion, where each one unloads to a different disk. Depending on what you're trying to achieve, you could possibly use onunload instead. Something like this:


perl -e '
  open(Unld0, "> unload_1.sql") || die;
  open(Unld1, "> unload_2.sql") || die;
  open(Unld2, "> unload_3.sql") || die;
  open(Unld3, "> unload_4.sql") || die;
  open(Dbld0, "> unload_1.cmd") || die;
  open(Dbld1, "> unload_2.cmd") || die;
  open(Dbld2, "> unload_3.cmd") || die;
  open(Dbld3, "> unload_4.cmd") || die;
  @dsk = ("/disk_a","/disk_b","/disk_c","/disk_d");  # array of disks for writes
  while(<>){
    ($tab, $cols) = split(/\|/);
    $r = $n++ % 4;   # modulo to cycle through scripts: $n = row number
    $uno = sprintf("Unld%d", $r);
    $dno = sprintf("Dbld%d", $r);
    printf $uno "UNLOAD TO %s/%s.unl DELIMITER \"|\"\n", $dsk[$r], $tab;
    printf $uno "  SELECT * FROM %s;\n\n", $tab;

    printf $dno "FILE %s/%s.unl DELIMITER \"|\" %d;\n", $dsk[$r], $tab, $cols;
    printf $dno "INSERT INTO %s;\n\n", $tab;
  }' table.data

Then execute the four unload_*.sql scripts in parallel. You can run as many of these as you have disks/capacity. I chose to use four.

As they finish, kick off the corresponding dbload script. One of the other benefits of this method over dbimport is that you don't have to start from scratch if you have a problem. You can fairly easily restart the dbloads from virtually any point.

Actually, when I did this I had one dbload script per table, and three processes polling for completed unloads. As they were identified, the corresponding dbload would be executed. Using this method I moved over 20GB of data in about 10 hours, including rebuilding indexes, statistics etc. The previously attempted dbexport was killed after 56 hours, 'cos we were running out of outage and were nowhere near complete!

8.36 How do I setup OnBar and Legato?

On 15th Oct 1998 jmiller@informix.com (John F. Miller III) wrote:-

Below is a write-up my co-worker did on some quick and useful steps for getting onbar/HDR/Legato running. I hope they help.

APPENDIX Q OnBar setup with Legato

8.37 How do I get information about tables from sysmaster?

Resident tables

On 13th Jan 1999 Vardan.Aroustamian@chase.com (Vardan Aroustamian) wrote:-

Actually it is the last column in onstat -t (flag 0x2000). You can also get that information from sysmaster:

select tabname
from sysptntab p, systabnames n
where p.partnum = n.partnum
      and trunc(flags/8192) = 1;

8.38 How do I use Oncockpit?

On 18th Jan 1999 dmeyer9@email.msn.com (David Meyer) wrote:-

It is good to see people trying to use this extraordinary Informix tool. I use the ALARM functions to notify beginning DBAs of critical conditions within the database. Besides that, it is a GUI that allows you to view all Informix parameters and operating functions... it is just a hot product!

Now, for your question: I am not certain where you are in the steps to run oncockpit, so I will list a few steps for you to check. I hope this helps... have fun! BTW - a good book that covers all this and more is Carlton Doe's Informix Online Dynamic Server Handbook.

1) oncockpit is a GUI client-server application. 'onprobe', the server program, must be running and communicating with the instance. onprobe is defined as a 'service' with its own instance alias and corresponding entries in the sqlhosts and /etc/services files.

2) Since 'oncockpit' is a GUI, your DISPLAY environment variable must be set to the IP address or host name of the machine that the GUI will display on.

3) onprobe and oncockpit have their own command line arguments. I think that you have to have root permissions to run onprobe but I am not sure. An example of how to start onprobe and oncockpit:

INFORMIXSERVER=shmcci; export INFORMIXSERVER
onprobe -service cpit_cci -log /home/informix/onprobe_djm.log -severity
severity.djm &

INFORMIXSERVER=xxxcci; export INFORMIXSERVER
oncockpit -service cpit_cci -log /home/informix/oncockpit_djm.log -fg
red -bg gray &

4) Check the logs that are created to debug any problems running the client or server applications. You should really perform a 'ps -eaf | grep onprobe' to verify that the program is running before you execute oncockpit... remember to check the oncockpit and onprobe logs.

8.39 Where do sqexplain.out's appear under NT?

On 3rd Feb 1999 psilva@informix.com (Paulo Silva) wrote:-

After installing IDS on NT, you should notice a new shared folder called SQEXPLN under your %INFORMIXDIR% (usually c:\informix\sqexpln).

This folder keeps all outputs for all users, in the form %USERNAME%.out, instead of the usual sqexplain.out

8.40 How do I use locales under Informix?

On 6th Feb 1999 gunstho@uni-muenster.de (Dirk Gunsthoevel) wrote:-

You have to set DB_LOCALE to your locale (I assume something like FR_fr.1252 for you) BEFORE creating the database.

If you are still using isql, don't create the database in it. It will NOT use the DB_LOCALE setting. Use dbaccess instead.
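
[A sketch, assuming a French ISO8859-1 locale is what is wanted; check $INFORMIXDIR/gls for the locale names actually installed on your system - Ed]

    DB_LOCALE=fr_fr.8859-1;     export DB_LOCALE
    CLIENT_LOCALE=fr_fr.8859-1; export CLIENT_LOCALE
    echo 'create database mydb with log;' | dbaccess - -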

8.41 How do I use optimize index builds?

On 23rd Mar 1999 dua1@my-dejanews.com (Juri Dovgart) wrote:-

PDQ has an impact on index builds when PDQPRIORITY > 0. It's called vertical parallelism. Informix performs parallel scans and sorts, and uses the memory allocated for PDQ for the sorts.

Here is some advice about index builds:

8.42 How do I calculate how much space an index will use?

On 6th May 1999 icc@injersey.infi.net (Vic Glass) wrote:-

Formula to calculate ~ bytes that an index will use:

(sum of column sizes + 9) * rows * 1.25

For example, if there is a table:

create table person (
    person_id serial,
    fname char(25),
    lname char(25),
    ssn char(9)
);
create index ix1 on person(person_id);
create index ix2 on person(ssn);
create index ix3 on person(fname, lname);

then the index space needed for 1,000,000 rows would be approximately:

ix1: (4 +9) * 1000000 * 1.25 ----> 16,250,000 bytes or 16,250 Kb
ix2: (9+9) * 1000000 * 1.25 -----> 22,500,000 bytes or 22,500 Kb
ix3: (25+25+9) * 1000000 * 1.25 ---> 73,750,000 bytes or 73,750 Kb

On 6th May 1999 mcollins@us.dhl.com (Mark Collins) wrote:-

Slight correction to the earlier answer - the formula listed is for attached indexes. If you put the indexes in their own dbspace, they are detached, and the formula changes to (sum of column sizes + 13) * rows * 1.25. The extra four bytes store the partition number of the table (or fragment) in which the indexed row is found. I'm assuming that the "1.25" is a general rule-of-thumb for estimating the overhead of non-leaf pages, but that method is only a rough approximation at best. Another thing that needs to be addressed when calculating index space is FILLFACTOR.

8.43 Why do I get error -197?

On 23rd June 1999 gdewinter@spf.fairchildsemi.com (Greg Dewinter) wrote:-

-197 ISAM error: Partition recently appended to; can't open for write or logging.

This error is generally seen after a High Performance Load in Express mode. The only way to correct it is to do a level 0 archive of the affected dbspaces.

Express mode HPL loads data into a new extent and then appends the extent to the table when the entire load is complete. That is why it is able to load all the rows with no logging.
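
[For example - the dbspace name is invented - with ON-Bar you can archive just the affected dbspace, or take a whole-system level 0 with ontape - Ed]

    onbar -b -L 0 dbspace1            # level-0 backup of one dbspace
    echo '\n' | ontape -s -L 0        # or a full level-0 archive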

8.44 How do I log the queries informix receives?

On 28th June 1999 jleffler@earthlink.net (Jonathan Leffler) wrote:-

If you're on a civilized system (Unix or variants), and you're using a sufficiently civilized server (OnLine, IDS, etc) and a sufficiently recent version of ESQL/C (primarily CSDK 2.x), then you can set the SQLIDEBUG environment variable to a value such as 2:/tmp/sqli.out and the application will log all the data sent back and forth. You can then decipher the data with the sqliprint program. The actual data file will have an underscore and a process id after what you specified in SQLIDEBUG. The SQLIDEBUG environment variable has worked for quite some time (6.00?), but getting hold of sqliprint has been harder until it was distributed with CSDK. You can also use a value 1:/tmp/sqli.out to get some sort of ASCII dump, but it isn't as useful as the binary dump (IIRC; I've not used it more than once).
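
[A sketch of a typical session; the file names and pid suffix are examples - Ed]

    SQLIDEBUG=2:/tmp/sqli.out; export SQLIDEBUG
    fglgo myapp.4gi                    # run any ESQL/C-based client as usual
    sqliprint /tmp/sqli.out_12345      # decode the binary trace it left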

8.45 Any hints for running more than one onbar process at a time?

On 16th July 1999 RRABE@PROMUS.com (Rick Rabe) wrote:-

I'm not familiar with NetBackup, but I use Networker on Siemens-Pyramid. My BAR_MAX_BACKUP is set to 8. One thing I have found helpful is altering bar_action, bar_instance and bar_object in the sysutils database to row-level locking instead of page-level locking.
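
[The alteration itself is one statement per table, run against the sysutils database - Ed]

    ALTER TABLE bar_action   LOCK MODE (ROW);
    ALTER TABLE bar_instance LOCK MODE (ROW);
    ALTER TABLE bar_object   LOCK MODE (ROW);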

8.46 How can I use Legato with onbar?

On 9th July 1999 sanformix@hotmail.com (Santanu Das) wrote:-

The following steps may be used to setup Legato storage manager software. For more info you may refer to Legato Installation guide.

The following is a brief overview of the steps required to configure Legato for use with ON-Bar.

If you want to execute any onbar commands from the command line, Legato requires that the two environment variables, NSR_DATA_VOLUME_POOL and NSR_LOG_VOLUME_POOL, be set to the same values specified in the client panel. If you follow the installation manual instructions, these pools will be named DBMIData and DBMILog.

8.47 How can I update statistics in parallel?

On 28th July 1999 kagel@bloomberg.net (Art S. Kagel) wrote:-

Get my dostats.ec utility from the IIUG Software Repository and run 5-10 copies, on different tables, at once with PDQPRIORITY=100/(#copies). Here is a ksh/awk script to create the script to run all the needed dostats in parallel groups:


#! /usr/bin/ksh

if [[ $# -lt 2 ]]; then
    echo "Usage: $0 database #copies"
    exit 1
fi

dbase=$1
ncopies=$2
if [[ $# -eq 3 ]]; then
    templ=$3
else
    templ='*'
fi
dbaccess $dbase - <<EOF 2>/dev/null
output to temp$$ without headings
select tabname 
from systables
where tabid > 99
  and tabname matches "$templ";
EOF

awk -v ncopies=$ncopies -v dbase=$dbase '
BEGIN { 
    cnt=0; 
    pdq=100/ncopies; 
    printf "PDQPRIORITY=%d; export PDQPRIORITY \n", pdq; 
}
{
    if (length( $1 ) == 0){ next; }
}
{
    # Every N copies of dostats insert a wait
    if ((cnt % ncopies) == 0 && cnt > 0) { print "wait"; }

    # output a dostats command for each table in the background
    printf "dostats -d %s -t %s & \n", dbase, $1, pdq;
    cnt++;
}
END {
    # Now update stats on all stored procedures.
    print "dostats -d %s -p \n", dbase;
}
' temp$$

#####  End script  #####

Then, to generate a script that runs multiple dostats in parallel, assuming you named the script above genstats: genstats mydatabase 5 >updstats.sh
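
For instance, genstats mydatabase 2 >updstats.sh would produce something along these lines (table names illustrative):


PDQPRIORITY=50; export PDQPRIORITY
dostats -d mydatabase -t customer &
dostats -d mydatabase -t items &
wait
dostats -d mydatabase -t orders &
dostats -d mydatabase -t stock &
wait
dostats -d mydatabase -p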

8.48 How do I locate temporary tables?

On 16th August 1999 vardana@infogain.com (Vardan Aroustamian) wrote:-

I used to use some variations of this query to check temporary tables:


select tabname,
       case
       when bitval( p.flags, 32 ) = 1
         then 'sys_temp'
       when bitval( p.flags, 64 ) = 1
         then 'usr_temp'
       when bitval( p.flags, 128 ) = 1
         then 'sort_file'
       end type,
       hex(n.partnum) h_n_partnum,
       n.partnum n_partnum,
--     n.owner,
--     hex(p.flags) h_p_flags,
       name dbspace_name
from sysptnhdr p,
     systabnames n,
     sysdbstab d
where p.partnum = n.partnum
      and partdbsnum( n.partnum ) = d.dbsnum
      and ( bitval( p.flags, 32 ) = 1      -- System created Temp Table
         or bitval( p.flags, 64 ) = 1      -- User created Temp Table
         or bitval( p.flags, 128 ) = 1 );  -- Sort File

On 17th August 1999 jakesalomon@my-deja.com (Jacob Salomon) wrote:-

After receiving your reply I got to do a little experimenting. The pattern I noticed is that the 0x20 flag - bitval(p.flags, 32) - is the marker for any kind of temp table. I noticed that SORTTEMP and HASHTEMP tables have some other flags set, but all of them have this one flag on.

Now my query for abuses of temp tables is:


select	t.dbsname, t.tabname,
	hex(p.partnum) partition, hex(p.flags) pflags
  from	sysmaster:systabnames t, sysmaster:sysptnhdr p
 where	t.partnum = p.partnum
   and	bitval(p.flags,32) = 1		-- Looking for temp tables
   and	trunc(p.partnum / 1048576)      -- Filter: Only temps not in
	in (select dbsnum               -- temp dbspace
              from sysdbspaces where is_temp = 0)
 order by dbsname, tabname, partition;

8.49 How do I set tables memory resident?

On 17th August 1999 vardana@infogain.com (Vardan Aroustamian) wrote:-


SET TABLE your_table MEMORY_RESIDENT;
SET TABLE your_table NON_RESIDENT;

You can also set memory-resident only particular fragments of a fragmented table:


SET TABLE your_table ( dbspace1, dbspace2 ) MEMORY_RESIDENT;

8.50 How do I find column dependencies for a Stored Procedure?

On 19th August 1999 Sujit.Pal@bankofamerica.com (Sujit Pal) wrote:-


#!/usr/local/bin/perl
#
# Routine to output a complete list of table and column dependencies
# for a stored procedure.
#
if ($#ARGV != 0)
{
   die "Usage: ", $0, " database_name\n";
}
$dbname = $ARGV[0];
print "Table and Column Dependencies for Stored Procedures in Database ",
   $dbname, "\n\n";

#
# Find all table names
#
@tabnames = `dbaccess $dbname - 2>/dev/null <<EOF
   SELECT tabname FROM systables WHERE tabid > 99
   ORDER BY tabname
EOF
`;
#
# Find all unique column names
#
@colnames = `dbaccess $dbname - 2>/dev/null <<EOF
   SELECT UNIQUE colname FROM syscolumns WHERE tabid > 99
   ORDER BY colname
EOF
`;

#
# Find all stored procedure names
#
@procnames = `dbaccess $dbname - 2>/dev/null <<EOF
   SELECT procname FROM sysprocedures
   ORDER BY procname
EOF
`;
#
# For each procedure, fetch the procedure body and scan it, line by
# line, for references to the table and column names found above.
#
for ($i = 4; $i <= $#procnames; $i++)
{
   $procnames[$i] =~ s/ //g;
   $procname = lc($procnames[$i]);
   chop($procname);
   @procbody = `dbaccess $dbname - 2>/dev/null <<EOF
      SELECT b.data FROM sysprocbody b, sysprocedures p
       WHERE p.procname = '$procname'
         AND b.procid = p.procid
         AND b.datakey = 'T'
EOF
`;
   foreach $procline (@procbody)
   {
      $procline = lc($procline);
      #
      # Find table dependencies
      #
      for ($j = 4; $j <= $#tabnames; $j++)
      {
         $tabnames[$j] =~ s/ //g;
         $tabname = lc($tabnames[$j]);
         chop($tabname);
         if (index($procline, $tabname) > 0)
         {
            if (index($taboutput{$procname}, $tabname) <= -1)
            {
               $taboutput{$procname} .= $tabname;
               $taboutput{$procname} .= " ";
               last;
            }
         }
      }
      #
      # Find column dependencies
      #
      for ($j = 4; $j <= $#colnames; $j++)
      {
         $colnames[$j] =~ s/ //g;
         $colname = lc($colnames[$j]);
         chop($colname);
         if (index($procline, $colname) > 0)
         {
            if (index($coloutput{$procname}, $colname) <= -1)
            {
               $coloutput{$procname} .= $colname;
               $coloutput{$procname} .= " ";
               last;
            }
         }
      }
   }
}
#
# Print the report
#
print "Table Dependencies\n";
print "------------------\n";
foreach $keys (sort(keys %taboutput))
{
   print $keys, ": ", $taboutput{$keys}, "\n";
}
print "\nColumn Dependencies\n";
print "------------------\n";
foreach $keys (sort(keys %coloutput))
{
   print $keys, ": ", $coloutput{$keys}, "\n";
}

8.51 How do I performance tune Onbar?

On 28th August 1999 rbernste@alarismed.com (Bernstein, Rick) wrote:-

The following website contains tools from Maury Tiller for troubleshooting and tuning onbar performance. By following his step-by-step procedure you can determine where the bottleneck lies.

It also contains programs which he mentioned during a technical session at the Informix Solutions Portal 99.

http://www.informix.com/onbartools/index.html

Another unofficial website

http://www.geocities.com/Heartland/Acres/4927/onbar.html

provides an excellent overview of how onbar functions.

8.52 Why do onspaces and other on-utilities core dump?

On 31st October 1999 mdstock@mydas.freeserve.co.uk (Mark D. Stock) wrote:-

The onspaces (and probably onmonitor, but I don't use that :) command requires a user stack size of at least 8 Mb or thereabouts. You can either up the kernel parameter for all users, or use ulimit to set it at the session level.
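
For example, in ksh (the stack size, in Kbytes, and the onspaces arguments are illustrative):


ulimit -s 16384
onspaces -c -d dbspace2 -p /dev/rdsk/c0t1d0s4 -o 0 -s 100000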

8.53 How should I finish a restore (e.g. ontape -r)?

On 16th August 2000 djw@smooth1.demon.co.uk (David Williams) wrote:-

Don't forget to use onmode -m to bring the engine fully online before you next shut down the engine; otherwise your chunks will still be marked as bad!
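
A minimal sketch of the sequence, assuming an ontape restore:


ontape -r      # restore; leaves the engine in quiescent mode
onmode -m      # bring the engine fully online
onmode -ky     # only now is it safe to take the engine down again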

8.54 What are Codd's rules?

On 12th September 2000 djw@smooth1.demon.co.uk (David Williams) wrote:-

I found this on the net and thought it might be useful!

A relational DBMS must use its relational facilities exclusively to manage and interact with the database. The rules below were defined by Codd in a paper published in 1985; they specify what a database must support in order to be called relational.

1. Information rule: Data are represented in only one way: as values within columns within rows. Simple, consistent and versatile; this is the basic requirement of the relational model.
2. Guaranteed access rule: Every value can be accessed by providing table name, column name and key. All data are uniquely identified and accessible via this identity.
3. Systematic treatment of null values: Separate handling of missing and/or non-applicable data, as distinct from zeros or empty strings. Codd would further have liked several types of null to be handled.
4. Relational online catalog: The catalog (data dictionary) can be queried by authorized users as part of the database; the catalog is itself part of the database.
5. Comprehensive data sublanguage: Used interactively and embedded within programs; supports data definition, data manipulation, security, integrity constraints and transaction processing. Today this means: must support SQL.
6. View updating rule: All theoretically possible view updates should be possible. Views are virtual tables; they appear to behave as conventional tables except that they are built dynamically when the query is run, which means a view is always up to date. It is not always theoretically possible to update views, and Codd himself did not completely understand this. One problem exists when a view relates to part of a table that does not include a candidate key, since potential updates would then violate the entity integrity rule.
7. High-level insert, update and delete: Must support set-at-a-time updates, e.g. UPDATE mytable SET mycol = value WHERE condition; many rows may be updated with this single statement.
8. Physical data independence: The physical layer of the architecture is mapped onto the logical layer. Users and programs are not dependent on the physical structure of the database. (The physical layer implementation is dependent on the DBMS.)
9. Logical data independence: Users and programs are independent of the logical structure of the database, i.e. the logical structure of the data can evolve with minimal impact on the programs.
10. Integrity independence: Integrity constraints are stored in the catalog, not in the programs. Alterations to integrity constraints should not affect application programs. This simplifies the programs, although it is not always possible to achieve.
11. Distribution independence: Applications should still work in a distributed database (DDB).
12. Nonsubversion rule: If there is a record-at-a-time interface (e.g. via a 3GL), the security and integrity of the database must not be violated. There should be no backdoor to bypass the security imposed by the DBMS.
Rule zero for an RDBMS: Many new DBMSs claim to be relational while also supporting extended features; e.g. Postgres is an RDBMS with extended object-oriented features. Codd's rule zero specifies the criterion for an RDBMS: "For any system that is advertised as, or claimed to be, a relational database management system, that system must be able to manage databases entirely through its relational capabilities, no matter what additional capabilities the system may support." (Codd, 1990)

In Codd (1990), Codd extended the 12 rules to 18, to include rules on the catalog, data types (domains), authorisation, etc.

Bibliography

Codd, E.F. (1985). "Is Your DBMS Really Relational?" and "Does Your DBMS Run By the Rules?", Computerworld, October 1985.
Codd, E.F. (1990). The Relational Model for Database Management: Version 2. Addison-Wesley.

8.55 How do I link Informix tables to MS-Access?

On 16th August 2000 wsheldon@airflow-streamlines.co.uk (Wayne Sheldon) wrote:-

Just install and set up the Informix ODBC client to connect to your DB, then choose File/Get External Data from the MS Access menu.

You can then choose Import to download a table or the whole database, provided you have enough disc space. You can also choose Link Tables, so you can update the tables directly.

On 30th August 2000 rbernste@alarismed.com (Rick Bernstein) wrote:-

You can define the Informix tables to MS Access as "linked" tables and connect to the Informix database using ODBC.

Install Informix SDK or Informix Connect on your client machine. Define an ODBC Data Source. In MS Access: File -> Get External Data -> Link Tables. Type of File: ODBC Databases

8.56 How does OnBar compare to Ontape?

On 8th September 2000 dagnew@charlottepipe.com (Doug Agnew) wrote:-

OnBar does see all the backups as being in "one place". It is the responsibility of the storage manager software (ISM, Legato Networker, Veritas NetBackup, Tivoli Storage Manager, etc.) to place the data on the media and keep track of its location.

Some managers, like Tivoli Storage Manager, make it very easy to store the log backups on disk and even provide facilities to automatically move them to tape when the disk gets close to full. The others that I have looked at also allow you to store backups on disk but don't automate the migration to tape.

All the managers I've examined divide the backup media into "storage pools" (whatever terminology they like) and allow the Informix backup process to designate which pool to use for each type of backup. Some even allow you to send dbspaces and/or different backup levels to different pools, if you really want to slice-and-dice.

OnBar's primary advantages, as far as I can tell:

Ontape's primary advantages are:

From what I've seen, ontape is suitable for databases up to 100G or so (depending upon how many tape drives you want to buy or how much operator work you want).

OnBar is probably the backup of choice once you get much larger, both for operational and administrative reasons.

8.57 What are the main Informix bugs?

On 13th September 2000 djw@smooth1.demon.co.uk (David Williams) wrote:-

There is a bug in Informix 4GL 7.30.UC1, fixed in 7.30.UC4, which means that programs can produce core dumps. It is due to a string-handling problem: Informix adds an additional space when joining strings together and hence can corrupt memory.

IDS 9.2x has a problem with datetime handling which can cause strange -1262 errors.

On 11th November 2000 rbernste@alarismed.com (Bernstein, Rick) wrote:-

It sounds like you may be encountering Informix defect #115327.

Index pages can be incorrectly shrunk, setting both leaf and node flags, which causes excess pages to be placed at the MED_HIGH buffer pool priority. (The fix deals with how the buffer manager sets the priority on pages with the 0xd0 type flag.)

When performance gets bad, run the following command:


onstat -P | tail -8

If the resulting output shows a large percentage of the pages as "Btree", this is likely your problem. For example:


$ onstat -P | tail -8

Totals:  131000   113313   17087    600      0        432

Percentages:
Data  13.04
Btree 86.50
Other 0.46

One workaround is "export NOLRUPRIO=1" before starting your engine. It disables an IDS 7.3x feature, which assigns a "med-high" priority to some index pages (as shown by "onstat -R"). Newer IDS releases support a different environment variable "LRUAGE".

Before using the NOLRUPRIO environment variable, review your release notes and ensure that the following defect has been fixed:

111681 - NOLRUPRIO does not work and can hang the system.

I believe that it was fixed in IDS 7.31.UC4. (I can attest that it was not fixed in IDS 7.31.UC3 on AIX).

8.58 How do I debug onarchive failures?

On 2nd October 2000 bb22@uswest.net (Bill Border) wrote:-

Yippee!!! Another onarchive question.

See if there are any backups running now: run "onarchive> list/req=*" and look for EXECUTING requests. If there is one that appears to be bogus, run "onarchive> cancel/req=n" and try again. If you set the LOG=/opt/informix/etc/onarchive.log parameter you will sometimes get better diagnostic information.

8.59 How do I load blobs using dbaccess?

On 11th October 2000 dignacio@openratings.com (Don Ignacio) wrote:-

In order to load a blob into a table using dbaccess you need to call the function filetoclob or filetoblob, such as:


INSERT INTO candidate (cand_num, cand_lname, resume)
VALUES (0, 'Haven', FILETOCLOB('haven.rsm', 'client'));

INSERT INTO candidate (cand_num, cand_lname, resume)
VALUES (0, 'Haven', FILETOBLOB('haven.jpg', 'client'));

For more information regarding these functions consult your Informix SQL Syntax Guide.

8.60 How do I identify outstanding in-place table alters?

On 19th October 2000 rbernste@alarismed.com (Bernstein, Rick) wrote:-

Informix Technical Support (case 200515) provided me with the following script to identify outstanding "in-place ALTERs". It runs quickly, even with 9,000+ SAP tables. I have successfully run it with IDS 7.31.UC3-1 and IDS 7.30.UC7XK.


# ksh script - run from ksh
tblpartnum=1048577

numdbs=`dbaccess sysmaster <<! 2>/dev/null | grep -v max | awk 'NF > 0 {print $1}'
select {+ full(sysdbstab)} max(dbsnum) from sysdbstab
!`

i=1
while (( i <= $numdbs ))
do
dbaccess sysmaster <<!
select hex(t1.pg_partnum), t1.pg_pagenum, t1.pg_physaddr, hex(t2.partnum), t3.tabname
	from syspaghdr t1, sysptntab t2, systabnames t3
where 	t1.pg_partnum=$tblpartnum
	and t1.pg_flags=2
	and t1.pg_next !=0
	and t1.pg_physaddr=t2.physaddr
	and t2.partnum=t3.partnum
!
let i=i+1
let tblpartnum=tblpartnum+1048576
done

8.61 How do I identify Server Versions from SQL?

On 14th November 2000 Leonids.Voroncovs@dati.lv (Leonid Voroncovs) wrote:-


SELECT FIRST 1 DBINFO( 'version', 'full' ) FROM systables;
SELECT FIRST 1 DBINFO( 'version', 'server-type' ) FROM syscolumns;
SELECT FIRST 1 DBINFO( 'version', 'major' ) FROM sysindexes;
SELECT FIRST 1 DBINFO( 'version', 'minor' ) FROM systables;
SELECT FIRST 1 DBINFO( 'version', 'os' ) FROM syscolumns;
SELECT FIRST 1 DBINFO( 'version', 'level' ) FROM sysindexes;

8.62 How do I tell which session is using the most logical log space?

On 9th November 2000 rferdy@americasm01.nt.com (Rudy Fernandes) wrote:-

Here's a script that determines the session "occupying" the maximum log space. It's driven by the "onstat -x" command.


#!/bin/ksh
# Extracts the following
#  1. The earliest log with an open trx.
#  2. The session information of the user who is running it

EARLY_WARNING=100
OPENTRX=`onstat -x | \
   grep "^.............-" | \
   grep -v "^................................0" | \
   awk '{print $5, $3}' | sort -n | head -1`

if [ "$OPENTRX" = "" ]; then
   echo No open transactions found.
   exit 0
fi

set $OPENTRX
EARLIEST_USED_LOG=$1
set `onstat -u | grep $2 | awk '{print $3, $4}'`
CULPRIT_SESSION=$1
CULPRIT_USER=$2
CURRENT_LOG=`onstat -l | grep "^.....................-C" | awk '{print $4}'`
LTXHWM=`onstat -c | grep "^LTXHWM" | awk '{print $2}'`
NO_OF_LOGS=`onstat -l | grep "\-.\-.\-" | wc -l`
LOGS_USED_BY_CULPRIT=`expr $CURRENT_LOG - $EARLIEST_USED_LOG`
PROBLEM_THRESHOLD=`expr $NO_OF_LOGS \* $LTXHWM \* $EARLY_WARNING  / 10000`

clear
echo "
Earliest Log with Open Transaction :  $EARLIEST_USED_LOG
Current Log                        :  $CURRENT_LOG
Logs used by Culprit               :  $LOGS_USED_BY_CULPRIT
User with Culprit Transaction      :  $CULPRIT_USER
Session with Culprit Transaction   :  $CULPRIT_SESSION

High Water Mark                    :  $LTXHWM Percent
Number of Logs                     :  $NO_OF_LOGS
Problem Threshold # of logs        :  $PROBLEM_THRESHOLD
"
if [ $LOGS_USED_BY_CULPRIT -gt $PROBLEM_THRESHOLD ]; then
   echo
   echo
   echo PROBLEMS AHOY!!!
fi


onstat -g sql $CULPRIT_SESSION

# Also, this SQL shows which open transactions reach furthest back in the logs:
#   select logbeg, nlocks from sysmaster:systxptab
#   where logbeg > 0
#   order by logbeg;