If your query involves any sorts or joins, you could try SELECTing INTO a TEMPorary table.
Here's Mark Kellaway's conclusive proof that Informix uses bogo sort:
SELECT time_records.*, case_name FROM time_records, OUTER cases WHERE time_records.client = "AA1000" AND time_records.case_no = cases.case_no ORDER BY time_records.case_no
returns 34 sorted rows in 5 min 42.05 sec, versus:
SELECT time_records.*, case_name FROM time_records, OUTER cases WHERE time_records.client = "AA1000" AND time_records.case_no = cases.case_no INTO TEMP foo;
SELECT * FROM foo ORDER BY case_no;
returns 34 sorted rows in 0.59 sec
This happens using V4.00 Informix + V4.00 SE on both the ICL DRS6000 and IBM RS6000.
This code looks great, but can perform quite slowly:
SELECT code FROM table1 WHERE code NOT IN ( SELECT code FROM table2 )
Walt Hultgren offers a speedier, if more verbose, technique:
We have found here that using an approach that avoids NOT IN or NOT EXISTS results in much faster execution times.
In essence, you create a temp table of codes from the master table. The temp table also contains a flag column to indicate which codes are found in the detail table:
SELECT code, 0 flag FROM table1 INTO TEMP tflag;
Then, mark all of the codes found in the detail table:
UPDATE tflag SET flag = 1 WHERE code IN ( SELECT code FROM table2 WHERE tflag.code = table2.code );
Then, report all of the codes from the temp table that weren't marked:
SELECT * FROM tflag WHERE flag = 0;
You'd think that all of this processing would take longer, but we have found that not to be the case.
The fact that this method is much faster may just be a quirk of our database design or tuning. However, I've had good results with this in several similar but unrelated situations. Maybe the first approach would work as well if we indexed everything, but indexing everything is obviously a bad idea.
As an aside to the aside, we came across this scheme while trying to improve the execution time on a script that looked for missing rows. I had originally started trying to improve things by investigating the execution times of a script using NOT IN versus one that used NOT EXISTS. The method I described above worked so much better that I stopped tinkering with the two NOTs before I found any clear indication that one might be generically better than the other. If anyone can give me any guidelines in that area, I'd appreciate hearing from you.
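The logic of the flag-table approach can be sketched outside SQL. Here is a minimal Python sketch with made-up data; the names table1/table2/tflag follow the example above, but the values are purely hypothetical:

```python
# Hypothetical master codes and detail codes.
table1 = ["A", "B", "C", "D"]   # master table codes
table2 = ["B", "D", "D"]        # detail table codes (duplicates allowed)

# SELECT code, 0 flag FROM table1 INTO TEMP tflag
tflag = {code: 0 for code in table1}

# UPDATE tflag SET flag = 1 WHERE code IN (SELECT code FROM table2 ...)
for code in table2:
    if code in tflag:
        tflag[code] = 1

# SELECT * FROM tflag WHERE flag = 0  -- codes with no detail rows
missing = sorted(code for code, flag in tflag.items() if flag == 0)
print(missing)  # → ['A', 'C']
```

The single pass over the detail table plus a scan of the small flag table is what replaces the per-row NOT IN subquery.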
"SET EXPLAIN ON" is an SQL command which creates a file, sqexplain.out, into which is placed diagnostic information about subsequent SQL commands. The 'cost' figure can be bizarre at times, but basically you want to eliminate sequential scans.
A handy facility to build into your 4GL programs is the ability to SET EXPLAIN ON based on an environment variable:
IF fgl_getenv("EXPLAIN_ON") IS NOT NULL THEN
    SET EXPLAIN ON
END IF
What does "fragments: 0" vs. "fragments: 1" mean?
On 3rd Dec 1997 mcollins@us.dhl.com (Mark Collins) wrote:-
This indicates that only one fragment will be searched in executing this query. If it required multiple fragments, you would see something like "fragments: ALL" or "fragments: 0,1". The number of the fragment that will be searched is 0 in the first case, 1 in the second case. Fragment numbers are determined by the order in which the fragmentation expression is specified. In other words, rows that satisfy the first expression are placed in fragment 0, the second expression in fragment 1, and so on. If you don't have the sql that created the table, you can do:
select evalpos, exprtext, dbspace from sysfragments
where tabid = (select tabid from systables
where tabname = "your_table")
and fragtype = "T"
order by evalpos;
The value in evalpos corresponds to the number shown in "fragments: 0" in sqexplain.out.
WHERE a = "B" OR a = "C"
will be slower than
WHERE a IN ("B","C")
Sometimes a UNION will work better than an OR as well.
(courtesy Dennis J. Pimple)
Index most columns included in WHERE criteria.
WHERE datecol >= "this/date" AND datecol <= "that/date"
will be slower than
WHERE datecol BETWEEN "this/date" AND "that/date"
(courtesy Dennis J. Pimple)
Here's a sick little script that works with sh/ksh. In OnLine, if you try UPDATE STATISTICS in a database with lots of tables... it blows up. This script was written so that all the tables would be done on an individual basis rather than all at once.
: # update_em
# Run UPDATE STATISTICS on a table by table basis
#
DATABASE=$1
if [ -z "$DATABASE" ]
then
    echo "usage: update_em dbname" >&2
    exit 1
fi
isql $DATABASE - <<EOF 2>/dev/null | isql $DATABASE -
output to pipe "cat" without headings
select "update statistics for table ", tabname, ";"
from systables
where tabid >= 100
order by tabname;
EOF
exit 0
Paraphrased from Gary Broughton (garybro@ozemail.com.au): You should also be aware that the exit status of various releases of isql is unreliable. Instead of checking the exit status via $?, a better approach is to redirect stderr to a file, and then grep that file looking for "error", eg:
# Generate the data
isql -qr <<! >$stage.rep 2>$stage.err
database $database;
select ...
!
# Check for errors
if grep -i "error" $stage.err >/dev/null
then
    ...error_handler...
fi
Question: Why can't I create a view with a calc column?
CREATE VIEW tst AS SELECT ship_charge - totval cout FROM orders WHERE ship_charge > 0;
Answer: You can.
CREATE VIEW tst (cout) AS SELECT ship_charge - totval FROM orders WHERE ship_charge > 0;
Unlike Online, the SE can only access data from one database at a time. Here are two techniques you can use to get around this limitation:
One technique to extract data from multiple databases involves the use of a temporary table created WITH NO LOG (ie: with transaction logging turned off).
CREATE TEMP TABLE work_table (<your table structure goes here>) WITH NO LOG

DATABASE first_database
INSERT INTO work_table SELECT stuff FROM first_table

DATABASE second_database
INSERT INTO work_table SELECT more_stuff FROM second_table

# Now we have data from "first_database" and "second_database" living
# together in "work_table", so now we can play with that.
If you are running RDS 4.10 or earlier, and find yourself running into: "SQL Statement Error -410, Prepare statement failed or was not executed" on SQL commands which are NOT prepared you'll need to change the commands so that they DO use PREPARE and EXECUTE.
Because SE uses the Unix filesystem you can share tables between databases via symbolic links. Here's how:
Imagine your company consists of two branches, each with their own stand-alone database (because whoever designed the system didn't plan for expansion very well). One branch has a database "putururu", and the other is "tokoroa". Each has a copy of the "product" table, which you would like to share:
Assuming you have the following tree structure:
/home1/databases/putururu.dbs/product219.dat
/home1/databases/putururu.dbs/product219.idx

and

/home2/databases/tokoroa.dbs/product098.dat
/home2/databases/tokoroa.dbs/product098.idx
you would...
cd /home1/databases/putururu.dbs
rm product219*
ln -s /home2/databases/tokoroa.dbs/product098.dat product219.dat
ln -s /home2/databases/tokoroa.dbs/product098.idx product219.idx
And voila!
Of course, things start getting messy when you want to change the structure of your linked table - then you'll either have to start hacking systables, syscolumns, sysindexes etc, or drop your linked version (in this case the "putururu" table), recreate it, and relink it.
echo "drop table product" | isql putururu     # remove old table
dbschema -t product -d tokoroa product.sql    # create schema for new table
isql putururu product.sql                     # creates new, empty, table

<link your new structure to the "tokoroa" product table, as in the previous example>
Using OnLine, you can access tables outside the current database using the notation:
SELECT * FROM dbname@server:owner.tablename

where the "@server" and "owner." parts are optional. Note: both engines must have the same logging strategy - on or off. Whether you can update the remote table depends on the version of OnLine, as in the table below. You can also use synonyms to represent remote tables. A synonym looks like a local table name but can resolve into a remote table name.
eg: CREATE SYNONYM Owner1.Table1 FOR Database2@Machine2:Owner2.Table2
Can a single transaction access a non-current database?
OnLine  |   Same OnLine System   | Different OnLine Systems *
Version |   Read    |   Write    |   Read    |   Write
--------+-----------+------------+-----------+-----------
4.00    |   Yes     |   No       |   Yes     |   No
4.10    |   Yes     |   Yes      |   Yes     |   No
5.00    |   Yes     |   Yes      |   Yes     |   Yes

* Requires Informix-STAR.
Version 4.00 provides read-only access to other databases. If the other database is part of the same OnLine system, Informix-STAR is not required.
Version 4.10 provides read-write access to other databases which are located in the same OnLine system without requiring Informix-STAR. It provides read-only access to databases in other OnLine systems.
Version 5.00 provides read-write access to other databases. If the other database is not in the same OnLine system as the current (or home) database, then the system must use Informix-STAR. Version 5.00 automatically provides 2-phase commit if a single transaction updates more than one database in more than one OnLine system.
tilh@sin-co.sin-ro.DHL.COM (Ti Lian Hwang) asks:
I've got a master invoice table "invmst" and an invoice line-items table "invlin". They have a common key, "invoice". I want to find records in the "invlin" table which have no corresponding "invoice" number in the "invmst" table.
perez@mgr.hjf.org (Charles Perez), replies:
The obvious way is to use a subquery:
SELECT * FROM invlin WHERE invlin.invoice NOT IN (SELECT invmst.invoice FROM invmst)
If performance is a problem, however, you may want to do it in two steps:
SELECT invmst.*, invlin.invoice inv_invoice
    FROM invmst, OUTER invlin
    WHERE invmst.invoice = invlin.invoice
    INTO TEMP lone_invoices;

<create whatever indices on lone_invoices would make things efficient>

SELECT <whatever you need> FROM lone_invoices
    WHERE inv_invoice IS NULL;
When you get to serious row counts, subqueries are usually a BEAR. Doing the above could make a 4-hour query take 15 minutes or less. At least, it's worked that way for me.
Todd A Wallace (twallace@mason1.gmu.edu) asked:
I am trying to write an SQL statement that returns some fraction of all the rows it would normally return. For instance:
SELECT firstname, lastname, city, state FROM bigdatabase WHERE state = "TX"
Ron Whiteleather (ronw@boi.hp.com) replied:
To return _approximately_ a fraction of the rows, try adding:
AND rowid = (trunc(rowid/x) * x)

where x is the 1/x fraction of rows you want returned. Again, this gives only an approximate fraction of rows and is dependent on how uniformly the data is physically distributed within the table. It does NOT guarantee any randomness.
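The filter passes exactly those rowids that are multiples of x, so about 1 row in x survives. A small Python sketch of that arithmetic, using hypothetical rowids 1..100:

```python
import math

x = 10
rowids = range(1, 101)  # pretend physical rowids, uniformly laid out

# rowid == trunc(rowid / x) * x holds only when rowid is a multiple of x
sampled = [r for r in rowids if r == math.trunc(r / x) * x]
print(len(sampled))  # → 10, i.e. roughly len(rowids) / x
```

In a real table rowids are neither dense nor uniform, which is why the SQL trick only approximates the fraction and guarantees no randomness.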
eg:

CREATE TEMP TABLE mytemp (prodno LIKE product.prodno, desc LIKE product.desc)

Try this out:

SELECT prodno, desc FROM product WHERE ROWID = -1 INTO TEMP mytemp

Not quite as clear as my preferred syntax, but it works fine :-)
Note that the following suggestion is no longer recommended by Informix and may cause problems in the future. In fact, in DSA version 7 it will NOT work, as the "query re-write" feature eliminates duplicates in the WHERE clause. From rizzo@fourgee.demon.co.uk Sun Apr 2 06:38:28 1995
How to convince the optimizer to use indexes YOU want:
For example:

table a_table (a char(2), b char(2), c char(2), d char(2), e char(2))
Index: index_1 on a_table(a, b)
Index: index_2 on a_table(c, d)
Columns a and b contain only very few unique values, and as a result are 'top heavy'. Columns c and d have a high number of unique values.
Select statement : SELECT * FROM a_table WHERE a = "1" AND b = "2" AND c = "3" AND d = "4"
Quite often the optimizer will choose to use index_1. Rewriting the select thus:
SELECT * FROM a_table WHERE a = "1" AND b = "2" AND c = "3" AND c = "3" AND c = "3" AND d = "4"
Produces a remarkable increase in speed (on one select, the speed difference we saw was from 62 minutes to 2 minutes) because it now uses index_2. (Apparently, specifying the same condition three times increases the filter's selectivity rating.)
Jonathan Leffler (johnl@informix.com) #include <disclaimer.h>:
To change the next inserted serial number to a value greater than its current value you can:
ALTER TABLE ... MODIFY (ser_col_name SERIAL(new_start_number))
To change the next inserted serial number to a value LESS than its current value you need to first reset the serial number to 1:
INSERT INTO table (serial_column) VALUES (2147483647);
INSERT INTO table (serial_column) VALUES (0);  -- Back to 1 again!
...then perform an ALTER TABLE (as described above).
Francis Chue discovered that in her case "SELECT COUNT(DISTINCT xxx)..." took about THIRTY minutes, but when changed to "SELECT UNIQUE xxx INTO TEMP XXX " and "SELECT COUNT(*) FROM TEMP XXX" it took just 7 minutes.
One of the largest omissions in Informix's SQL is the lack of an "UPPER" function, which is commonly found in other database products.
Here are a couple of solutions from June Tong which provide this functionality via stored procedures.
===========================================================================
Here are two ways to convert character strings to all upper case.
The first way consists of a combination of 3 stored procedures written by andersk@cybercom.se (Anders Karlsson) when he worked for Informix. It is the more flexible, as all three procedures are useful utilities on their own; however, it is also slower, since two stored procedures must be called for each character in the string.
For sheer speed, try the second way.
=================== SPL: Upper() ===================
--
-- Procedure: GetCharAt()
-- Get a character from a string at a specified position.
--
DROP PROCEDURE GetCharAt;
CREATE PROCEDURE GetCharAt(str VARCHAR(255), pos INTEGER)
RETURNING VARCHAR(1);
    DEFINE i INTEGER;
    IF pos > 1 THEN
        FOR i = 2 TO pos
            LET str = str[2,255];
        END FOR;
    END IF
    RETURN str[1,1];
END PROCEDURE;

--
-- Procedure: Upper()
-- Convert a string to uppercase.
--
DROP PROCEDURE Upper;
CREATE PROCEDURE Upper(str VARCHAR(255))
RETURNING VARCHAR(255, 0);
    DEFINE i INTEGER;
    DEFINE len INTEGER;
    DEFINE retstr VARCHAR(255);
    IF str IS NULL THEN
        RETURN NULL;
    ELSE
        LET len = LENGTH(str);
        LET retstr = '';
        FOR i = 1 TO len
            LET retstr = retstr || ToUpper(GetCharAt(str, i));
        END FOR;
        LET retstr = retstr[2,255]; -- BUG: req'd to strip off leading blank
        RETURN retstr;
    END IF;
END PROCEDURE;

--
-- Procedure: ToUpper()
-- Convert a single character to upper case.
--
DROP PROCEDURE ToUpper;
CREATE PROCEDURE ToUpper(fromchar VARCHAR(1,1))
RETURNING VARCHAR(1,1);
    IF fromchar = 'a' THEN RETURN 'A';
    ELIF fromchar = 'b' THEN RETURN 'B';
    ELIF fromchar = 'c' THEN RETURN 'C';
    ELIF fromchar = 'd' THEN RETURN 'D';
    ELIF fromchar = 'e' THEN RETURN 'E';
    ELIF fromchar = 'f' THEN RETURN 'F';
    ELIF fromchar = 'g' THEN RETURN 'G';
    ELIF fromchar = 'h' THEN RETURN 'H';
    ELIF fromchar = 'i' THEN RETURN 'I';
    ELIF fromchar = 'j' THEN RETURN 'J';
    ELIF fromchar = 'k' THEN RETURN 'K';
    ELIF fromchar = 'l' THEN RETURN 'L';
    ELIF fromchar = 'm' THEN RETURN 'M';
    ELIF fromchar = 'n' THEN RETURN 'N';
    ELIF fromchar = 'o' THEN RETURN 'O';
    ELIF fromchar = 'p' THEN RETURN 'P';
    ELIF fromchar = 'q' THEN RETURN 'Q';
    ELIF fromchar = 'r' THEN RETURN 'R';
    ELIF fromchar = 's' THEN RETURN 'S';
    ELIF fromchar = 't' THEN RETURN 'T';
    ELIF fromchar = 'u' THEN RETURN 'U';
    ELIF fromchar = 'v' THEN RETURN 'V';
    ELIF fromchar = 'w' THEN RETURN 'W';
    ELIF fromchar = 'x' THEN RETURN 'X';
    ELIF fromchar = 'y' THEN RETURN 'Y';
    ELIF fromchar = 'z' THEN RETURN 'Z';
    END IF;
    RETURN fromchar;
END PROCEDURE;

=================== SPL: Upper() ===================
--
-- Procedure: Upper()
-- Convert a string to uppercase.
--
DROP PROCEDURE upper;
CREATE PROCEDURE upper (str VARCHAR(255))
RETURNING VARCHAR(255);
    DEFINE i INTEGER;
    DEFINE l INTEGER;
    DEFINE retstr VARCHAR(255);
    IF str IS NULL THEN
        RETURN NULL;
    ELSE
        LET l = LENGTH(str);
        LET retstr = '';
        FOR i = 1 TO l
            IF str[1,1] BETWEEN "a" AND "z" THEN
                IF str[1,1] = 'a' THEN LET retstr = retstr || 'A';
                ELIF str[1,1] = 'b' THEN LET retstr = retstr || 'B';
                ELIF str[1,1] = 'c' THEN LET retstr = retstr || 'C';
                ELIF str[1,1] = 'd' THEN LET retstr = retstr || 'D';
                ELIF str[1,1] = 'e' THEN LET retstr = retstr || 'E';
                ELIF str[1,1] = 'f' THEN LET retstr = retstr || 'F';
                ELIF str[1,1] = 'g' THEN LET retstr = retstr || 'G';
                ELIF str[1,1] = 'h' THEN LET retstr = retstr || 'H';
                ELIF str[1,1] = 'i' THEN LET retstr = retstr || 'I';
                ELIF str[1,1] = 'j' THEN LET retstr = retstr || 'J';
                ELIF str[1,1] = 'k' THEN LET retstr = retstr || 'K';
                ELIF str[1,1] = 'l' THEN LET retstr = retstr || 'L';
                ELIF str[1,1] = 'm' THEN LET retstr = retstr || 'M';
                ELIF str[1,1] = 'n' THEN LET retstr = retstr || 'N';
                ELIF str[1,1] = 'o' THEN LET retstr = retstr || 'O';
                ELIF str[1,1] = 'p' THEN LET retstr = retstr || 'P';
                ELIF str[1,1] = 'q' THEN LET retstr = retstr || 'Q';
                ELIF str[1,1] = 'r' THEN LET retstr = retstr || 'R';
                ELIF str[1,1] = 's' THEN LET retstr = retstr || 'S';
                ELIF str[1,1] = 't' THEN LET retstr = retstr || 'T';
                ELIF str[1,1] = 'u' THEN LET retstr = retstr || 'U';
                ELIF str[1,1] = 'v' THEN LET retstr = retstr || 'V';
                ELIF str[1,1] = 'w' THEN LET retstr = retstr || 'W';
                ELIF str[1,1] = 'x' THEN LET retstr = retstr || 'X';
                ELIF str[1,1] = 'y' THEN LET retstr = retstr || 'Y';
                ELSE LET retstr = retstr || 'Z';
                END IF;
            ELSE
                LET retstr = retstr || str[1,1];
            END IF;
            LET str = str[2,255];
        END FOR;
        LET retstr = retstr[2,255]; -- BUG: req'd to strip off leading blank
        RETURN retstr;
    END IF;
END PROCEDURE;

June Tong:
I should mention that these both strip a leading character off the converted value before returning it. This is because of a bug in most currently-available versions where concatenating anything to an empty VARCHAR variable caused the empty VARCHAR to be expanded to a space, thus:
DEFINE retstr VARCHAR(255);
LET retstr = '';
LET retstr = retstr || 'A';

would result in ' A'. As a result, before returning, the converted value is first stripped of its first character, as indicated by the comment:
LET retstr = retstr[2,255]; -- BUG: req'd to strip off leading blank

If you are using a version where this bug has been fixed, you should remove this line from the stored procedure. Be careful only to remove the line with the BUG comment -- other instances of this statement are required for the stored procedures to function properly.
June
delete from sometable as a
where rowid <> (select min(rowid) from sometable where keycol = a.keycol)
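The delete keeps, for each key value, only the row with the smallest rowid. A Python sketch of that logic on hypothetical (rowid, keycol) pairs:

```python
# Hypothetical table rows as (rowid, keycol) pairs, with duplicate keys.
rows = [(1, "K1"), (2, "K2"), (3, "K1"), (4, "K2"), (5, "K3")]

# select min(rowid) ... group by keycol
min_rowid = {}
for rowid, key in rows:
    if key not in min_rowid or rowid < min_rowid[key]:
        min_rowid[key] = rowid

# delete every row whose rowid is not the minimum for its key
survivors = [(rowid, key) for rowid, key in rows if min_rowid[key] == rowid]
print(survivors)  # → [(1, 'K1'), (2, 'K2'), (5, 'K3')]
```

One copy of each key survives; which copy is arbitrary from the application's point of view (it is simply the one stored first, physically).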
If the table is fragmented and does not have rowids, then:
On 8th Dec 1997 johnl@informix.com (Jonathan Leffler) wrote:-
BEGIN WORK;
SELECT DISTINCT * FROM Table INTO TEMP Temp1;
DELETE FROM Table WHERE 1 = 1;
INSERT INTO Table SELECT * FROM Temp1;
COMMIT WORK;
This is a reasonable solution for small to medium tables when you have enough disk space available for the whole temporary table.
Michael Reed mreed@radiix.com writes:
There are at least three reasons to NOT create referential integrity via create table statements.
First, if you explicitly create the indexes, and then add the constraints, the existing indexes will be used. Just because you may want to drop RI at some point doesn't mean that you necessarily want the indexes to go away as well. Indexes will be dropped with the constraint if they were created implicitly. :-(
Second, should you fragment your tables, you will probably want to create your indexes explicitly and even detach them from the table. You can't do that implicitly.
Third, implicitly created indexes cannot be altered to cluster.
Pavel Kazenin (pavel@cso.spb.su) writes:
Unfortunately, there is no way to create a referential constraint without creating a pair of indexes (below is an extract from "Guide to SQL: Reference", page 7-71):
" [...]
Primary key, unique, and referential constraints are implemented either as an ascending index that allows only unique entries or an ascending index that allows duplicates. When one of these constraints is placed on a column, the database server performs the following functions:
[...] "
Below is my humble opinion about Ref. Constraints:
Reasons to create them:
Reasons to NOT create them:
Apart from referential constraints there are other types of constraints:-
Note: Check constraints can be used to validate dates and help with the Year 2000 problem
In November 1997 johnl@informix.com (Jonathan Leffler) writes:
And don't forget that you could probably put a check constraint on most DATE columns, such as:
CREATE TABLE ...
(
    ...
    SomeDate DATE NOT NULL
        CHECK (SomeDate >= MDY(1, 1, 1970)) CONSTRAINT cN_datecheck,
    ...
)
You can adjust the criteria appropriately, using some small set of control dates. You might also want to apply upper bounds, too, or instead. Eg:
BirthDate DATE NOT NULL
    CHECK (BirthDate BETWEEN MDY(1, 1, 1880) AND MDY(12, 31, 2010))
    CONSTRAINT cN_datecheck,
tgirsch@iname.com (Thomas J. Girsch) writes:-
If you create an SQL text file, you can run it with dbaccess by doing:
$ dbaccess <database> <sql-file>

This will run the SQL commands without firing up the menu interface. Note, however, that all the statements will run, even if some in the middle fail. For example, if you run this batch that way:
BEGIN WORK;
INSERT INTO history
SELECT *
FROM current
WHERE month = 11;
DELETE FROM current
WHERE month = 11;
COMMIT WORK;
If the INSERT statement fails, the DELETE statement still executes, as does the COMMIT WORK. This could be very bad. This behavior can be changed by setting an undocumented (to my knowledge) environment variable. [Maintainer's note: on 18th Dec 1997 richard_thomas@yes.optus.com.au (Richard Thomas) corrected the name of the environment variable]
DBACCNOIGN=1
dberg@informix.com (David Berg) writes (paraphrased):-
Sysprocplan contains the optimized query plan for a procedure. That plan is updated at runtime if a structural change had been made to any database object in the query tree, or if UPDATE STATISTICS is run on any table in the query tree. Any time the plan is updated, locks must be placed on the respective rows in sysprocplan.
If a stored procedure refers to a temporary table, the query plan will be rebuilt every time it is run; this was fixed in 5.03.
NOTE: Every time you do UPDATE STATISTICS FOR TABLE x also do UPDATE STATISTICS FOR PROCEDURE y.
Another thing you can try is running the procedures with SET OPTIMIZATION LOW; this will tell the optimizer not to attempt reoptimizing the procedures at runtime, even if they would normally be reoptimized.
On 17th Dec 1997 johnl@informix.com (Jonathan Leffler) wrote:-
Assuming you are using either DB-Access or ISQL, the answer is to set the environment variable DBFLTMASK=6 to get 6 decimal places, as in:
CREATE TEMP TABLE t
(
col_a DECIMAL(8,4) NOT NULL,
col_b DECIMAL(8,4) NOT NULL,
col_c DECIMAL(8,4) NOT NULL
);
INSERT INTO t VALUES(1.2345, 3.4567, 5.6789);
SELECT (col_a + col_b) / col_c AS value FROM t;
value
0.826075
On 6th Aug 1998 Pete Stiglich wrote:-
SQRT(((SUM(X * X) * COUNT(X)) - (SUM(x) * SUM(x))) / (COUNT(x) * (COUNT(x) - 1)))
The COUNT(x) counts only the non-null x values, which matches the rows used by the SUMs (NULLs are ignored by both). The -1 in the divisor makes this the best (unbiased sample) estimate of the standard deviation.
This gave me the same answer (to 6 sig figs) as the built in STDEV() on a table with a number of different sets of data. The difference is that the SQRT function returns a FLOAT value but STDEV returns a DECIMAL(32).
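The SQL expression can be checked against a standard library implementation. A Python sketch with made-up data, comparing the n*SUM(x*x) - SUM(x)^2 over n*(n-1) arrangement with statistics.stdev:

```python
import math
import statistics

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # arbitrary sample data
n = len(xs)
sum_x = sum(xs)
sum_xx = sum(x * x for x in xs)

# Same arrangement as the SQL: SQRT((SUM(x*x)*n - SUM(x)^2) / (n*(n-1)))
sql_style = math.sqrt((sum_xx * n - sum_x * sum_x) / (n * (n - 1)))

print(round(sql_style, 6), round(statistics.stdev(xs), 6))
```

Both lines agree because the single-pass formula is algebraically identical to the usual two-pass sum-of-squared-deviations form (up to floating-point rounding).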
On 11th September 1999 jleffler@earthlink.net (Jonathan Leffler) wrote:-
How can I convert a DATETIME HOUR TO SECOND value to an integer number of seconds from 00:00?
You can use:
((INTERVAL(0) SECOND(9) TO SECOND +
(dt - DATETIME(00:00:00) HOUR TO SECOND)) || "") + 0
Where dt is your DATETIME HOUR TO SECOND variable. The subtraction converts from a DATETIME to an INTERVAL, and the addition ensures that the result is an INTERVAL in seconds.
Beware that it is not documented anywhere that the result of adding two intervals takes on the type of the first operand, but that seems to work at the moment.
It is neater in a stored procedure; you declare the SP as returning an INTEGER, but you assign the expression with addition and subtraction to a string and simply return the string. Informix implicitly takes care of the conversions.
You can also go to the archives (either at the IIUG web site, http://www.iiug.org/, or at DejaNews, http://www.deja.com/ and dig out the old discussions on the subject. You should even find some ESQL/C or C code from me which explicitly returns mixed intervals into selected other units; it returns decimals rather than integers so that if you pass in an INTERVAL HOUR TO SECOND and ask for hours, you get back an appropriate fractional value. It also works around little issues such as the fact the you cannot have more than 10**9 seconds in an interval.
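The interval expression above is just computing seconds since midnight. The same arithmetic, sketched in Python with a hypothetical HOUR TO SECOND value:

```python
# dt = 14:30:15 (a hypothetical DATETIME HOUR TO SECOND value)
h, m, s = 14, 30, 15

# Subtracting DATETIME(00:00:00) and casting to INTERVAL SECOND(9) TO SECOND
# amounts to: hours * 3600 + minutes * 60 + seconds
seconds = h * 3600 + m * 60 + s
print(seconds)  # → 52215
```

The string concatenation and "+ 0" in the SQL version exist only to coerce the INTERVAL result into a plain integer.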
On 10th October 1999 djw@smooth1.demon.co.uk (David Williams) wrote:-
The only efficient way is to store the data in uppercase in an additional column which is indexed, and search on that column. If DataBlades are available, you could create a functional index on UPPER(mycol).
On 16th August 2000 djw@smooth1.demon.co.uk (David Williams) wrote:-
On UNIX create a shell script which sets up the environment and then uses
echo 'execute procedure fred()' | dbaccess <database>
Use crontab -e (see man crontab for details) to create an entry that runs this script at the required times.
On Windows, create a batch file to do the same thing and use the 'Scheduled Tasks' system tool to run it when required.
On 21st March 2000 jleffler@earthlink.net (Jonathan Leffler) wrote:-
No, it isn't.
DB-Access (and ISQL and I4GL and D4GL and ... anything else that implements LOAD or UNLOAD) implements LOAD and UNLOAD via subroutine library calls. The subroutines are not even the same in each product -- ISQL and I4GL share the code, but the others are separate.
It is not actually an SQL statement. You cannot use either LOAD or UNLOAD in any ESQL/C program as a simple ESQL/C statement. A close scrutiny of the manual marks it as DB-Access only.
On 8th June 1999 june_t@hotmail.com (June Tong) wrote:-
Quite simply, the old way of converting numeric to character was causing some tests to fail which some people thought should not fail. E.g. if you had "0001" in your character field, and compared it to integer 1: should they be equal or not? Well apparently some people (customers, I might add) thought they should, but if you convert the integer to char, and get "1", then they don't. So Informix "fixed the bug", and now characters get converted to integer, rather than the other way around. I suppose Informix could have simply been uncooperative like some other databases I'm working with now, which force you to call CONVERT or TO_CHAR or whatever and convert it yourself. That is still an option for you, TO_CHAR, I mean, if you want to change which field gets converted.
On 6th December 1999 rferdy@americasm01.nt.com (Rudy Fernandes) wrote:-
Informix's IDS2000 does support statement caching (see the ONCONFIG parameters STMT_CACHE and STMT_CACHE_SIZE); the default is no caching. However, preparing and reusing statements (which is on a per-connection basis) does provide better results, depending on how often prepared statements are reused. In tests I carried out in the context of our application, this is how they stacked up:
Arbitrary CPU units used by Informix to accomplish 1000 iterations of a set of SQL statements after establishing a connection
In other words, preparing/reusing made the application more than 3 times as fast as Vanilla and more than twice as fast as Engine caching. Caution : Results will vary [ :) Am I sounding like a salesman?]
In our 3 tier context wherein our Application Server establishes a handful of permanent connections to support the hundreds or thousands of end-users, preparing/reusing gives us significant benefits.
On 16th September 2000 obnoxio@hotmail.com (Obnoxio The Clown) wrote:-
LIKE is portable, MATCHES is not portable
On 20th November 2000 djw@smooth1.demon.co.uk (David Williams) wrote:-
SELECT DISTINCT is portable, SELECT UNIQUE is not portable
Also outer join syntax varies across engines