Tag Archives: i5

PHP CLI on IBM i PASE Memory Limit Problem (AIX OS via QP2TERM)

I ran into one of the hardest problems to figure out: the dreaded “Segmentation fault” and “Illegal instruction” errors while running php-cli in a QP2TERM session (a PASE/AIX, i.e. “IBM's unix”, shell). The exact errors were:

php-cli[9]: 12345 Illegal instruction (coredump)   (when run non-interactively)

php-cli[9]: 12345 Segmentation fault (coredump)   (when run interactively)

Note: php-cli is the shell script that calls the PHP interpreter (on line 9 of the script) from the command line, and 12345 is the AIX process ID that had the issue. A segmentation fault means the process addressed memory outside of its data segment, which has a predefined size (256 MB by default). The coredump should appear as a data dump in the system log.

In /usr/local/zendsvr/etc/php.ini I tried to increase the memory limit:
memory_limit = 512M ; Maximum amount of memory a script may consume (512M). 

I tried to set the value in my script with
ini_set('memory_limit', '512M');

I even tried to set it on the command line with the -d option
/usr/local/zendsvr/bin/php-cli -d memory_limit=512M myscript.php

I figured out the amount of memory my script was using by reducing the number of records it was processing and running the following echo command to get the peak memory usage:

echo "Memory Peak Usage: ".(memory_get_peak_usage()/1024/1024)." MB";

The actual memory bottleneck was happening further up the chain, at the AIX process/job level. By default an AIX 32-bit process gets a single 256 MB data segment (each segment is 0x10000000 bytes, i.e. 2^28), and it can be granted additional 256 MB data segments, up to 8 more by default; the @DSA option covered below makes more segments available.

Solution

Set the LDR_CNTRL environment variable in the parent process (the php-cli shell script) to allow multiple data segments, run your PHP script, and then unset the variable so you don't affect other processes. In the example below, MAXDATA=0xB0000000 allows a data area of 0xB0000000 bytes, i.e. eleven 256 MB segments, roughly 2.75 GB. Modify the shell script /usr/local/zendsvr/bin/php-cli and wrap the call to the PHP interpreter ($ZCE_PREFIX/bin/php "$@") with the export and unset of LDR_CNTRL as shown below:

export LDR_CNTRL=MAXDATA=0xB0000000@DSA
$ZCE_PREFIX/bin/php "$@"
unset LDR_CNTRL
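As a quick sanity check on the arithmetic above (plain shell arithmetic, nothing AIX-specific), you can decode a MAXDATA value into segments and megabytes:

```shell
# Sketch: decode a MAXDATA value into 256 MB segments and megabytes.
# Assumes each AIX data segment is 0x10000000 bytes (256 MB).
MAXDATA=0xB0000000
bytes=$((MAXDATA))                 # shell arithmetic accepts the hex literal
segments=$((bytes / 0x10000000))   # number of 256 MB segments
mb=$((bytes / 1024 / 1024))
echo "MAXDATA=$MAXDATA -> $segments segments, $mb MB"
```

For 0xB0000000 this prints 11 segments and 2816 MB (about 2.75 GB), which is the data area the wrapper script above is asking for.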

Note that putenv() from inside the PHP script should not work, because the parent process has to set LDR_CNTRL before the PHP interpreter starts, not the PHP script itself:

putenv("LDR_CNTRL=MAXDATA=0xB0000000@DSA");

Use echo getenv("LDR_CNTRL"); to see what it is set to in your PHP script.

If you are using FastCGI with Apache, you can modify the config file (/www/zendsvr/conf/fastcgi.conf) and add the following to the end of the line starting with Server type="application/x-httpd-php" …

SetEnv="LDR_CNTRL=MAXDATA=0xB0000000@DSA"

Caution

If you are hitting this limit you should probably take a hard look at the program you created: there may be something using memory inefficiently, and fixing that is better than raising the memory limit.

What is this DSA?

“The @DSA which can be appended to this value allows the boundary between private data and shared memory to be changed, allowing more segments to be used and the heap to start in segment 3. It also allows shared objects to be moved into segment 2 to give more contiguous space (See Figure 4).” – http://ibmsystemsmag.com/CMSTemplates/IBMSystemsMag/Print.aspx?path=/aix/administrator/systemsmanagement/Avoiding-Those–Segmentation-Fault–Failure-Messag

More info here: https://www.ibm.com/support/knowledgecenter/en/ssw_aix_61/com.ibm.aix.genprogc/lrg_prg_support.htm

These issues might also go away when we move from a 32-bit to a 64-bit PHP build.


Migrating from Zend Core for i to Zend Server for IBM i – My Experience

I’m currently working on migrating from Zend Core 2.6.0 to Zend Server 5.6 for IBM i. Big thanks to Alan Seiden, who has some very helpful blog posts on this topic. I’d recommend checking out:

http://www.alanseiden.com/2010/04/21/differences-between-zend-core-and-zend-server-on-ibm-i/

and

http://www.alanseiden.com/2011/02/08/qa-upgrading-from-zend-core-to-zend-server/

Here are my tips from migrating:

  1. If you were using the i5_* functions for database connections you can continue using Aura Equipements' toolkit, but I think long term you’d be better off using the PHP db2_* functions. Do not use the Zend Framework’s DB2 class, since parameter binding (db2_bind_param) doesn’t work. The ZF team can’t implement it correctly right now and probably never will; I’ve been waiting 3 years now for them to make a change…
  2. Use http://as400:2001/HTTPAdmin to change the apache config for Zend Server and to start/stop the server
  3. You’ll need to transfer your files from /www/zendcore to /www/zendsvr
  4. Give permissions to QTMHHTTP:
    Run STRQSH
    cd /www/zendsvr/htdocs
    chmod -R 770 .
    chown -R qtmhhttp .
  5. Modify the httpd.conf file and compare your old conf file to see if changes need to be made
    /www/zendcore/conf/httpd.conf
    /www/zendsvr/conf/httpd.conf
  6. Check your system CCSID value (dspsysval qccsid). If the value is 65535, add the following two directives to the Apache configuration file (/www/zendsvr/conf/httpd.conf) and then stop and start Apache:
    DefaultFsCCSID 37
    CGIJobCCSID 37
  7. Edit the php.ini file and add a different session path (edit /usr/local/zendsvr/etc/php.ini)
    session.save_path = “/tmp/ZS”
  8. Change scripts that reference www/zendcore to www/zendsvr
  9.  Recreate any NFS mounts since files might have moved into /www/zendsvr
  10. If you’re using Zend Framework you might want to continue using the old version that Zend Core shipped, so modify your php.ini include path to include it rather than the new version (currently 1.11.10)
    include_path = ".:/usr/local/Zend/ZendFramework/library:/usr/local/zendsvr/share/pear:/usr/local/ZendSvr/share/ToolkitApi"
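Steps 3, 4 and 8 above can be sketched as one shell function. This is my own illustration (the migrate_docroot helper is hypothetical, not part of Zend Server), and the qtmhhttp chown is commented out because it has to run on the IBM i with enough authority:

```shell
# Sketch of steps 3, 4 and 8: copy the docroot, fix permissions,
# and rewrite hard-coded references to the old path.
migrate_docroot() {
  old=$1   # e.g. /www/zendcore/htdocs
  new=$2   # e.g. /www/zendsvr/htdocs
  mkdir -p "$new"
  cp -R "$old/." "$new/"
  chmod -R 770 "$new"
  # chown -R qtmhhttp "$new"   # run on the IBM i itself with enough authority
  grep -rl 'www/zendcore' "$new" | while read -r f; do
    sed 's|www/zendcore|www/zendsvr|g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
  done
}
```

You would call it as migrate_docroot /www/zendcore/htdocs /www/zendsvr/htdocs, after verifying the copy against your own directory layout.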
Benefits of upgrading from Zend Core:
  1. PERFORMANCE!  I’m seeing scripts run between 18% and 400% faster. One script that used to take 40 seconds now takes only 8 seconds.
  2. Only 1 apache configuration to worry about now
  3. Latest PHP

DB2 SQL – Select query to get somewhat unique record numbers

Many old tables cannot be changed because compiled RPG programs and display files rely on the table’s record format. Changing the table would require modifying those programs and re-compiling them. That rules out altering the table to create a ROWID or IDENTITY field.

I tried to use the SQL statement "SELECT row_number() over() FROM MY_TABLE" to get the row number, but that only returns a relative number, not the actual record ID you would see on the green screen.

Therefore, to get a unique identifier for the row you can use RRN(MY_TABLE) (Relative Record Number) to get the record ID. This will only work if the table won’t be reorganized (e.g. with RGZPFM), since a reorg renumbers the records. This is how you would get the record ID:

SELECT RRN(MY_TABLE) AS CUSTOMER_ROWID, MY_TABLE.*
FROM MY_TABLE

When you need to update or delete that record, you would do the following:

UPDATE MY_TABLE
SET FIELD1='35'
WHERE RRN(MY_TABLE)=2;

Which would update the row with relative record number 2.

Altering the Table to create a unique column

If you’re able to alter the table, you could add a key field with the identity column attribute, which will automatically generate the value/key. Remember that this may require you to re-compile ALL compiled programs (RPG, etc.) that use this table. The value will be generated even if you’re using a non-SQL interface like RPG or CL. You get to set the column name, the data type (BIGINT in the example below), what number to start the count at (START WITH 1) and how much to increment it by (INCREMENT BY 1). CYCLE means it will restart at 1 after reaching the highest value a BIGINT can hold. GENERATED ALWAYS means the DB always generates the value; you could instead use GENERATED BY DEFAULT, which only generates a value when one wasn’t supplied for the identity column.

ALTER TABLE CUSTOMER
    ADD COLUMN ID BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY
    (START WITH 1 INCREMENT BY 1 CYCLE);
SELECT ID,CUSTOMER.* FROM CUSTOMER -- See the ID col next to all the rows
--Inserting rows into a table with an identity column
INSERT INTO CUSTOMER (NAME) VALUES('BOB');
INSERT INTO CUSTOMER VALUES(DEFAULT,'BOB');
--If you don't want the system to generate the values when you're pulling from another table, use OVERRIDING SYSTEM VALUE
INSERT INTO CUSTOMER OVERRIDING SYSTEM VALUE
   (SELECT * FROM THIS_WEEK_CUSTOMERS)
-- Or you can override per insert statement
INSERT INTO CUSTOMER OVERRIDING SYSTEM VALUE VALUES(32,'BOB');

You can also use a sequence object to re-sequence a preexisting ID field. Just make sure you preserve referential integrity.

CREATE SEQUENCE MYLIB.SEQNUM AS DEC(7, 0);
UPDATE MYLIB.MYTABLE SET KEYFIELD1 = DIGITS(NEXT VALUE FOR MYLIB.SEQNUM);
DROP SEQUENCE MYLIB.SEQNUM;

--or

SELECT NEXT VALUE FOR MYLIB.SEQNUM AS NEXT_SEQ_NUM FROM sysibm.sysdummy1
--Use the NEXT_SEQ_NUM value

Set a Primary Key constraint to ensure uniqueness

The code below will add a primary key constraint on the column “ID”. This forces uniqueness on that column, and INSERT or UPDATE statements will fail if they try to add a non-unique ID. The identity attribute only generates values; it doesn’t enforce uniqueness.

ALTER TABLE MYLIB.MYTABLE ADD PRIMARY KEY (ID)

 

Note: the SQL function row_number() with over() is an OLAP specification and can only be used in SELECT statements, which makes it somewhat limited…

SELECT ROWID,KEYFIELD1,FIELD1
FROM (SELECT row_number() over (order by KEYFIELD1) as ROWID, KEYFIELD1, FIELD1
      FROM MY_TABLE) as Table1

An OLAP specification is not valid in a WHERE, VALUES, GROUP BY, HAVING, or SET clause, or join-condition in an ON clause of a joined table. An OLAP specification cannot be used as an argument of an aggregate function in the select-list.

I wish the DB2 UDB had an easier way of retrieving a unique record ID in sql.

Get Last Identity Id

IDENTITY_VAL_LOCAL() – will return the last ID generated for an identity column in the current job

--Get the last id generated for this job
VALUES IDENTITY_VAL_LOCAL() INTO :MYVARIABLE
-- Insert records and get the generated ids back
SELECT ID FROM FINAL TABLE(/*INSERT statement, which may insert multiple rows*/)

DB2 Get next identity value

-- Get a good guess of what the next value will be:
SELECT TABLE_SCHEMA, TABLE_NAME, NEXT_IDENTITY_VALUE
FROM QSYS2.SYSPARTITIONSTAT
WHERE TABLE_NAME = 'CUSTOMER';

Alternative Solution: UUID AKA GUID

A universally unique identifier (UUID) is a 128-bit number that is randomly generated, so duplicates are nearly impossible: even after creating hundreds of trillions of records, the chance of a duplicate is about 1 in a billion – https://en.wikipedia.org/wiki/Universally_unique_identifier. With this technique the application creates the UUID and saves it to the database, rather than relying on the database to generate the unique identifier. For LARGE datasets where you need to sort or compare UUIDs (in a WHERE clause, a JOIN, a DB index, etc.) there will be a performance hit, due to the randomness of the data and to the value being 128 bits instead of the 32 bits of an integer. It’s easier to replicate a row that has a UUID, since you’re not bound to the database handing out IDs. Another advantage is that a UUID is unique enough to be used across multiple applications. One must weigh the pros and cons of both approaches and analyze the system being built to see which is the best fit.
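As a sketch of the application-generated approach (a hypothetical gen_uuid4 helper in shell, nothing DB2-specific): a version-4 UUID is just 16 random bytes with the version and variant nibbles forced per RFC 4122:

```shell
# Sketch: generate a random (version 4) UUID in the application layer,
# instead of having the database hand out identifiers.
gen_uuid4() {
  # 16 random bytes as 32 lowercase hex characters
  hex=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
  # Variant nibble must be 8, 9, a or b per RFC 4122
  variant=$(printf '%x' $(( (0x${hex:16:1} & 0x3) | 0x8 )))
  # Layout 8-4-4-4-12, with the version nibble hard-coded to 4
  printf '%s-%s-4%s-%s%s-%s\n' \
    "${hex:0:8}" "${hex:8:4}" "${hex:13:3}" \
    "$variant" "${hex:17:3}" "${hex:20:12}"
}
gen_uuid4
```

The PHP script would build the string the same way (or call a library) and then insert it into a CHAR(36) column, so the database never has to generate the key.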