This chapter provides an overview of optimizing General Ledger performance and discusses how to:
Use non-shared tables.
Use indexes.
Use partition IDs.
Use Average Daily Balance (ADB) incremental calculations.
Update summary ledgers incrementally.
Optimize PS/nVision performance.
Many functions in General Ledger are run as background processes. If more than one process tries to use the same database table at the same time, contention can occur for the same row of data. This can cause rollbacks. Data retrieval can be slowed by queries on non-indexed data. Some strategies for avoiding system slowdowns are covered in this chapter.
Begin enhancing the performance of your online pages, Application Engine, SQRs, and COBOL background processes only after you:
Configure your ChartFields.
Populate your records with data.
Understand General Ledger background processing.
This section discusses how to:
Use PeopleTools temporary tables.
Use GL non-shared tables.
Set up GL non-shared tables.
Page Name | Object Name | Navigation | Usage
Shared Table Statistics | TEMP_TBL_STATS | General Ledger, Monitor Background Process, Shared Table Statistics | Access a log of the times a process used a shared base working table because a non-shared table was not available.
Non-Shared Table Maintenance | NONSHARED_TBL_ASGN | General Ledger, Monitor Background Process, Non-Shared Table Maintenance | Add, delete, or modify non-shared tables.
PeopleTools provides a feature that allows applications to dedicate a specific instance of a PeopleTools temporary table to each GL Application Engine program run. This concept is similar to the GL non-shared table design in that it drastically reduces the risk of table contention. The feature is supported in some of the General Ledger Application Engine processes. The following General Ledger Application Engine processes use PeopleTools temporary tables:
Journal Edit.
Combination Editing.
Allocation Copy Utility.
Inter/IntraUnit Common Processor.
Journal Generator.
ADB Post and Calculation.
Budget Processor.
Allocations.
Summary Ledger Build.
See Enterprise PeopleTools PeopleBooks, PeopleSoft Application Engine, “Temporary Tables”
Application Engine and COBOL SQL processes use GL non-shared tables to increase performance.
The ledger record (PS_LEDGER) and the journal line record (PS_JRNL_LN), together with its parent journal header record (PS_JRNL_HEADER), are the most heavily accessed records. While these records are the focus of this chapter, you can apply the same techniques to any record.
Several GL Application Engine and COBOL SQL processes use base working tables to process large volumes of temporary data. Base working tables are shared working temporary tables that can be used by more than one program to process temporary data. You can potentially hamper performance if you run concurrent processes that use the same base working table.
To increase performance, these processes use GL non-shared tables in place of the shared working tables:
Closing.
Journal Posting.
Ledger Load.
MultiCurrency.
Open Item Accounting.
GL non-shared tables are named by appending the non-shared table version number (TEMP_TBL_NUM) from the TEMP_TBL_ASGNM table to the base working table name (TEMP_TBL_NAME). Only the process that reserves a non-shared table can process transactions against that table.
The General Ledger Application Engine and COBOL SQL processes use the TEMP_TBL_ASGNM table to identify and reserve a non-shared table. The following table describes the fields in TEMP_TBL_ASGNM:
Column | Description
TEMP_TBL_NAME | Base working table name.
TEMP_TBL_NUM | Non-shared table version number.
IN_USE_SW | Indicator that specifies whether the non-shared table is in use.
PROCESS_INSTANCE | The process instance of the process that has the table reserved.
Process for Assigning GL Non-Shared Tables
The process first determines whether a non-shared table exists by matching the base working table name (shared table) of the process to a base working table name in TEMP_TBL_ASGNM that has the IN_USE_SW indicator set to N:
If a non-shared table is available, it reserves it.
The process reserves the non-shared table by updating the IN_USE_SW indicator to Y in the TEMP_TBL_ASGNM table. No other process can use the non-shared table as long as the IN_USE_SW indicator is Y. The process updates the PROCESS_INSTANCE to identify which process has the non-shared table reserved.
If a non-shared table is not available, the process uses the base working table (instead of a non-shared table) and inserts a row into the TEMP_TBL_STATS table.
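Conceptually, the reservation is a short transaction against TEMP_TBL_ASGNM. The following SQL is only an illustrative sketch of that logic, not the delivered code; it assumes the record is implemented as table PS_TEMP_TBL_ASGNM, and the base working table name and process instance are example values:

-- find the lowest-numbered free instance for this base working table
SELECT MIN(TEMP_TBL_NUM) FROM PS_TEMP_TBL_ASGNM
WHERE TEMP_TBL_NAME = 'JRNL_LN_TMP' AND IN_USE_SW = 'N';

-- reserve the instance returned (2 here) so no other process can use it
UPDATE PS_TEMP_TBL_ASGNM
SET IN_USE_SW = 'Y', PROCESS_INSTANCE = 1234
WHERE TEMP_TBL_NAME = 'JRNL_LN_TMP' AND TEMP_TBL_NUM = 2;

If the SELECT returns no instance, the process falls back to the shared base working table and logs a row in TEMP_TBL_STATS, as described next.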
The TEMP_TBL_STATS table provides a way to monitor the load on base working tables. The statistics can help you decide whether more non-shared temporary tables are needed. The following table describes the fields in TEMP_TBL_STATS:
Column | Description
TEMP_TBL_NAME | Base working table name.
PROCESS_INSTANCE | The process that attempted to reserve a non-shared table.
DATE_ASSIGNED | Process run date.
The TEMP_TBL_ASGNM table is delivered with four non-shared tables for each base working table. If these do not meet your processing requirements, use the GL Non-Shared Table Maintenance page to add more non-shared table entries to TEMP_TBL_ASGNM.
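Before adding entries, you can gauge how often processes fell back to a shared base working table with a query like this sketch (it assumes the record is implemented as table PS_TEMP_TBL_STATS):

SELECT TEMP_TBL_NAME, COUNT(*)
FROM PS_TEMP_TBL_STATS
GROUP BY TEMP_TBL_NAME
ORDER BY 2 DESC;

A base working table that appears frequently in this result is a candidate for additional non-shared table entries.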
Adding a Non-Shared Table
To add a non-shared table:
Determine if you need to create a non-shared table entry in TEMP_TBL_ASGNM.
On the Shared Table Statistics page, check the TEMP_TBL_STATS table to determine if the usage of the base working table justifies creating additional non-shared tables.
Create the GL non-shared table in PeopleSoft Application Designer.
Save the record definition from an existing non-shared table or from the base working table. You should create the non-shared table from an existing non-shared table rather than from the base working table, because non-shared tables usually have fewer fields in the index structure than the base working table.
Create a GL non-shared table entry in TEMP_TBL_ASGNM using the GL Non-Shared Tables page.
Add a non-shared table for a custom base working table.
If you have created your own base working tables, add non-shared table entries to your Base Working Tables entries (repeat steps 1 to 3). Note that most non-shared tables do not require any unique indexes.
Viewing Shared Table Statistics
Access the Shared Table Statistics page.
This log helps you to determine whether you need to create additional non-shared table entries in TEMP_TBL_ASGNM.
Note. The process inserts a log row in TEMP_TBL_STATS only if non-shared table entries exist for the base working table in TEMP_TBL_ASGNM.
Record (Table) Name | Enter the name of the shared table whose statistics you want to view.
Non-Shared Tables | Displays the number of non-shared tables currently created.
Table Name | Displays the base working table name.
Process Instance | Displays the process that attempted to reserve a non-shared table.
Job ID | Displays the job ID of the process that attempted to reserve a non-shared table.
Date Assigned | Displays the run date of the process that attempted to reserve a non-shared table.
Adding Instances of General Ledger Non-Shared Tables
Access the Non-Shared Table Maintenance page.
You add instances of non-shared tables to the TEMP_TBL_ASGNM table so that the non-shared table is available for use by the batch processes.
To add a new instance of a GL non-shared table, enter its name in the Table Name field and click the Search button. All instances of that table are displayed. Click the Add icon to add a new instance.
Table Name | Enter or select the name of the table that you want to maintain. Leave this field blank to view all tables available for maintenance.
Process Instance | Enter or select a specific process instance to maintain. Leave this field blank to view all process instances available for maintenance.
Reset | Click to reset the In Use field to No, which removes the check mark from the check box. Select Reset regularly for all tables, but make sure that no one is using the system. If a process aborts, the In Use flag remains set to Yes. To free the table for other processes, you must reset the flag to No.
Warning! If you reset a process instance while it is running, it can cause data integrity problems.
Reset does not clear the contents from a table. However, the table is cleared the next time it is assigned.
Table Name | Displays the name of a non-shared table. You can select another name.
Number | Displays the instance number of the non-shared table. You can change this number.
In Use | Indicates whether the non-shared table is being used in a process. Click Reset to remove the check mark and change the selection to N. Note. See the previous warning before you select Reset.
Process Instance | Instance number of the batch process that has reserved this non-shared table.
Note. If you are using your own customized base working table, make sure that the table name does not exceed 12 characters. The total character length of a table name cannot exceed 15 characters, so you must allow three characters for the non-shared table instance number.
The following table identifies some of the General Ledger COBOL processes that use the General Ledger Non-Shared tables. The menu paths identify where the base working tables are assigned to the processes that support the use of non-shared tables, as well as the temp tables whose base table names are fixed.
In this table, if the temporary table is specified in the ledger template or some definition pages, the record name is listed in the Dynamic Working Tables column below; otherwise the record name is listed under the Fixed Working Tables column:
Process | Menu Path | Dynamic Working Tables | Fixed Working Tables
Closing | General Ledger, Ledgers, Templates, Record Definitions | Closing Tmp, Closing RE Tmp, Closing Account Tmp, Closing Journal Header Tmp, Closing Journal Line Tmp, Closing Journal Line Tmp2 | (none)
Ledger Load | General Ledger, Ledgers, Templates, Record Definitions | Led Load Temp Record, Led Load Temp Record 2 | (none)
Multicurrency | General Ledger, Ledgers, Templates, Record Definitions | MultiCurrency Tmp, MultiCurrency Tmp1 | TREE_SELnn_R
Post Journals | General Ledger, Ledgers, Templates, Record Definitions | Ledger Tmp, Ledger Tmp2, Journal Line Tmp | GL_OI_TMP, JRNL_HDR_SEL, JRNL_HDR_TMP, JRNL_VAT_TMP, JRNL_XRF_TMP
Open Item Reconciliation | General Ledger, Ledgers, Templates, Record Definitions | Journal Line Tmp | GL_OI_TMP
Close Budget | N/A | N/A | CFV_SEL, KK_RSCFV_SEL, TREE_SELnn_R
This section discusses how to:
Identify appropriate indexes.
Select indexes.
Make the rules practical.
Use indexes in the demo system.
Optimize indexes.
Depending on the database platform and the volume and distribution of data, the correct index can speed processing greatly. As the volume of data in your ledger and journal tables grows, periodically review the plan and indexes to ensure that they remain efficient.
The indexes delivered with the demonstration database were created for small amounts of data and are specific to the delivered sample ChartFields—these might be inappropriate for your situation. Develop indexes specific to your own data and configuration.
Indexes are sets of information used to access data in a record. They are stored separately from the records but updated simultaneously when records are updated, inserted, or deleted. Each of your records should have one unique index. This provides a valuable edit function by preventing duplicate rows of data from being added to the database. Add more indexes to improve performance when locating and accessing the data.
When you create a record in PeopleSoft Application Designer, the system automatically creates a unique index with the same name as the record. So, the journal line record (PS_JRNL_LN) is created with a unique index, usually also named PS_JRNL_LN. This unique index includes all the keys that were identified on the record. The combination of these keys should identify one unique row of information.
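For example, the resulting DDL is equivalent to the following sketch; the column list is the demo PS_JRNL_LN key set shown later in this chapter:

CREATE UNIQUE INDEX PS_JRNL_LN ON PS_JRNL_LN
(BUSINESS_UNIT, JOURNAL_ID, JOURNAL_DATE, UNPOST_SEQ, JOURNAL_LINE, LEDGER);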
PeopleSoft Application Designer sometimes creates additional indexes when you add a record. These are generated from fields identified as alternate search keys on the record itself and are not included in the unique index mentioned above. For example, the GL_ACCOUNT_TBL has alternate search keys designated on the Account Type and Description fields. When the table is created in Application Designer, two additional indexes (PS0GL_ACCOUNT_TBL and PS1GL_ACCOUNT_TBL) are created, one for each of these fields. These additional indexes are termed duplicate indexes because they may not point to unique rows of data.
General Ledger includes system tables that enable you to view the created indexes and their columns.
Here we discuss the system analysis you need to do before you implement special indexes.
Now that you know the indexes that General Ledger creates, determine whether the delivered indexes are suitable or you need additional indexes. Changes to the ChartFields, changes in configuration, and differences in data content all affect the indexes and their effectiveness.
First, determine the indexes currently used by your system. Do this by capturing the SQL statements executed by the system, then running them in isolation to determine the database access path for retrieving the data. For either real-time on-line access or batch processes, you can identify the SQL statements that access the ledger and journal line tables and whose performance might be a concern. Refer to PeopleTools PeopleBooks for information about turning on the SQL trace for on-line and batch processes.
See Enterprise PeopleTools PeopleBook: PeopleSoft Process Scheduler
See Enterprise PeopleTools PeopleBook: PeopleSoft Server Tools
Next, determine the efficiency of your current indexes; you need to establish a method for measuring progress as you make changes. A baseline timing is generally used for comparison when trying out different indexes. Time either the individual SQL statements or the entire process, so long as you have some way of determining progress as you proceed with the tuning.
You have a list of processes that access the primary records. You now need to determine which indexes each process currently uses. In other words, you need to determine the database access path that the system takes when the statement is actually executed. Because the database access path might change according to differing volumes of data, it is important to execute the plan on approximately the same amount of data that the table contains in a production environment. It might be appropriate to take a copy of your production database specifically for the purpose of tuning the indexes. Generally, when obtaining plan information, you are not actually executing the statements; check your database administrator documentation to be sure this is the case before executing any statements in your production environment.
Each platform has a process for determining the database access path that the engine uses to access data for the SQL statement. Below we present a brief outline of the DB2 approach.
Note. Refer to your database administration documentation for your platform and consult with your database administrator.
If your system is on DB2, create a PLAN_TABLE if your database does not already have one. A sample CREATE statement is in your DB2 Performance Tuning documentation.
Include the SQL statement in the following and execute it:
DELETE FROM PLAN_TABLE WHERE QUERYNO=nnn;
EXPLAIN PLAN SET QUERYNO=nnn FOR statement;
In this statement, nnn is a number you assign to this statement.
Retrieve the plan from PLAN_TABLE with the following SELECT:
SELECT QBLOCKNO, PLANNO, TNAME, ACCESSNAME, METHOD, ACCESSTYPE, MATCHCOLS, INDEXONLY, PREFETCH, SORTC_GROUPBY
FROM PLAN_TABLE
WHERE QUERYNO=nnn
ORDER BY QBLOCKNO, PLANNO;
The table contains other plan information, but these are the most pertinent columns for your purposes.
Note. If your system is running Oracle, consult with your database administrator to create a plan_table if your database does not already have one.
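On Oracle, the equivalent steps look like this sketch, assuming your database administrator has created PLAN_TABLE (for example, from the delivered utlxplan.sql script); the statement ID and the explained statement are example values:

DELETE FROM PLAN_TABLE WHERE STATEMENT_ID = 'GL_TUNE_1';
EXPLAIN PLAN SET STATEMENT_ID = 'GL_TUNE_1' FOR
SELECT SUM(POSTED_TOTAL_AMT) FROM PS_LEDGER
WHERE BUSINESS_UNIT = 'US001' AND LEDGER = 'ACTUALS';

-- retrieve the access path as an indented tree
SELECT LPAD(' ', 2 * LEVEL) || OPERATION || ' ' || OPTIONS || ' ' || OBJECT_NAME
FROM PLAN_TABLE
WHERE STATEMENT_ID = 'GL_TUNE_1'
START WITH ID = 0
CONNECT BY PRIOR ID = PARENT_ID;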
Before determining whether the index is appropriate, you need to know how the database engine selects indexes. In general, consider these basic rules:
The columns in the “where” clause are used when deciding on an index.
On most platforms, the database engine takes equality statements, “like” statements, and less-than and greater-than statements into consideration. For example, in the statement “where business_unit = 'NEWGN' and accounting_period >= 1 and accounting_period <= 12,” the database engine uses both the business_unit and accounting_period columns when it accesses the data. With a “like” statement, if a specific value is passed, the system uses it to select an index; however, if the field contains a wildcard ('%'), the system ranks the column lower in priority when it determines an index.
Each platform has specific index limitations.
For example, SQL Server and Oracle platform indexes do not perform well with “NOT = ” or “!=” statements. In DB2, any column after a range (>, <) is not used when the system selects an index. Consult your platform system administration documentation for the specific index limitations on your system.
The system looks at cardinality, which refers to the number of unique values in a column.
For example, if you only have one business unit in your organization, the business_unit column in the ledger record only has one value in it—very low cardinality. In the demo database, the account column is always entered and has many unique values, so the cardinality is fairly high.
To determine cardinality of a particular ChartField, issue a SQL statement that selects the count of distinct values in that column from the table in question; the value returned is the number of unique entries (see the example following this list). In general, high-cardinality fields should be included in the index.
The columns that are used to join records should generally be included in an index.
These are the fields in a “where” statement used to join one record to another. These columns tend toward low cardinality, and the optimizers do not rate equality to another column nearly as high as equality to a bind variable. For these reasons, columns used to join tables are usually in the unique index but generally are not included in all other indexes.
The system only uses an index up to the point that a column in the index is not included in the “where” clause.
For example, if the Journal Line record has an index that includes business unit, journal ID, and journal date, but the “where” clause includes only business unit and journal date, the index is only effective for the business unit. The journal date provided is ignored because the journal ID information is not included in the “where” clause. For this reason, the sequence of the fields in the index is very important.
The system uses the size of the record and the selectiveness of the index to determine whether the index or full-table scan is more efficient.
This is sometimes referred to as the filter factor. The effective filter factor for an index is the combined cardinalities for the index columns actually used in a particular access.
For example, if an index is built over FISCAL_YEAR, LEDGER and ACCOUNT, and the table contains four years, five ledgers, and 800 accounts, the potential filter factor is 1/(4*5*800), or 1/16000, or 0.0000625. (In a real-world data distribution, the filter factor would not be this good, but it would still be quite good unless the data is very skewed.) However, if the ACCOUNT field in the index could not be used because of the nature of the criteria for it, the filter factor would be only 1/20, which is not very selective. In general, an index should point to around 10% - 15% of a record in order to be efficient.
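As a concrete check of the cardinality and filter-factor rules, compare a column's distinct-value count with the table's total row count. This is a sketch; the ledger name and column are examples from the demo database:

SELECT COUNT(DISTINCT ACCOUNT), COUNT(*)
FROM PS_LEDGER
WHERE LEDGER = 'ACTUALS';

A distinct count that is a large fraction of the row count indicates high cardinality and a good leading index column; a handful of distinct values over millions of rows indicates a poor one.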
To put these rules to practical use, identify the SQL statements that perform badly and examine each “where” clause in those statements. From each SQL statement, extract the columns in the “where” clause that you believe the system will accept for index selection. In other words, count all the columns that use an equal value, but none of the columns using “between” logic, and so on. To view this large amount of information logically, it is best to make a chart with the columns in the “where” clause across the top and the origin of the SQL statement down the left side. As you go through each SQL statement, place an X in the box under each column that the index is likely to use. Create a chart for each record that concerns you.
Viewing a Sample "Where" Clause Chart
The following is a sample "where" clause chart for the ledger record with SQL statements generated from the demo database:
 | BU | LED | ACCT | DEPT | STAT | FY | PERIOD | CURR
ONLINE
Budget Detail | X | X | X | X | X | X | X |
 | X | X | X | X | X | X | X | X
Budget Copy | X | X | X | | | | |
 | X | X | X | X | | | |
Budget Spread | X | X | X | X | X | X | X |
 | X | X | X | X | X | X | X | X
Ledger Inquiry | X | X | X | X | X | X | |
 | X | X | X | X | X | X | X |
 | X | X | X | X | X | X | X |
 | X | X | X | X | | | |
InterUnit Inquiry | X | X | X | X | X | | |
TimeSpan Inquiry | X | X | X | X | X | X | |
Journal Entry | X | X | X | X | X | X | X | X
REPORTS
GLS7004 | X | X | X | X | X | | |
GLS7005 | X | X | X | X | X | | |
GLS1003 | X | X | X | X | X | X | |
GLS7002 | X | X | X | X | | | |
GLS7003 | X | X | X | X | X | | |
BATCH
Posting | X | X | X | X | X | X | X | X
Closing | X | X | X | X | | | |
 | X | X | X | X | | | |
 | X | X | X | X | X | | |
 | X | X | X | X | X | | |
 | X | X | X | X | | | |
Summary Ledger | X | X | X | X | | | |
 | X | X | X | X | | | |
Consolidate | X | X | X | X | X | | |
 | X | X | X | X | X | X | |
Currency | X | X | X | X | X | X | X | X
Armed with these charts and the rules of indexing, now work to create indexes that access the records more quickly. Looking across the columns used in ledger “where” clauses, assess the viability of each column.
Business unit is included in every “where” clause, but in the demo database there are only 79 business units. One of them, US001, is used much more frequently than the others, so the cardinality is relatively low. Because it is always used, you will probably include it in indexes.
The ledger column is also included in each clause, but the cardinality is low (three values are used in the LEDGER table, and one is used the majority of the time).
Account is used in a good percentage of the “where” clauses and is required in most of the online inquiry transactions. The cardinality is also high (735 unique values of account in the ledger table in the demo database), so this is a good possibility in an index.
Other ChartFields, including DEPTID, PRODUCT, and PROJECT, are lumped together because the demo database does not require them and accepts a wildcard in their place on the inquiry pages. This wildcard generates a “like” SQL statement that works well if you supply the field with a value; it is less efficient if the field is left as a wildcard ('%'). If you have ChartFields that you always enter, you should include these in the index in the same way the account field is included. You might also want to consider making any “always enter” fields required on the inquiry pages to make the select statements more efficient.
Fiscal year is included on nearly every “where” clause. At present the cardinality is relatively low (3 - 4 different values); however, expect it to increase as time goes by. Accounting period is used on a good number of “where” clauses, again with limited cardinality.
Currency code is included in many of the “where” clauses. There are many values in the currency code record, but in practice the vast majority of transactions in the ledger record have a currency code of USD, so the cardinality of this field is also relatively low. Therefore, this column might not be included in most indexes.
The following hints can help you create better indexes:
Strive for the minimum number of indexes to accomplish the processes.
Each index has to be updated every time an update, insert, or delete is performed on the underlying table; so each index has an overhead cost associated with it.
In considering the right number of indexes for a table, be sure to consider the use of the table. Fairly static tables (like Chartfield tables) can have numerous indexes with relatively little negative impact because they are frequently accessed and rarely updated. Other tables, however, are updated continually and each additional index could make quite a difference in the amount of time it takes to perform these functions.
Extra indexes generated from alternate search keys on fairly static tables (like ChartField tables) are not a problem.
However, if alternate search keys are designated on records that are never used as edit (prompt) tables, and the generated indexes do not assist any processing, you have created additional overhead during record updates and inserts without any benefit. The bottom line is that you should carefully consider which fields you designate as alternate search keys.
Because the vast majority of “where” clauses that access the PS_LEDGER table begin with equality checks on business unit and ledger, these common fields are included at the beginning of most of the Oracle indexes.
Sometimes it is beneficial to put a column in the index that would not usually be included in a “where” clause but is usually retrieved from the table when the table is accessed.
An example of this is the account type on the GL_ACCOUNT_TBL.
This column is generally accessed when the table is queried, and adding this column to the index might prevent table access when only the account type is needed. The alternate search key indexes actually do this for us in most cases, because these indexes generally contain descriptions, and this information is frequently accessed when a code table is accessed. This approach is only useful if it prevents table access in some instances and does not interfere with the normal operation of the index in other situations.
For this reason, these columns are generally at the end of the indexes.
Some customers have experienced an improvement in background processing against the ledger record when the posted total amount field is added to the end of the duplicate indexes, because it results in an index-only scan. During testing on the demo database, there was some negative impact on the online performance, so this field was not added to the delivered indexes. But it might be worth testing in your production environment.
The system is particular about which indexes it chooses.
Sometimes the most well thought-out index does not get used as expected or does not yield the expected results. Test the new index, taking a look at the plan to be sure it is used, then take another timing to compare the new index access with the original baseline timing. Based on the results, you might need to adjust the sequence of the columns in the index or the columns included in the index to find the optimal combination.
Once you find the best combination for the SQL statements under review, run through all the processes again. Sometimes one new index can cause changes in the indexes used by other processes. Often the change is good, but sometimes it is worse, and more evaluation is required.
PeopleSoft software provides the sample database with the indexes listed below. The columns in the indexes have been noted, followed by a brief explanation of why each index is included and how it affects performance. These indexes are included in PeopleSoft Application Designer and are created when indexes are created for the record.
Note. It might be beneficial to drop these indexes before performing a large load or update. Loading data into a table means that each of the indexes on the table must be updated, which can amount to considerable overhead when many rows are inserted. Consider dropping the indexes, loading the data or performing the background process, and then running the script again to re-create the indexes on the tables.
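The pattern is sketched below for Oracle, using the demo PSDLEDGER index described later in this chapter; your own scripts, storage clauses, and statistics method will differ:

DROP INDEX PSDLEDGER;
-- perform the large load or background process here
CREATE INDEX PSDLEDGER ON PS_LEDGER
(BUSINESS_UNIT, LEDGER, FISCAL_YEAR, ACCOUNTING_PERIOD,
CURRENCY_CD, STATISTICS_CODE, ACCOUNT, AFFILIATE);
-- refresh optimizer statistics so the rebuilt index is chosen
ANALYZE TABLE PS_LEDGER COMPUTE STATISTICS;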
PS_LEDGER (All Platforms)
PS_LEDGER | PSCLEDGER | PSFLEDGER
BUSINESS_UNIT | FISCAL_YEAR | ACCOUNT
LEDGER | LEDGER | ACCOUNTING_PERIOD
ACCOUNT | BUSINESS_UNIT | PRODUCT
ALTACCT | ACCOUNT | FISCAL_YEAR
DEPTID | |
OPERATING_UNIT | |
PRODUCT | |
FUND_CODE | |
CLASS_FLD | |
PROGRAM_CODE | |
BUDGET_REF | |
AFFILIATE | |
AFFILIATE_INTRA1 | |
AFFILIATE_INTRA2 | |
PROJECT_ID | |
BOOK_CODE | |
GL_ADJUST_TYPE | |
CURRENCY_CD | |
STATISTICS_CODE | |
FISCAL_YEAR | |
ACCOUNTING_PERIOD | |
PS_LEDGER
This is the original, unique index from PeopleSoft Application Designer, which was left in place as the unique index. Change this index to reflect your own ChartField configuration. Because it is generated from Application Designer, making the record changes to the LEDGER record should produce the correct index for you.
Some platforms (namely Microsoft SQL Server, DB2/UNIX, and Informix) have a 16-column limit for indexes, and this index already has 21 columns, so the index is not supported on those platforms. PeopleSoft software resolves this by creating a 'functional index' behind the scenes, based on a single field that is the concatenation of all the key fields.
While no PeopleSoft processes rely on the presence of the unique index on a table, your database manager should carefully consider any decision not to have one. A unique index is a data safeguard enforced by your database engine. It guards against duplicate data resulting from a process that does not work correctly or from invalid data in an SQL script.
Note. If you are a DB2 customer and you want to partition the dataset based on fiscal year, you might want to delete this index and replace it with a unique index that has Fiscal Year as the leading field.
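As a sketch only (the column list reuses the demo key fields from the preceding table; the actual partitioning definition depends on your DB2 setup), the replacement unique index would move FISCAL_YEAR to the front:

DROP INDEX PS_LEDGER;
CREATE UNIQUE INDEX PS_LEDGER ON PS_LEDGER
(FISCAL_YEAR, BUSINESS_UNIT, LEDGER, ACCOUNT, ALTACCT, DEPTID,
OPERATING_UNIT, PRODUCT, FUND_CODE, CLASS_FLD, PROGRAM_CODE,
BUDGET_REF, AFFILIATE, AFFILIATE_INTRA1, AFFILIATE_INTRA2,
PROJECT_ID, BOOK_CODE, GL_ADJUST_TYPE, CURRENCY_CD,
STATISTICS_CODE, ACCOUNTING_PERIOD);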
See Enterprise PeopleTools PeopleBooks: PeopleSoft Application Designer, “Understanding Functional Indexes and Planning Records, Control Tables, and TableSets”
PSCLEDGER (All Platforms)
Experienced General Ledger customers recognize this index. This is the most efficient index for PS/nVision reporting and also helps out the Closing and Summary ledger COBOL processes. Note that it is similar to the PSBLEDGER index except that it starts with Fiscal Year (so those processes that do have an equality for accounting period choose the “B” version over this one). The index also includes the Account field, thereby adding efficiency for any “where” clause selecting specific account values. This is an index you probably have to modify to reflect your own ChartField configuration. The last fields of this index should include the ChartFields with the highest cardinality that are usually entered and used in “where” clauses. Avoid adding all your ChartFields because that would create a great deal of overhead when any of the ChartFields are added or changed in the ledger. It is usually best to include the minimum number of fields to do the job in an index.
Note. If you are doing PS/nVision reporting on your summary ledger, be sure to create this index for the summary ledger record. See the notes under the PSBLEDGER index for additional thoughts on this topic.
PS_LEDGER (All Platforms Except Oracle)
PSALEDGER | PSBLEDGER
ACCOUNT | ACCOUNTING_PERIOD
FISCAL_YEAR | FISCAL_YEAR
BUSINESS_UNIT | LEDGER
LEDGER | BUSINESS_UNIT
PSALEDGER
The leading field on this index is the Account field. This index helps speed performance on all processes that access the ledger using a specific account selection. This includes the Trial Balance and General Ledger Activity SQRs, as well as the COBOL processes of Closing and Consolidations. Online, this index helps in the budget entry process and the ledger inquiry pages. You need to modify this index for your own ChartField configuration. The leading fields on the index should be the ChartFields that are always entered and have the greatest cardinality. Because the Fiscal Year, Business Unit, and Ledger fields are consistently requested with the Account field, they are also on the index and should be appropriate on your version of the index.
PSBLEDGER
This index begins with the Accounting Period field and is called into use when the accounting period and fiscal year are specified without specific ChartField references. This enhances performance in the Closing Trial Balance SQR, Closing and Summary Ledger processes, and the online budget copy process. You should be able to use this index as is without modifications.
Note. The summary ledgers delivered with the demo database (PS_S_LEDGER_ACCTS, PS_S_LEDGER_ACTDEP, and PS_S_LEDGER_SUM) realized performance gains in the summary ledger background process when this index was created for them. The “B” type of index might be beneficial for your own summary ledgers. Because this index is similar to the PSCLEDGER index, you might want to do some timings and analysis before deciding if the additional indexes on your own summary ledgers are worth the additional overhead during inserts and updates.
PS_LEDGER (Oracle Only)
PSDLEDGER | PSELEDGER
BUSINESS_UNIT | FISCAL_YEAR
LEDGER | ACCOUNTING_PERIOD
FISCAL_YEAR | BUSINESS_UNIT
ACCOUNTING_PERIOD | LEDGER
CURRENCY_CD | ACCOUNT
STATISTICS_CODE | PRODUCT
ACCOUNT | PROJECT_ID
AFFILIATE | CURRENCY_CD
PSDLEDGER (Oracle Only)
This index is used in the same way the PSALEDGER index is used on the other platforms—to optimize performance in online processes, SQRs, and COBOL processes when the entire ledger key is not specified. Specifically, the Trial Balance and General Ledger Activity SQRs, the Closing and Consolidations COBOL processes, and the budget and ledger inquiry online pages use it. The index leads off with the common fields of Business Unit and Ledger and includes more of the “where” clause columns than its PSALEDGER counterpart. As in the PSALEDGER version, when building this index on your production system you should change the Account field in the demo database to be the ChartFields you always enter that have the highest cardinality.
PSELEDGER (Oracle Only)
The closing process Closing Trial Balance SQR and the closing COBOL process primarily use this index. The summary ledger COBOL process also favors it. If you do not run either of those processes (or run them infrequently), you might not need this index. To modify the index, replace the demo ChartFields with your own. The leading ChartFields should be those you always enter that have the highest cardinality.
PSWLEDGER (SQL Server; DB2/UDB for Linux, UNIX, and Windows; Informix)
The PSW<record name> index is used to build the search index for database platforms that have the 16-column limit on indexes. This index is not a unique index; instead, the concatenated functional index plays the role of the unique index.
This index is created with the first 16 key fields of the functional index. The recommendation is to customize the index to your needs as follows: the first five columns of the PSW<record name> index should be BUSINESS_UNIT, LEDGER, FISCAL_YEAR, ACCOUNTING_PERIOD, and ACCOUNT. The subsequent 11 columns should consist of the other ChartFields that are always entered and have the greatest cardinality. This index is crucial to the performance of the Post Journals and Post Budget Journals processes.
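Following that recommendation, a customized PSWLEDGER might look like this sketch; it assumes the demo ChartFields from the PS_LEDGER key list above, and you should substitute your own always-entered, highest-cardinality ChartFields for the trailing 11 columns:

CREATE INDEX PSWLEDGER ON PS_LEDGER
(BUSINESS_UNIT, LEDGER, FISCAL_YEAR, ACCOUNTING_PERIOD, ACCOUNT,
ALTACCT, DEPTID, OPERATING_UNIT, PRODUCT, FUND_CODE, CLASS_FLD,
PROGRAM_CODE, BUDGET_REF, AFFILIATE, AFFILIATE_INTRA1,
AFFILIATE_INTRA2);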
PS_JRNL_HEADER (All Platforms)
The same analysis processes were applied to the JRNL_LN and JRNL_HEADER records, and the following indexes are delivered with your demo database as a result of this study:
PS_JRNL_HEADER | PSCJRNL_HEADER | PSDJRNL_HEADER
BUSINESS_UNIT | PROCESS_INSTANCE | JOURNAL_ID
JOURNAL_ID | JRNL_HDR_STATUS | JOURNAL_DATE
JOURNAL_DATE | JRNL_PROCESS_REQST | BUSINESS_UNIT_IU
UNPOST_SEQ | | UNPOST_SEQ
PS_JRNL_HEADER (All Platforms — Continued)
PSGJRNL_HEADER | PSHJRNL_HEADER | PSIJRNL_HEADER
PROCESS_INSTANCE | PROC_PART_ID | BUSINESS_UNIT
SYSTEM_SOURCE | BUSINESS_UNIT | PROCESS_INSTANCE
LEDGER_GROUP | LEDGER_GROUP |
JRNL_PROCESS_REQST | JOURNAL_ID |
JOURNAL_ID | JOURNAL_DATE |
UNPOST_SEQ | |
PS_JRNL_HEADER
This is the unique index created by PeopleSoft Application Designer. It is used each time the journal is referenced by the key values. This includes the OpenItem and SJE Status SQRs, posting and journal edit processes, and online journal inquiry and entry processes. You should not need to change this index for any ChartField configuration.
PSCJRNL_HEADER
Because the leading field on this index is PROCESS_INSTANCE, the index helped speed processing in the journal posting and journal edit jobs where the statements select from the journal header based on PROCESS_INSTANCE. If you run those jobs, you want this index. You should not need to change it for any ChartField configuration.
PSDJRNL_HEADER
This index is used by the online system to obtain the journal headers and journal lines for InterUnit subjournals.
PS_JRNL_HEADER (All Platforms Except Oracle)
PSAJRNL_HEADER | PSBJRNL_HEADER
ACCOUNTING_PERIOD | JRNL_PROCESS_REQST
SOURCE | BUSINESS_UNIT
FISCAL_YEAR | LEDGER_GROUP
BUSINESS_UNIT |
LEDGER_GROUP |
PSAJRNL_HEADER
This index is selected when the accounting period, source, and fiscal year are specified in the “where” clause. The sequence of the columns was chosen by cardinality for the demo database (source has seven unique values and accounting period has 12). You need to analyze your own system to determine which column should come first—source or accounting period. Choose the column with the greatest cardinality (unique values). This index helps speed processing in the Trial Balance and General Ledger Activity SQRs and in the online inquiry and journal unpost functions.
PSBJRNL_HEADER
This index helped the Posting process by indexing on the JRNL_PROCESS_REQST field. You should not need to change this index for ChartField configuration.
PS_JRNL_HEADER (Oracle Only)
PSEJRNL_HEADER | PSFJRNL_HEADER
BUSINESS_UNIT | JOURNAL_ID
LEDGER_GROUP | SOURCE
JRNL_HDR_STATUS | JRNL_HDR_STATUS
FISCAL_YEAR | BUSINESS_UNIT
ACCOUNTING_PERIOD | LEDGER_GROUP
PSEJRNL_HEADER (Oracle Only)
This index assists the processing of the SJE Status SQR and the Journal Post and Allocations processes. No modifications should be necessary for ChartField changes.
PSFJRNL_HEADER (Oracle Only)
The only process that uses this index is the Allocations process. If you do not run that job, you should not need the index.
PS_JRNL_LN (All Platforms)
PS_JRNL_LN | PSDJRNL_LN | PSFJRNL_LN
BUSINESS_UNIT | PROCESS_INSTANCE | JOURNAL_DATE
JOURNAL_ID | BUSINESS_UNIT | BUSINESS_UNIT
JOURNAL_DATE | ACCOUNT | UNPOST_SEQ
UNPOST_SEQ | JOURNAL_ID |
JOURNAL_LINE | JRNL_LN_SOURCE |
LEDGER | |
PS_JRNL_LN
This is the unique index created by PeopleSoft Application Designer. It matches the keys on the parent record (PS_JRNL_HEADER) with the addition of the Journal Line field. This is used in processing when you update the journal line in the Posting and Journal Edit programs, and online Posting and Journal Update processes.
PSDJRNL_LN
This index assists the batch processes in accessing the journal line information. It is used in the Journal Edit and Journal Combo Edit processes, and should not require modification for ChartField changes. If the index is not chosen by the optimizer, update statistics for the table with histogram information.
For Oracle, run the following command:
ANALYZE TABLE PS_JRNL_LN COMPUTE STATISTICS FOR COLUMNS PROCESS_INSTANCE;
For DB2/UNIX, run the following command:
RUNSTATS ON TABLE <owner>.PS_JRNL_LN WITH DISTRIBUTION AND INDEXES ALL;
PSFJRNL_LN
This index is used by the GLAJES SQR report, Posting, Consolidations, Currency Translation, and Allocations processes, as well as the online inquiry and posting pages.
PS_JRNL_LN (All Platforms Except Oracle)
PSAJRNL_LN | PSBJRNL_LN
ACCOUNT | JOURNAL_ID
BUSINESS_UNIT | JOURNAL_DATE
CURRENCY_CD | BUSINESS_UNIT
 | UNPOST_SEQ
PSAJRNL_LN
This index facilitates those queries that look for a specific match on the Account ChartField. It includes the additional fields of Business Unit and Currency Code, because those fields are consistently included in the "where" clauses of these statements. It enhances performance in the GLAOITEM SQR, Journal Posting COBOL process, and the online Journal Inquiry and Unposting processes. If changes are made to the ChartFields, you need to modify this. You should include the ChartField that is always entered and has the greatest cardinality in this index.
PSBJRNL_LN
The items in this index are similar to the unique index except that the sequence is different and the JOURNAL_LINE field is left off. The JOURNAL_LINE field was eliminated because it is almost never referenced in “where” clauses. The sequence was changed because JOURNAL_ID is a high-cardinality field and is frequently referenced in select statements. In the select statements that specify JOURNAL_ID, the BUSINESS_UNIT, JOURNAL_DATE, and UNPOST_SEQ fields are also referenced; so these were included on this index in order of cardinality.
This index is frequently used: in the GLALEDGD, GLALEDGS and GLAJES SQRs as well as the Posting, Consolidations, Currency Translation, and Journal Edit background processes. Online, it is used on the inquiry and posting pages. This index should not require changes for ChartField configuration.
Note. Consider adding JOURNAL_LINE and LEDGER to the end of the PSBJRNL_LN index and making it the unique index (eliminating the existing PS_JRNL_LN index). This is an option because the order of the index columns can differ from the field sequence in PeopleSoft Application Designer; a sketch follows.
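If you adopt that suggestion, the result would look like this sketch (the column order comes from the PSBJRNL_LN table above, with the two remaining key fields appended):

DROP INDEX PSBJRNL_LN;
DROP INDEX PS_JRNL_LN;
CREATE UNIQUE INDEX PSBJRNL_LN ON PS_JRNL_LN
(JOURNAL_ID, JOURNAL_DATE, BUSINESS_UNIT, UNPOST_SEQ, JOURNAL_LINE, LEDGER);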
PSEJRNL_LN
BUSINESS_UNIT
CURRENCY_CD
ACCOUNT
PSEJRNL_LN (Oracle Only)
This index is comparable to the PSAJRNL_LN index, except that the sequence is different. Consistent with the cost-based optimizer approach, the more common fields (Business Unit and Currency Code) are included at the beginning of the index. The SQR processes of Trial Balance, General Ledger Activity, and OpenItem Status use this index.
This section discusses how to optimize indexes.
Once the indexes are created, you must tell the system to use these new indexes by “updating statistics.” The specifics vary by platform (described in the following chart). After new indexes are created, or after inserting or deleting substantial rows of data from existing tables, you should run the update statistics procedure. The purpose of this procedure is to update the system tables with index and table information to assist the optimizer in choosing the most efficient index for an operation. If this procedure is not performed frequently, your system performance could suffer.
Platform | Method to Update Statistics
DB2 | Batch RUNSTATS process, by table space.
ORACLE | ANALYZE TABLE xxx COMPUTE STATISTICS; ANALYZE INDEX xxx COMPUTE STATISTICS; For large tables, it is faster to replace COMPUTE with ESTIMATE. Determine through testing whether estimated statistics yield optimum access plans.
Indexes are delivered for several temporary tables. These include tables such as LEDGER_TMP, which hold data only for the duration of a specific process. Because the tables are generally empty, running update statistics on the empty version of the table causes the optimizer to think that the table is always empty and to favor a full-table scan instead of using the index. For this reason, a script is available to seed these temporary tables with 100 rows of data for the purpose of updating the statistics on the index.
The script is delivered in your SQL subdirectory and is called SEEDGL, with the SQL extension appropriate to your database platform. The script inserts 101 rows of data into the temporary tables and runs the update statistics command on each seeded table. The tables that are seeded are PSTREESELECT05, PSTREESELECT06, PSTREESELECT08, and PSTREESELECT10, corresponding to the lengths of the ChartFields delivered with the demo system.
If rows currently exist in your PSTREESELECTxx tables, do not delete this data. The system populates these rows when you execute a PS/nVision report. These rows correspond to a control table named PSTREESELCTL; removing them results in incorrect data or no data in your PS/nVision report the next time you execute it.
Review each script before running to ensure that the key values loaded do not conflict with any that would be used by the existing system, and to determine if changes are needed for any specific modifications you might have done. When running the scripts in your production environment, be sure that you seed the PSTREESELECT tables that correspond to the field length of your ChartFields; these are the PSTREESELECT tables that are used in your environment.
Note. If you are a DB2 customer, you can maximize the benefits of seeding these tables by seeding them with the cardinality that is correct for your particular environment. A script named SEEDMULT.DMS is an example of seeding the PSTREESELECT06 table with correct cardinality. Use this version, rather than the above versions, when seeding the PSTREESELECT tables.
Table seeding is also needed for temporary tables that are reserved for Application Engine processes called from PeopleCode. The Application Engine processes include logic to update statistics on these temporary tables after each insert. However, each %UpdateStats requires an explicit commit, and because a commit is not allowed within an Application Engine program called from PeopleCode, the %UpdateStats is not executed. To ensure that SQL statements referring to those temporary tables run well, you can try two things:
1. Leave the temporary tables with no statistics. That means never updating statistics on these tables. If you updated statistics on these tables when they were empty, you need to drop and re-create them. For Oracle, you can run the ANALYZE TABLE command with the option to delete the statistics. Then test the process to see whether the performance is acceptable for you.
2. If having no statistics on the temporary table does not yield the performance required (as in the case of table PS_COMB_EXP_TAOx in the Journal Combination Edit process), seed the table: run an Application Engine trace at level 131, extract all of the “insert into PS_COMB_EXP_TAOx” SQL statements, and run them outside the program to seed the table. Then run the update statistics command on the seeded table.
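For the first option on Oracle, statistics can be removed with a command like this sketch (the table name uses instance 1 of the TAO table cited above as an example):

ANALYZE TABLE PS_COMB_EXP_TAO1 DELETE STATISTICS;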
Physical Performance Considerations
Even the best index planning and execution cannot overcome performance problems caused by disk and index fragmentation. Fragmentation happens over time as records are deleted, updated, and added to the database. When new records are inserted, the system places the new information wherever space is available, not necessarily in the same location as the rest of the physical table. As records are deleted, gaps might be left in the physical space that the record occupied, and these gaps may or may not be filled with new information. As the physical locations of records become more spread out, the system must work harder to find a specific record, and response time suffers. Both indexes and tables can become fragmented enough to hamper performance, so it is important to take the steps outlined in your database administration documentation to eliminate database fragmentation.
When you have a large volume of transactions to process in a limited run-time, use partition IDs to enable the journal posting processes (GLPPPOST) to run in parallel. Partition IDs enable you to group your transaction data into mutually exclusive sets, ending contention between processes for the same row of data during posting. This eliminates possible delays due to halted processes and database rollbacks.
It is practical to implement partition IDs only after your system has been in production for several months. Partitioning also requires reevaluation at regular intervals to ensure that your system stays within the limits of your batch run-time window.
It is crucial that you understand your data structure before attempting to use partition IDs. Analyze the volume of your transactions by business unit and ChartField, and divide them into transaction groups with roughly equal numbers of transactions. Using your analysis, create a business unit/ChartField combination scheme to classify the transactions into mutually exclusive data sets. Then create a partition ID for each business unit/ChartField combination.
You might want to monitor the duration of batch processing to ensure your partition IDs remain effective over time. This can provide advance warning of changes in your system’s batch processing requirements, as well as changes in your business.
To use partition IDs, first, define them using the Partition ChartField Entry Process Partition page. Then, specify your partition IDs in the request pages for the Journal Edit (GL_JEDIT) and Journal Post (GLPPPOST) processes.
The ADB Calculation process (GL_ADB_CALCX) supports these optimizing features:
Incremental calculations to compute the average daily balance.
This method leverages prior-period aggregates and ending balances to calculate the requested period average. All adjustments are automatically applied to the average balances before the requested period averages are calculated.
Non-shared tables to process temporary data, including the tree selector tables.
Archiving ledger data.
This includes the ADB ledger and the ADB target ledger, which holds the calculated averages.
The ADB Calculation process also supports system optimization by enabling customers to:
Filter transactions posted to the ADB ledger to control the volume of data.
Partition the calculated averages to different target ledgers.
The process also enables the customer to specify a calendar ID for the target ledger. For example, customers can specify a monthly calendar to store the MTD balances.
See Also
Understanding Average Balance Calculation
Summary ledgers can be updated incrementally in either of these ways:
From the Summary Ledger process request, using the run option Increment.
During posting to the detail ledger, thus keeping the summary ledger synchronized with its detail ledger.
The objects and fields needed to support this feature are the following:
The Summary Ledger Stage Tbl (table).
This table is defined on the Ledger Template page. It is a copy of posted detail ledger transactions.
The Status table (accessed from the Summary Ledger Status page).
This table contains a row for each business unit, summary ledger, fiscal year, and accounting period (summary). The Status table also includes a date/time stamp that records when the system writes the row into the status table.
The Ledger Tmp (a summary ledger temporary table).
This table is defined on the Ledger Template page. It is a copy of the summary ledger table and is used when you specify incremental updating.
The Enable Incremental Sum Ledger option on the Ledgers for a Unit Journal Post Options page.
This option indicates whether rows for business unit/detail ledger are staged.
If this option is not selected, the staging process is bypassed. When staging is bypassed, no summary ledger for this business unit/detail ledger can be incrementally updated.
The Skip Summary Ledger Update option.
This option is located on both the Journal Post Request page and the User Preferences General Ledger page. It specifies whether to update summary ledgers during a specific run of a posting process.
The Post to Summary Ledger option on the Ledger Set page.
This option enables a specific business unit/summary ledger to be incremented from posting.
The incremental summary ledger update process flow is as follows:
Summary Ledger process (initial summary ledger creation):
On the Summary Ledger process request page, if the Request Type is Create, the process creates initial summary ledger data for the given business unit/summary ledger/accounting periods.
The process inserts a row into the Status table for each business unit/summary ledger/accounting period processed.
The Journal Post process:
If the Enable Incremental Summary Ledger option is selected on the Ledger for a Unit - Posting Options page, the process inserts rows into the Staging table with data posted to the detail ledger.
If the Post to Summary Ledger option on the Ledger Set page is selected, the process incrementally updates the summary ledger from the Staging table. (See step 3 for details.)
Summary Ledger process (incremental update):
For a given business unit and detail ledger, the process determines the summary ledgers and accounting periods to process based on the ledger set, the Post to Summary Ledger option, and the Status table.
Only combinations with status entries and with the Post to Summary Ledger option selected are processed.
If the ledger has not yet been created, the process performs the creation directly from the ledger tables. Otherwise, it updates the ledger incrementally from the staging table.
For each business unit/summary ledger/accounting period processed, the process inserts a row in the Status table.
The process cleans up the Staging table by deleting rows with a DTTM_STAMP earlier than the minimum stamp of the related status rows.
Note. If the incremental update is initiated from the Summary Ledger Process Request page, the Request Type must be Increment.
See Also
Combining Accounts Using Summary Ledgers
This section discusses:
Tree joins.
Combination rules.
Capturing SQL.
PS/nVision is a complex tool, capable of producing a great variety of reports from a variety of database tables. The SQL statements it generates are not necessarily complex, but they are very sensitive to the performance of the underlying database, especially in the following areas:
Large tables (ledgers often have millions of rows) make efficient use of indexes essential.
The use of trees and reporting (security) views cause multiple tables to be joined.
The efficiency with which the database processes these JOIN statements dictates most of the performance of PS/nVision.
Unlike traditional background reporting tools, PS/nVision supports interactive, focused reporting with a probing or querying approach to accessing the database. PS/nVision queries tend to be more numerous than those of traditional report writers, but also more focused on the specific data that you want to see.
PS/nVision relates tree node criteria to data tables by joining the data table to a tree selector table. This selector table contains a row for every detail range defined for the tree in the Tree Manager, and is keyed by PROCESS_INSTANCE (a system-generated constant number for all the rows representing a particular tree) and tree node number. Because some database platforms only join tables efficiently if the field sizes match, the system uses up to 30 selector tables, one for each supported ChartField length. Each selector table has RANGE_FROM_nn and RANGE_TO_nn columns matching the corresponding ChartField size.
The following code is a typical SELECT for selection via nodes on a single tree:
SELECT L.TREE_NODE_NUM, SUM(POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT06 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=1991
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 9
AND A.ACCOUNT>=L.RANGE_FROM_06
AND A.ACCOUNT<=L.RANGE_TO_06
AND L.PROCESS_INSTANCE=198
AND (L.TREE_NODE_NUM BETWEEN 16 AND 30
OR L.TREE_NODE_NUM BETWEEN 35 AND 40)
GROUP BY TREE_NODE_NUM
The join between the ledger table and the tree selector table, together with the node-number criteria, accomplishes the tree criteria selection. The GROUP BY clause returns an answer row for each node that has a detail range attached to it; these node numbers are used to post amounts from the answer set into the appropriate rows of the report.
PS/nVision endeavors to retrieve the data for each report instance with as few SELECTs as possible. It examines all row criteria to determine which can be combined, and does the same for column criteria. It then builds a Select statement to retrieve each intersection of a combined group of rows with a combined group of columns. You should understand the following built-in rules when designing indexes:
Different ledgers cannot be combined.
Different TimeSpans cannot be combined.
nPloded rows or columns cannot be combined with non-nPloded rows or columns.
To be combined, two or more rows or columns must have criteria for the same set of ChartFields, and each ChartField's criteria must be of the same type (selected tree nodes cannot be combined with selected detail values).
If criteria for a ChartField are specified by tree node, they can only be combined if they use the same tree.
If the combined rows or columns have identical criteria for a particular ChartField, the criteria are included in the “where” clause but no “group by” on that field is required.
If different rows/columns in the group have different criteria, PS/nVision adds this field (or the corresponding tree node number) to the “group by” clause to retrieve a value for use in posting the answer set to the report.
A single Select statement can retrieve amounts for multiple combined rows and columns.
Different scope instances are retrieved with separate Select statements.
To examine the SQL produced by PS/nVision, capture the statements in one of two ways:
Use the Options Trace option on the Excel menu.
This causes PS/nVision to display each Select statement used for retrieving labels or amounts in a dialog. Select the text with the mouse, copy it to the clipboard, and paste the text into another application such as Notepad or a text editor. Then save the text to a file or work with it within the application.
Note. If you want to capture the SQL but do not want to wait for it to execute, select the Excel Options Simulated Run option. PS/nVision generates all the SQL, but will not execute SELECTs for amounts.
Select the PeopleTools SQL trace through the Utilities menu.
This causes all SQL statements executed by PeopleTools to be written to a file called ~DBG0001.TMP in the Windows TEMP directory (often C:\TEMP). This trace shows timings, but does not include SQL that was not executed due to the Simulated Run option.