SAP BASIS (Business Application Software Integration System) Welcome to my World !! This blog is for technical stuff related to SAP Basis !! If you have any SAP Basis query, or anything you would like to share, mail me at deep.kwatra@gmail.com with your name. Thank you!
SAP Video Zone - Must see
http://www.youtube.com/watch?v=-2sZhOivP2s
http://www.youtube.com/watch?v=bEwntYAcvmU
http://www.youtube.com/watch?v=SORBkfPKT8U
http://www.youtube.com/watch?v=uU_VNFIFD9I
http://www.youtube.com/watch?v=yGTRQJf5kec
http://www.youtube.com/watch?v=u93ZnKcL0AQ
Evolution Of SAP
http://www.youtube.com/watch?v=aovNXEi7f-w
SAP Modules
http://www.youtube.com/watch?v=v0knXytlA7I
SAP Basis Exam Simulator (max 8 questions)
http://www.klausutech.com/fun/sap_exam.htm
SAP Installation
http://www.youtube.com/watch?v=5GXAFjy_Q-E
SAP Training : Guide to Free SAP Study Material
http://www.youtube.com/watch?v=6GQeHnbT1k4&feature=related
How to use VLOOKUP in MS Excel
http://www.youtube.com/watch?v=2wHtcct7mCE
Interview Questions
Four types of transport requests:
1. Customizing request
2. Workbench request
3. Transport of copies
4. Relocation
What is the full name of the SAP default user DDIC?
DDIC stands for Data Dictionary.
What is the procedure to lock a client?
There is no direct transaction code to lock a client. The easiest way to lock a client is:
1. Run transaction SE37.
2. Enter the function module name SCCR_LOCK_CLIENT.
3. Enter the client number.
4. Execute the function module.
Where can the system logs of the SAP application be checked at OS level?
The system logs of the SAP application can be checked at OS level
in SAPMMC -> SAP Systems -> SID -> Syslog.
How can I check user login activity for a month?
Through SM20 (the Security Audit Log) we can check user login details.
How to transport users from one client to another?
1. Go to transaction SCC1.
2. Choose the source client from which the users are to be
transferred to the other client.
3. Choose the SAP_USER profile (user master records and authorization profiles).
4. A transport request number is generated.
5. Release the transport request and export it.
6. Import the transport request into the target system.
What is the significance of a virtual system?
Without a virtual system, all requests generated by users remain local, and local requests cannot be imported/transported to another system.
So always create a virtual system so that all development can be transported to the other systems. When you create a virtual system you can see the transport cofiles/data files at OS level.
What is the relevance of the Deletion Flag/Indicator in the archiving process?
The deletion flag is for running the delete program. The
sequence for archiving is:
1. Data declaration component
2. Customizing settings
3. Programs
1) Write
2) Delete
3) Read
After the write program is executed (where we select the write
indicator), we schedule/execute the delete program by choosing
this indicator.
In short, this indicator signifies which program has to run.
How to install multiple Central Instances on the same physical machine?
Create separate file systems and SAP mount points for each central instance.
What is supplementation language?
Use the language supplementation function to fill in the gaps in a language that has not been translated completely. Supplementation actions are client-specific. The languages are supplemented in the client in which you are logged on. If you use multiple clients, you must supplement the languages explicitly in each production client. You can also access the texts stored in cross-client database tables from all clients simultaneously. The default setting specifies that cross-client tables are supplemented when you are logged on to client 000.
What SAP tools you use to install SAP patches?
SPAM (Support Package Manager) is the SAP tool used to install SAP patches.
If the patch is smaller than 10 MB, you can load it directly in transaction SPAM. If it is larger than 10 MB, you have to uncompress the archive at OS level with SAPCAR and place the extracted files in the transport directory (/usr/sap/trans) before loading them in SPAM.
What are common transport errors?
Return code 4 indicates the import ended with warnings. Ex:
1. Generation of programs and screens.
2. Columns or rows missing.
Return code 8 indicates the import ended with errors. Ex:
1. Syntax error.
2. Program generation error.
3. Dictionary activation error.
4. Method execution error.
Return code 12 indicates the import was cancelled. Ex:
1. Import cancelled due to a missing object.
2. Objects not active.
3. Program terminated because job 'RDDEXECL' is not
working.
Return code 16 indicates the import was cancelled. Ex:
1. Import cancelled because the system went down while importing.
2. Import cancelled because the user session expired while
importing.
3. Import cancelled due to insufficient authorizations.
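The return-code scheme above is easy to encode. A minimal sketch in Python (classify_rc is a hypothetical helper for reading transport logs, not a standard SAP API):

```python
# Map transport return codes to the meanings listed above.
# classify_rc is a hypothetical helper, not part of any SAP tool.
def classify_rc(rc: int) -> str:
    if rc == 0:
        return "success"
    if rc == 4:
        return "imported with warnings"
    if rc == 8:
        return "import ended with errors"
    if rc in (12, 16):
        return "import cancelled"
    return "unknown return code"
```

In practice anything above 8 usually means the import must be investigated and repeated.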
What is the importance of the clients 000, 001 and 066?
000 is also called the master client or golden client. It contains cross-client data and the company's hardware configuration, patches, add-ons, plug-ins, etc.
001 is a copy of the master client.
066 is called the EarlyWatch client. If there is any problem with the SAP system, the EarlyWatch client raises alerts.
What are the .sca files and their importance?
.sca stands for Software Component Archive;
.sda stands for Software Deployment Archive.
Both are used to deploy Java components, patches and other Java developments, delivered in the form of .sca, .sda, .war and .jar files.
What is SAPS?
The SAP Application Performance Standard (SAPS) is a hardware-independent unit that describes the performance of a system configuration in the SAP environment.
SAPS is derived from the SD standard application benchmark: 100 SAPS is defined as 2,000 fully business-processed order line items per hour. As a rough rule of thumb, a standard 1.6 GHz CPU delivers about 800 to 1000 SAPS.
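The definition translates directly into a conversion. A quick sketch (the 18,000 line items/hour figure is purely illustrative):

```python
# Convert SD-benchmark throughput to SAPS.
# Definition: 100 SAPS = 2,000 fully processed order line items per hour.
def saps_from_line_items(line_items_per_hour: float) -> float:
    return line_items_per_hour / 2000 * 100

# A server handling 18,000 order line items per hour delivers 900 SAPS,
# in line with the 800-1000 SAPS rule of thumb quoted above.
```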
How to schedule background jobs at OS level?
Trigger a sapevent using an OS script, and have an SAP background job set to run on that sapevent. I say this assuming Unix and a job in crontab; the same thing could be done on a Windows system. For example:
sapevt TRIGGER_NAME -t
pf=d:\usr\sap\DEV\sys\profile\DEV_DVEBMGS00_SVRNAME nr
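If you drive this from a script, the call can be assembled programmatically. A minimal Python sketch, where the event name and profile path are placeholders taken from the example above:

```python
# Build (and optionally run) the sapevt call shown above.
import subprocess

def build_sapevt_cmd(event: str, profile: str) -> list:
    # -t writes a trace file; pf= points sapevt at the instance profile
    return ["sapevt", event, "-t", "pf=" + profile]

cmd = build_sapevt_cmd(
    "TRIGGER_NAME",
    r"d:\usr\sap\DEV\sys\profile\DEV_DVEBMGS00_SVRNAME")
# subprocess.run(cmd, check=True)  # uncomment on a host where sapevt exists
```

The commented-out subprocess.run keeps the sketch runnable on machines without an SAP kernel installed.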
How do you increase tablespaces, resize them, and take backups, and in what situations are these done?
BR*Tools is used to perform all database-related tasks in SAP: extending or resizing tablespaces (brspace), backups (brbackup) and restores (brrestore). Tablespaces are extended when they are running out of space; backups are taken regularly and before major system changes.
How to define Logon groups? And what is Logon load balancing?
Logon groups are set up using the SMLG transaction.
Load balancing:
during logon, the message server checks for the least-loaded instance in the group and routes the request to that instance.
If we specify an instance instead of a logon group, the request is routed to that instance only, so no load balancing occurs in this case.
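The message server's choice boils down to "pick the best instance in the group". A simplified sketch (the instance names and load values are invented; the real algorithm weighs response time and logged-on users into a quality value):

```python
# Simplified logon load balancing: route to the least-loaded instance.
def pick_instance(instances: dict) -> str:
    # lower load value = better candidate
    return min(instances, key=instances.get)

group = {"app1_PRD_00": 0.72, "app2_PRD_00": 0.35, "app3_PRD_00": 0.58}
# pick_instance(group) routes the logon to app2_PRD_00
```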
What is the difference between Synchronous and asynchronous transports?
In a synchronous transport, the dialog or batch process is blocked until the import has ended.
In an asynchronous transport, the dialog or batch process is released as soon as the import starts.
What is consolidation & delivery route ?
The route from development to quality assurance is called the
consolidation route.
The route from quality assurance to production is called the delivery route.
They are used to transport data dev --> qua --> prod.
How to know whether a system is Unicode or non Unicode?
Through the SM51 transaction we can see whether a system is Unicode or not:
in SM51, click the Release Notes button in the application toolbar, and you can see information such as the database, OS, kernel version, and whether the system is Unicode or non-Unicode.
How can we creat a Z authorization object and what the procedure and the T-code for the same?
Transaction codes for creating a Z authorization object: SU20, SU21.
SU20 - create the authorization fields.
SU21 - create an object class and the authorization object that includes the added fields; select the transaction code assignment and assign it to the needed transaction.
Then go to the user profile, add the object manually, and grant the authorization.
Can we apply support packages while users are logged into the system?
No, we can't, because some programs and tables get updated when support packages are applied.
If a user is using such a program or table, the Support Package Manager will not be able to update it and will terminate with a dump. So it is better to apply support packages when no users are logged into the system. Performance will
also be better with no users logged in.
What is supplementation language?
The default languages for a newly installed system are German and English. SAP supports many other languages, but any language other than English/German may not be fully translated. To
fill this gap, supplementation languages are installed on the SAP system.
Supplementation language:
the additional language data installed on the SAP system when a language that is not fully translated is filled in from the basic languages of SAP (English/German).
What is the difference Between Role and Profile ?
Profiles are a component of the older SAP releases. They were not replaced in the newer versions; instead a new layer, roles, was placed above profiles.
Profiles are therefore a subcomponent of roles, with a one-to-many relationship: when the authorization objects overflow one profile's limit, a second profile is created for that role, and so on.
Roles are assigned to users to provide the newer functionality (menus etc.), but it is the profiles that grant the authorizations.
What are homogeneous copy and heterogeneous copy, and how do you do them? How do you import OSS notes? What is OCS, and how do you apply OCS patches? The ABAP support package level can be found in SPAM, but how do you find the Java stack level?
A homogeneous copy is done when the source and target systems are on the same OS and database.
Heterogeneous Copy is done when the source and target system differ either in OS or Database. Any ONE difference needs a heterogeneous copy.
A homogeneous copy is done by the export/import technique.
A heterogeneous copy is done by system migration. It is the same as export/import, except that it asks for the target OS and DB type and needs a migration key to be entered.
OSS notes are applied using T-code SNOTE.
Java stack level can be found at
http://hostname.domain.com/5
goto system info to find the java stack level.
Could you explain the transport steps procedure?
Go to transaction STMS, then Transport Overview.
Create a system (virtual or external) for the quality and production systems. If the development system already appears there, create only QUA and PRD; if it does not, log on to client 000 with SAP*, run STMS and create all three systems from there, then proceed as above. After
creating all three systems, go to Transport Routes, then
configure the routes, group the three systems, give the group a name, and save and activate the configuration throughout the landscape. The Transport Management System is now configured.
How you will do client copy? If SAP * user is not available at all in your system then how u will do client copy?
We do a local client copy using the SCCL transaction. Go to RZ04 first and increase the number of background processes in the specified operation mode if needed, since client copies run in the background.
If SAP* is not available, log in at OS level as <sid>adm and run:
sqlplus /nolog
connect / as sysdba
delete from sapsr3.usr02 where mandt='<client>' and bname='SAP*';
commit;
Then go to /usr/sap/SID/SYS/profile and change the default profile:
add the parameter login/no_automatic_user_sapstar=0.
Or go to RZ10, select the default profile, choose extended maintenance, and
add the parameter login/no_automatic_user_sapstar=0.
What is Domain controller & transport domain ?
Domain controller:
the system that centrally administers the transport domain.
Transport domain:
the set of systems whose transports are administered together; the transport layers and routes are configured here. To access this configuration, use transaction STMS.
What is the significance of ODS in BIW?
An ODS Object serves to store consolidated and debugged transaction data on a document level. It describes a consolidated dataset from one or more InfoSources. This dataset can be analyzed with a BEx Query or InfoSet Query.
The data of an ODS Object can be updated with a delta update into InfoCubes and/or other ODS Objects in the same system or across systems. In contrast to multi-dimensional data storage with InfoCubes, the data in ODS Objects is
stored in transparent, flat database tables.
What is Bex?
BEx (Business Explorer) is the reporting tool used to work with data in the BW database. BEx has a Web-based user interface and is made up of two components, the BEx browser and the BEx analyzer.
The BEx browser provides an organized interface where a user can access and work with any type of document assigned to them in the Business Information Warehouse, such as workbooks, links, and BW Web reports. The BW database itself is segmented into discrete data areas called
InfoCubes that are made up of data and associated metadata. The BEx analyzer allows the user to examine segmented data in a variety of useful combinations, for example when comparing financial data for different fiscal years.
Name some drawbacks of SAP?
There are many benefits to implementing an integrated solution such as SAP.
Commercial benefits would include having a single source for your financial information. Capturing your business transactions in one location allows you to easily review inventory, customer and vendor activity.
On the technical side, a solution on a single platform will enable easier maintenance and support, reducing costs.
Having a consolidated system means fewer interfaces to support. By having a single system of record your human resources will become familiar with terminology associated
with this data and the standard processes. This may improve communication and create a work force that is easier to transfer between roles.
Any drawbacks to such a solution would depend on the amount of restrictions you choose to place on your environment.
New business solutions may have to fit within the current system, technology must be compatible and human resources must adapt to handle data in certain standard processes.
What is WF and its importance?
Business Work Flow: Tool for automatic control and
execution of cross-application processes. This involves
coordinating the persons involved, the work steps required,
the data, which needs to be processed (business objects).
The main advantage is reduction in throughput times and the
costs involved in managing business processes. Transparency
and quality are enhanced by its use.
Important Websites
http://www.sapdb.info/wp-content/uploads/2009/04/java-addin-installation.pdf
IBM DB2 UDB versus Oracle backup and recovery http://www.ibm.com/developerworks/data/library/techarticle/dm-0407tham/index.html
SAP How to Step by Step Guide with Screen Shot
http://saphowto.wordpress.com/page/2/
Oracle: Administering Databases and Datafiles
The database-creation operation is split into the tasks of the DBA and the tasks of the end user or application developer. These tasks are split based on what level they access the DBMS.
The Oracle DBA is responsible for all tasks that relate to the DBMS at the low level. Operations that involve the management of datafiles, redo log files, control files, tablespaces, extents, and segments are the responsibility of the DBA, as are the tasks of creating the tables, indexes, clusters, and views (with certain installations, some of these tasks might be performed by or in conjunction with the application development team). In any case, these responsibilities are addressed separately.
Tasks Involved in Creating a Database
Creating a database involves one Oracle DDL statement, and perhaps weeks or months of preparation to be ready for that one step. To create a database, you must know a lot about the data that will be put into the database, the data-access patterns, and the database's volume of activity. All these factors are used to determine the layout of the datafiles and redo log files. These are the responsibility of the Oracle DBA.
Under Windows NT, you must create the instance before you create the database. Because Oracle runs as a service under NT, the instance is necessary for the database-creation phase. An instance can be created, modified, or deleted through the NT Instance Manager. This should not be confused with the Enterprise Manager instance-management tool. Procedures on how to create this bootstrap instance were covered yesterday.
Creating the database actually occurs in two separate--but related--steps. The first step involves the actual database-creation command. This command creates the redo log files, the control files, and the datafiles necessary to create the SYSTEM tablespace. The SYSTEM tablespace contains the SYSTEM rollback segment, the data dictionary, stored procedures, and other structures necessary to run the Oracle instance.
The second phase involves adding tablespaces, tables, indexes, and so on that are used to store your specific data. The first phase described here is covered today; the remaining tasks necessary to finish creating your database will be described tomorrow. It is only when these additional tablespaces are added and your tables are created that your database is complete.
It is important that the DBA and end user work together in defining the database, because the physical layout and the actual data should be configured in an optimal manner. If you underconfigure the hardware or create a poor database layout, you will see a severe degradation in performance.
Tasks of the DBA
The DBA is responsible for all the low-level formatting of the database. I refer to this as formatting because that is basically what these steps do. When you format a disk, it is checked and zeroed out; likewise, when you create a tablespace and datafile, Oracle essentially checks out the disk space and lays down its internal format on the physical disk.
The DBA is responsible for creating the database, adding datafiles, and managing the control files and redo log files necessary for the proper function of the Oracle RDBMS. The DBA is also responsible for allocating these resources to the end user so that he or she can properly use them. The DBA or developer must then build tables, indexes, and clusters on these tablespaces. After the tables have been built and loaded, the user can then access this data.
Tasks of the User or Developer
It is the responsibility of the developer to relay to the DBA what the structure of the data should be and how it will be accessed. In this way, the DBA can have all of the information necessary to properly lay out the database. It is the responsibility of both the DBA and the application developer to work together to provide a stable and usable environment for the end user.
Designing the Database
Designing the database can be quite complex and time consuming, but well worth the effort. Any mistakes at this point can be very costly in terms of performance and stability of the system in the long run. A well-designed system takes into account the following factors:
Performance--The database that has been designed for performance from the very beginning will outperform any system that has not. Many critical performance items can only be configured in the design stage, as you will soon see.
Backup--Often, the DBA is given only a short time to accomplish the required backup operations. By planning the data layout at the very beginning with this criterion in mind, these operations can more easily be accomplished.
Recovery--Nobody plans for his system to crash, but it is an unfortunate fact of life that hardware and software components sometimes fail. Planning can facilitate the recovery process and can sometimes be the difference between recovering and not recovering.
Function--The database layout has to take into account its ultimate function. Depending on what type of applications are being run and what the data looks like, there might be different design considerations.
Physical Database Layout
As part of the design considerations mentioned previously, the physical layout of the database is very important. You should consider several factors when designing the physical layout, including
Database size--You must be able to support the amount of data you will be loading into the database.
Performance--A physical disk drive can support only a certain number of I/Os before performance begins to suffer.
Function--You might decide to lay out tablespaces based on their function. This allows different departments to have different backup schedules, and so on.
Data protection--It is very important that some types of files be protected against media failure. Primarily, the redo log files and the archive log files need to be protected.
Partitioning--Depending on what type and amount of partitioning you will be doing, the physical layout might vary.
So that you gain a complete understanding of how and why the physical database design might vary based on function, let's review a few basic factors.
Database Size
The size of the database is a key factor in how the physical layout is designed. For very small databases, this might not be much of an issue, but for very large databases it can be a major issue. You must make sure that you have not only enough space for the datafiles themselves, but also for associated indexes. In some cases, you might need to have a large temporary area to copy input files to before they are loaded into the database. Oracle has a few restrictions on the size of the components of the database:
The maximum size of a datafile is 32GB (gigabytes).
The maximum number of datafiles per tablespace is 1,022.
The maximum size of a tablespace is 32TB (terabytes).
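The 32TB tablespace ceiling follows from the other two limits. A quick sanity check:

```python
# The tablespace ceiling is implied by the per-file and per-tablespace
# limits quoted above: 32 GB per datafile, 1,022 datafiles per tablespace.
GB_PER_TB = 1024

max_datafile_gb = 32
max_files_per_tablespace = 1022

max_tablespace_tb = max_datafile_gb * max_files_per_tablespace / GB_PER_TB
# 32 GB * 1022 files ~= 31.9 TB, i.e. just under the stated 32 TB limit
```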
As you can see, Oracle allows you to create and maintain very large databases. You might think this is an incredible size for a database and no system will ever achieve this size. Well, I can remember when a 10MB disk drive was huge for a PC. If industry trends continue the way they've been going, I would not be surprised to see systems with 32TB tablespaces in the near future.
As you saw on Day 4, "Properly Sizing Your Database and Planning for Growth," it is not only necessary to build your system with today's requirements in mind, but also to plan for the future. Systems can increase in size at incredible rates, and you must be ready for it.
Performance
An important factor to remember when designing the physical layout of your database is the performance of the various components in the system. The load on the system caused by numerous users requesting data will generate a certain amount of disk I/O.
The disk drives that comprise the system can service only so many I/Os per second before the service time (the time it takes for an I/O to complete) starts increasing. In fact, it is recommended that for a standard 7200 RPM SCSI disk drive, you run it at only the following rates:
Random I/O--60-70 I/Os per second per disk drive.
Sequential I/O--100 I/Os per second per disk drive.
NOTE: With a sequential I/O, the data that is requested is either on the same track as the last data accessed or on an adjacent track.
With a random I/O, the data that is requested is on another track on the disk drive, which requires the disk arm to move, thus causing a seek. This track seek takes much more time to complete than the actual reading of the data.
Taking these factors into account, you should isolate the sequentially accessed data and spread out the randomly accessed data as much as possible. A hardware or software disk array is a good way to spread out these randomly accessed I/Os. By determining the amount of I/O traffic that will be generated, you can decide how many disk drives are required. A lack of disk drives can cause severe performance problems. In many cases, you will find that you are required to use many more disk drives for performance reasons than you would for size requirements.
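The drive-count arithmetic is simple but worth making explicit. A sketch using the per-drive rates quoted above (the 1,000 IOPS workload is an invented example):

```python
# Size the disk count from a workload's I/O rate and the per-drive
# service limits quoted above (~65 random IOPS for a 7200 RPM drive).
import math

def drives_needed(total_iops: float, iops_per_drive: float) -> int:
    return math.ceil(total_iops / iops_per_drive)

# A workload generating 1,000 random IOPS at ~65 IOPS per drive needs
# 16 drives -- often more than capacity alone would require.
```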
TIP: The redo log files are sequentially accessed, as are the archive log files. These files should be isolated from randomly accessed files in order to increase performance.
Function
You might also find that you want to separate your database into different tablespaces based on function. That way, maintenance operations and backups can be done on a per-department basis. For example, you can put accounting and sales on different tablespaces so they can be backed up separately.
You will also find that different types of operations have different characteristics. For example, an OLTP system that has a large number of updates is very sensitive to the placement of the redo logs due to performance considerations. This type of system might also be continuously creating archive log files that need to be protected and backed up. This requires some planning.
On the other hand, a decision support system (DSS) that primarily queries might not need a high-performance redo log volume, and archiving might occur only once per day. In that case, you might want to design your database layout to favor the datafiles.
Data Protection
The primary job of the DBA is to protect the data in the system. As part of this job, you the DBA must determine how to protect that data. As you saw on Day 2, "Exploring the Oracle Architecture," every change that Oracle makes to the database is written to the redo log files and, in turn, these redo log files are archived. These redo log files and archive log files can be used, in conjunction with a recent backup, to recover the database to the point of system failure. This is, of course, assuming that the redo log files and archive log files are intact.
It is therefore necessary to protect the redo log files and archive log files from media failure. This can be accomplished either via hardware or software fault tolerance. I prefer hardware fault tolerance in the form of a RAID (redundant array of inexpensive disks) subsystem, but software fault tolerance is also very good.
There are several options available with RAID controllers; the most popular are RAID-1 and RAID-5. Each has advantages and disadvantages, as shown here:
RAID-1--Also known as mirroring. The entire contents of a disk drive are duplicated on another disk drive. This is the fastest fault-tolerant method and offers the most protection. It is, however, the most costly because you must double your disk-space requirements.
RAID-5--Also known as data guarding. In this method of fault tolerance, a distributed parity is written across all the disk drives. The system can survive the failure of one disk drive. RAID-5 is very fast for reading, but write performance is degraded. RAID-5 is typically too slow for the redo log files, which need fast write access. RAID-5 can be acceptable for datafiles and possibly for the archive log files.
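The cost difference between the two levels is easy to quantify. A sketch of usable capacity under each scheme (disk counts are illustrative):

```python
# Usable capacity under the two RAID levels described above.
def raid1_usable(disks: int, disk_gb: float) -> float:
    return disks / 2 * disk_gb      # every disk is mirrored

def raid5_usable(disks: int, disk_gb: float) -> float:
    return (disks - 1) * disk_gb    # one disk's worth of distributed parity

# Six 100 GB drives: RAID-1 yields 300 GB usable, RAID-5 yields 500 GB --
# the cost/protection trade-off described above.
```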
TIP: It is a good idea to put your operating system and redo log files on separate RAID-1 volumes. This provides the best level of protection and performance.
Typically, the archive log files can reside on a RAID-5 volume because performance is not critical. If you find that you are having trouble keeping up on the archive log writes, you might need to move them to RAID-1.
Your datafiles can reside on a non-fault-tolerant disk volume if you are limited on budget and can afford to have your system down in the event of a disk failure. As long as you have a good backup, you lose no data.
Partitioning
You might also decide to adjust the physical layout of your database based on the partitioning method you have chosen. Oracle has introduced a new partitioning method with Oracle8. Various partitions can be allocated to Oracle tables based on ranges of data. Because the partitioning is actually done at the tablespace level and the tablespaces are made up of datafiles, it is important to plan your partitioning before you build your datafiles.
Because Oracle supports only range partitioning, whether you partition your data is dependent on your application and data. If you can take advantage of partitioning, you will definitely see some advantages in terms of reduced downtime and increased performance.
Creating the Instance
Before you can create the Oracle database under Windows NT or even start up the Oracle instance, you must create an instance. Follow the steps in the previous chapter to create the Oracle instance; start up the instance, and then you can create the database. Because Oracle functions as a service under NT, you cannot create a database without creating the instance.
Creating the Database
When you create a database, you are primarily creating the redo log files, the control files, and the SYSTEM tablespace. This SYSTEM tablespace is where important structures such as the data dictionary are kept. The data dictionary keeps track of all of the datafiles, the database schema, and all other pertinent database information. After you create this initial database, you will create more tablespaces and assign your schema to those tablespaces. So let's continue creating the initial database.
After the instance has been created, you can create the database. Creating the database is done either through Enterprise Manager or with the CREATE DATABASE DDL command. Although Enterprise Manager is quite convenient and easy to use, I prefer to script the creation procedure into a SQL file. By doing this, you can easily run this creation procedure over and over again and modify it for other purposes. This also provides you with a record of how this procedure was done.
Setup
There are a few initial setup steps that should be completed before you begin the actual creation process. These steps are designed to help you create the right configuration as well as to protect yourself from potential future problems. These steps involve the following:
1. Backing up any existing databases on the system
2. Creating the init.ora file
3. Starting up the Oracle instance
If you follow these steps, you should be ready to successfully create an Oracle database.
Let's look at these steps.
Backing up existing Databases
This is purely a precautionary step. It is always a good idea to back up all your databases on a regular basis. It is also recommended that you back up your databases prior to any major system changes, such as the creation of a new database.
No matter how careful you are in preparing for the database creation, there is always some danger in making major changes to the system. Because it is possible that a mistake could affect existing control files, redo log files, or datafiles, this precaution might save you quite a bit of work.
If some unforeseen event causes data loss in an existing database, the recovery process will be facilitated by having a fresh backup. This is just a precaution, and one that is well worth the time and effort.
Creating the init.ora File
It is necessary to create a new parameter file for each new database. The parameter file, also known as the init.ora file, contains important information concerning the structure of your database. All the Oracle tuning parameters are described in Appendix B, "Oracle Tuning Parameters," but a few parameters are critical to the creation of the database:
DB_NAME--This parameter specifies the name of the database. The DB_NAME parameter is a string of eight or fewer characters. This name is typically the same as your Oracle SID (system identifier). The default database was built with DB_NAME = oracle.
DB_DOMAIN--This parameter specifies the network domain where your server resides. This parameter, in conjunction with the DB_NAME parameter, is used to identify your database over the network. The default database was built with DB_DOMAIN = WORLD.
CONTROL_FILES--This parameter specifies one or more control files to be used for this database. It is a very good idea to specify multiple control files, in case of disk or other failures.
DB_BLOCK_SIZE--This parameter specifies the size of the Oracle data block. The data block is the smallest unit of space within the datafiles, or in memory. The DB_BLOCK_SIZE can make a difference in performance, depending on your application. The default size is 2,048 bytes, or 2KB. After the database is built, the block size cannot change.
DB_BLOCK_BUFFERS--This parameter specifies the number of blocks to be allocated in memory for database caching. This is very important for performance. Too few buffers causes a low cache-hit rate; too many buffers can take up too much memory and cause paging. This parameter can be changed after the database has been built.
PROCESSES--This parameter specifies the maximum number of OS processes or threads that can be connected to Oracle. Remember that this must include five extra processes to account for the background processes.
ROLLBACK_SEGMENTS--This parameter specifies a list of rollback segments that are acquired at instance startup. These segments are in addition to the system rollback segment. This should be set after you create the rollback segments for your database.
The following parameters should also be set, based on your licensing agreement with Oracle:
LICENSE_MAX_SESSIONS--This parameter specifies the maximum number of concurrent sessions that can connect into the Oracle instance.
LICENSE_SESSIONS_WARNING--This is similar to LICENSE_MAX_SESSIONS in that it relates to the maximum number of sessions that can be connected to the instance. After LICENSE_SESSIONS_WARNING sessions have connected, you can continue to connect more sessions until LICENSE_MAX_SESSIONS has been reached, but Oracle warns you that you are approaching your limit.
LICENSE_MAX_USERS--This parameter specifies the maximum number of unique users that can be created in the database.
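Pulling these parameters together, a minimal init.ora fragment for the example database might look like the following sketch. All file paths and numeric values here are illustrative assumptions, not requirements; tune them for your own system.

```ini
# Sample init.ora fragment -- values are illustrative only
db_name = oracle
db_domain = WORLD
control_files = (c:\orant\database\ctl1orcl.ora, d:\orant\database\ctl2orcl.ora)
db_block_size = 2048
db_block_buffers = 550
processes = 100
# Set rollback_segments only after the rollback segments have been created:
# rollback_segments = (rb1, rb2, rb3, rb4)
license_max_sessions = 100
license_sessions_warning = 90
license_max_users = 100
```

Note that the two control files are placed on different drives, following the advice above about surviving a disk failure.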
After these parameters are set, you can move on to the next phase: starting up the Oracle instance.
Starting Up the Oracle Instance with NOMOUNT
Before you start up the Oracle instance, check your SID. This will indicate which database you will connect to. You should typically set your SID to the same name as in the DB_NAME parameter. When your application connects into Oracle, it uses the SID to determine which database (if there is more than one) to connect to. Depending on the application and your network, the SID might be used to connect you to a particular database on a particular system via SQL*Net.
This is similar to starting up the instance as shown yesterday, except that to create a database, the instance must be started with the NOMOUNT option (because no database associated with that instance is yet available to mount). After the SID has been checked, you can then start the Oracle instance. This can be accomplished in two ways: by using the Oracle Instance Manager or by using Server Manager. Both methods are presented here.
Starting the Instance with Server Manager
The way I prefer to build a database is by scripting it into a command file. That way, I will have a permanent record of what I have done to create the database. The first command in my script will be to start the Oracle instance in NOMOUNT mode as follows:
connect internal/oracle
startup [pfile=c:\orant\database\initORCL.ora] NOMOUNT;
NOTE: The brackets indicate an optional parameter. If the pfile parameter is not specified, c:\orant\database\initSID.ora will be used (where SID is the value of your SID environment variable).
By scripting, you can reuse this in the event you need to re-create the database or as a template for other database creations.
Creating the Database
After you have created the instance, you can move on to the next stage: creating the database itself. As with the instance, it is possible to create the database both from a graphical tool (in this case, the NT Instance Manager) or from the command line or script using the Oracle Server Manager. Here you will look at both methods. I prefer character-based creation because it can be scripted and thus re-used.
Creating the Database with Server Manager
To create the database with Server Manager, you must type it manually or, as I prefer, use a SQL script. The database is created with the CREATE DATABASE command.
The Syntax for CREATE DATABASE
The syntax for this command is as follows:
SYNTAX:
CREATE DATABASE [database]
    [CONTROLFILE REUSE]
    LOGFILE [GROUP group_number] logfile
        [, [GROUP group_number] logfile] ...
    [MAXLOGFILES number]
    [MAXLOGMEMBERS number]
    [MAXLOGHISTORY number]
    [MAXDATAFILES number]
    [MAXINSTANCES number]
    [ARCHIVELOG | NOARCHIVELOG]
    [EXCLUSIVE]
    [CHARACTER SET charset]
    [NATIONAL CHARACTER SET charset]
    DATAFILE file_specification
        [AUTOEXTEND OFF | ON [NEXT number K|M] [MAXSIZE UNLIMITED | number K|M]]
    [, DATAFILE file_specification
        [AUTOEXTEND OFF | ON [NEXT number K|M] [MAXSIZE UNLIMITED | number K|M]]] ...
The various parameters and variables are
database--The name of the database to be created. This is up to eight characters long.
CONTROLFILE REUSE--This optional parameter specifies that any existing control files be overwritten with this new information. Without this parameter, the CREATE DATABASE command would fail if the control files exist.
LOGFILE--This parameter is followed by the log-file name. This specifies the name of the redo log file. You can specify the log-file group with the optional GROUP parameter, or a log-file group number will be assigned automatically.
MAXLOGFILES--This parameter specifies the maximum number of log-file groups that can be created for this database.
MAXLOGMEMBERS--This parameter specifies the maximum number of log-file members in a log-file group.
MAXLOGHISTORY--This is a parallel-server parameter that specifies the maximum number of archive log files to be used in recovery in a parallel-server environment.
MAXDATAFILES--This parameter specifies the maximum number of files that can be added to a database before the control file automatically expands.
MAXINSTANCES--This parameter specifies a maximum number of instances that the database can have open simultaneously.
ARCHIVELOG--This parameter specifies that the database will be run in ARCHIVELOG mode. In ARCHIVELOG mode, a redo log group must be archived before it can be reused. ARCHIVELOG mode is necessary for recovery.
NOARCHIVELOG--This parameter specifies that the database will be run in NOARCHIVELOG mode. In NOARCHIVELOG mode, the redo log groups are not archived. This is the default setting.
EXCLUSIVE--This parameter specifies that the database is mounted in EXCLUSIVE mode after it has been created. In EXCLUSIVE mode, only one instance can mount the database.
CHARACTER SET--This parameter specifies that the data in the database will be stored in the charset character set.
NATIONAL CHARACTER SET--This parameter specifies that the National Character Set used to store data in the NCHAR, NCLOB, and NVARCHAR2 columns will use the charset character set.
DATAFILE--This parameter specifies that the file identified by file_specification will be used as a datafile.
File specification is made up of the following:
'filename' SIZE number (K or M)--The file specification defines the name and the initial size, in K (kilobytes) or M (megabytes), of the datafile.
[REUSE]--This parameter allows you to use the name of an existing file.
The following options are available to the DATAFILE parameter:
AUTOEXTEND OFF--Specifies that the autoextend feature is not enabled.
AUTOEXTEND ON--Specifies that the autoextend feature is enabled.
The following options are available to the AUTOEXTEND ON parameter:
NEXT--Specifies the number K (kilobytes) or M (megabytes) automatically added to the datafile each time it autoextends.
MAXSIZE UNLIMITED--Specifies that the maximum size of the extended datafile is unlimited. It continues to grow until it runs out of disk space or reaches the maximum file size.
MAXSIZE number (K or M)--Specifies that the maximum size to which the datafile can autoextend is number K (kilobytes) or M (megabytes).
The CREATE DATABASE command might seem to be quite complex, but it is not really that difficult. It is not necessary to use all the optional parameters, but as you gain experience, you might decide to use them. An example of creating a database is shown here:
CREATE DATABASE logs CONTROLFILE REUSE
LOGFILE
GROUP 1 ('d:\database\log1a.dbf', 'e:\database\log1b.dbf') SIZE 100K,
GROUP 2 ('d:\database\log2a.dbf', 'e:\database\log2b.dbf') SIZE 100K
DATAFILE 'd:\database\data1.dbf' SIZE 10M,
'd:\database\data2.dbf' SIZE 10M AUTOEXTEND ON NEXT 10M MAXSIZE 50M;
It is not necessary to create all the datafiles at database-creation time. In fact, if you are creating a large number of datafiles, it is more efficient to create the datafiles in parallel using ALTER TABLESPACE ADD DATAFILE.
The CREATE DATABASE command serializes its operations. So if you specify two datafiles, the second will not be created and initialized until the first one has completed. The operation of adding datafiles can, however, be accomplished in parallel. This will reduce the time necessary to create the database.
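As a sketch of that approach, you could create the database with a single datafile and then add the remaining datafiles afterward; each ALTER TABLESPACE statement can be issued from its own session, so the files are created and initialized in parallel. The tablespace and file names below are hypothetical.

```sql
-- Add further datafiles after database creation. Run each statement from a
-- separate session to create and initialize the files in parallel.
-- (Tablespace name 'users' and file names are illustrative assumptions.)
ALTER TABLESPACE users
    ADD DATAFILE 'e:\database\data3.dbf' SIZE 10M;
ALTER TABLESPACE users
    ADD DATAFILE 'f:\database\data4.dbf' SIZE 10M;
```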
Creating the Catalogs
After the database has been created, two scripts (CATALOG.SQL and CATPROC.SQL) should be run to create the data dictionary views. These views are important to the operation of the system as well as for the DBA. These catalog scripts can be run within the Server Manager by using the @ character to indicate that you are running a SQL script, as shown here:
@D:\ORANT\RDBMS80\ADMIN\CATALOG;
...
Much data returned
...
@D:\ORANT\RDBMS80\ADMIN\CATPROC;
...
Much data returned
...
You will see the SQL script text as it is running. This process is quite time consuming and will display a very large amount of data.
NOTE: Running the CATALOG.SQL and CATPROC.SQL scripts will take a significant amount of time; don't worry if it seems like it is taking forever.
CATALOG.SQL
The Oracle SQL script CATALOG.SQL creates many of the views used by the system and by the DBA. These include the V$ views that are referenced throughout the book, as well as the DBA_, USER_, and ALL_ views. Synonyms are also created, and many grants are performed by this script. All these views, synonyms, and permissions are very important to the operation of the system.
CATPROC.SQL
The CATPROC.SQL script is also extremely important to the function of the system. This script sets up the database for the procedural option. The CATPROC.SQL script runs many other SQL scripts, including ones that set up permissions, insert stored procedures into the system, and load a number of packages into the database.
If you run the CATALOG.SQL and CATPROC.SQL scripts, your system will be configured and ready to create tables and load the database. Nonetheless, there might be other options you want to set or parameters you want to alter. These can be accomplished through the use of the ALTER DATABASE command, as shown in the next section.
Modifying the Database
Many of the tasks involved in modifying the Oracle database, tablespaces, and datafiles can be done via the Oracle Enterprise Manager tools or through the use of DDL statements via the Oracle Server Manager. Both methods are described in this section. As you will see, the Oracle Enterprise Manager simplifies the task by providing you with choices, but is somewhat limited in functionality.
Modifying the Database with the ALTER DATABASE Command
Modifying the database from Server Manager is accomplished via the ALTER DATABASE command. This command is used to alter various parameters and specifications on the database itself, and can be typed into Server Manager or run as a SQL script. The syntax of the ALTER DATABASE command is as follows.
ALTER DATABASE [database]
    [MOUNT [STANDBY DATABASE] [EXCLUSIVE | PARALLEL]]
    [CONVERT]
    [OPEN [RESETLOGS | NORESETLOGS]]
    [ACTIVATE STANDBY DATABASE]
    [ARCHIVELOG | NOARCHIVELOG]
    [RECOVER recover_parameters]
    [ADD LOGFILE [THREAD number] [GROUP number] logfile
        [, [GROUP number] logfile] ...]
    [ADD LOGFILE MEMBER 'filename' [REUSE] [, 'filename' [REUSE]] ...
        TO GROUP number | TO ('filename' [, 'filename'] ...)]
    [DROP LOGFILE GROUP number | ('filename' [, 'filename'] ...)
        [, GROUP number | ('filename' [, 'filename'] ...)] ...]
    [DROP LOGFILE MEMBER 'filename' [, 'filename'] ...]
    [CLEAR [UNARCHIVED] LOGFILE
        GROUP number | ('filename' [, 'filename'] ...)
        [, GROUP number | ('filename' [, 'filename'] ...)] ...
        [UNRECOVERABLE DATAFILE]]
    [RENAME FILE 'filename' [, 'filename'] ... TO 'filename' [, 'filename'] ...]
    [CREATE STANDBY CONTROLFILE AS 'control_file_name' [REUSE]]
    [BACKUP CONTROLFILE [TO 'filename' [REUSE]] | [TO TRACE [RESETLOGS | NORESETLOGS]]]
    [RENAME GLOBAL NAME TO database[.domain] ...]
    [RESET COMPATIBILITY]
    [SET [DBLOW = value] | [DBHIGH = value] | [DBMAC ON | OFF]]
    [ENABLE [PUBLIC] THREAD number]
    [DISABLE THREAD number]
    [CREATE DATAFILE 'filename' [, 'filename'] ... AS filespec [, filespec] ...]
    [DATAFILE 'filename' [, 'filename'] ...
        ONLINE | OFFLINE [DROP] | RESIZE number (K|M)
        | AUTOEXTEND OFF | ON [NEXT number (K|M)] [MAXSIZE UNLIMITED | number (K|M)]
        | END BACKUP]
The various parameters and variables for the ALTER DATABASE command are as follows:
database--This specifies the name of the database to be altered and is a character string up to eight characters in length.
MOUNT--This parameter is used to mount an unmounted database.
The various options to the ALTER DATABASE database MOUNT command are as follows:
MOUNT STANDBY DATABASE--This is used to mount a standby database. The standby database will be described in detail on Days 16, "Understanding Effective Backup Techniques," and 17, "Recovering the Database."
MOUNT EXCLUSIVE--This is used to mount the database in EXCLUSIVE mode. EXCLUSIVE mode specifies that only one instance can mount the database. This is the default mode for the ALTER DATABASE MOUNT command.
MOUNT PARALLEL--This is used to mount the database in PARALLEL mode. PARALLEL mode allows other instances to mount the database in a parallel-server environment.
Other optional parameters to the ALTER DATABASE command are
CONVERT--This option is used to convert an Oracle7 data dictionary to the Oracle8 data dictionary.
OPEN--This parameter opens the database for normal use. Optionally, you can specify the additional parameter RESETLOGS or NORESETLOGS.
The options to the ALTER DATABASE database OPEN command are as follows:
OPEN RESETLOGS--With the RESETLOGS parameter set, the redo logs are essentially reset to sequence number 1, discarding all information currently in the redo logs. The RESETLOGS option is required after an incomplete media recovery performed with the RECOVER UNTIL option or after recovery using a backup control file. A backup should be taken immediately after an ALTER DATABASE OPEN RESETLOGS command. This is described in more detail on Days 16 and 17.
OPEN NORESETLOGS--This is the default operation of the ALTER DATABASE OPEN command, specifying not to reset the redo logs.
Other optional parameters to the ALTER DATABASE command are
ACTIVATE STANDBY DATABASE--This parameter is used to make a standby database into the current active database. The standby database is described in detail on Days 16 and 17.
ARCHIVELOG--This specifies that the database is running in ARCHIVELOG mode. In ARCHIVELOG mode, each redo log group is archived to an archive log file before it can be reused. ARCHIVELOG mode is essential for data recovery in the event of media failure.
NOARCHIVELOG--This specifies that the database is not running in ARCHIVELOG mode. Running in NOARCHIVELOG mode is very dangerous because media recovery might not be possible. See Days 16 and 17 for more details.
RECOVER--The recovery parameters are shown immediately after this section.
ADD LOGFILE 'logfile'--This parameter is used to add log files named 'logfile' to the database. By specifying the THREAD option, you can add the log file to a specific parallel-server thread; omitting the THREAD parameter causes the redo log group to be added to your current instance. You can also specify the value of the GROUP parameter; if you omit the GROUP value, one is assigned automatically. You can specify one or more log-file groups with this parameter.
ADD LOGFILE MEMBER 'filename'--This parameter adds members named 'filename' to existing log-file groups. The optional parameter REUSE must be included if the file 'filename' already exists. You can specify the group that you are adding to in one of several ways.
The various options to the ALTER DATABASE database ADD LOGFILE MEMBER command are as follows:
TO GROUP number--This can be used if you know the log-file group identification parameter.
TO ('filename' [, 'filename'] ...)--You can also identify the log-file group by specifying the name or names of all members of the existing log-file group.
Other optional parameters to the ALTER DATABASE command include
DROP LOGFILE--This parameter drops all members of a log-file group. You specify the group that you are dropping in one of two ways: by specifying the GROUP or by specifying the members of the group, as described here.
The various options to the ALTER DATABASE database DROP LOGFILE command are as follows:
GROUP number--If you know the group identifier, you can drop the log-file group by specifying it.
'filename'--You can identify the log-file group to drop by specifying the name or names of all members of the existing log-file group.
Other optional parameters to the ALTER DATABASE command are
DROP LOGFILE MEMBER 'filename'--This command is used to drop a member or members of a log-file group. The member to be dropped is specified by the log-file member's filename. One or more members can be specified.
CLEAR LOGFILE--This command is used to drop and re-create a log file. It can be used in the event of a problem with an existing log file. By using the optional UNARCHIVED qualifier, you can clear a log file that contains logging information without first archiving that information. If you use the UNARCHIVED qualifier, you will probably make your database unrecoverable in the event of media failure. You specify the log files that you are clearing in one of two ways: by specifying the GROUP or by specifying the members of the group, as described here.
The various options to the ALTER DATABASE database CLEAR LOGFILE command are as follows:
GROUP number--If you know the group identifier, you can drop the log-file group by specifying it.
'filename'--You can identify the log-file group to clear by specifying the name or names of all members of the existing log-file group.
UNRECOVERABLE DATAFILE--This option to CLEAR LOGFILE is used if the tablespace has a datafile that is offline. It requires that the tablespace and the datafile be dropped after the CLEAR LOGFILE operation has finished.
Other optional parameters to the ALTER DATABASE command are
RENAME FILE 'filename' TO 'filename'--This command is used to rename datafiles or log files. It changes only the name in the control file, not on disk.
CREATE STANDBY CONTROLFILE AS 'control_file_name'--This command is used to create a standby control file called control_file_name. The optional REUSE qualifier allows you to specify the name of an existing file that will be reused.
BACKUP CONTROLFILE--This command is used to create a backup of the control file.
This can be accomplished in the following two ways.
The various options to the ALTER DATABASE database BACKUP CONTROLFILE command are as follows:
TO 'filename'--By assigning the backup control file to a filename, the control file will be backed up to this file. If the file already exists, the optional REUSE qualifier must be used.
TO TRACE--This optional parameter writes SQL to a trace file that can be used to re-create the control files. You can specify the qualifiers RESETLOGS or NORESETLOGS, which will add SQL to open the database with these options. The SQL statements are complete enough to start up the database, re-create the control files, and recover and open the database appropriately.
Tip: By running the ALTER DATABASE database BACKUP CONTROLFILE TO TRACE command after your database has been altered in any way, you will have a method of re-creating the control files if necessary. This is part of a good recovery plan.
Other optional parameters to the ALTER DATABASE command are
RENAME GLOBAL NAME TO--This command allows you to rename the database name, domain name, or both.
RESET COMPATIBILITY--This command resets the compatibility level of the database to an earlier version of Oracle after the instance is restarted.
SET--The following Trusted Oracle parameters are modified via the SET command: SET DBLOW = value, SET DBHIGH = value, and SET DBMAC ON or OFF. Trusted Oracle is not covered in this book; see the Trusted Oracle Administration Guide from Oracle for more information.
ENABLE [PUBLIC] THREAD number--This parallel-server command is used to enable a thread of redo log groups identified by number. The addition of the PUBLIC qualifier allows this log file thread to be used by any instance.
DISABLE THREAD number--This command disables a log-file thread group identified by number, making it unavailable to any instance.
CREATE DATAFILE 'filename'--This parameter is used to re-create a datafile that was lost due to media failure and was not backed up.
AS filespec--This option of the CREATE DATAFILE command is used to specify the filespec specification parameters.
DATAFILE 'filename'--The ALTER DATABASE database DATAFILE command has several functions that allow you to change the state of database datafiles.
The various options to the ALTER DATABASE database DATAFILE `filename' command are as follows:
ONLINE--Brings the datafile online.
OFFLINE [DROP]--Takes the datafile offline. The DROP option is required when the database is running in NOARCHIVELOG mode.
RESIZE number (K or M)--This is used to resize a datafile to number K (kilobytes) or M (megabytes).
AUTOEXTEND OFF or ON--This command is used to alter a datafile to have autoextend either on or off. With autoextend on, the file will increase in size based on the AUTOEXTEND parameters.
The various options to the ALTER DATABASE database DATAFILE `filename' AUTOEXTEND ON command are as follows:
NEXT number (K or M)--This option specifies that the database will grow in increments of number K (kilobytes) or M (megabytes) whenever space requirements force the datafile to grow.
MAXSIZE UNLIMITED--This parameter specifies that the maximum size of the datafile is governed only by disk space and OS datafile limitations. On NT, a datafile can grow to 32GB in size.
MAXSIZE number (K or M)--This option specifies that the maximum size the datafile will grow to is number K (kilobytes) or M (megabytes).
Another optional parameter to the ALTER DATABASE command is
END BACKUP--This option specifies that media recovery should not be performed when an online backup was interrupted by an instance failure.
The parameters and options to the RECOVER clause are
RECOVER [AUTOMATIC] [FROM 'path']
    [[STANDBY] DATABASE]
    [UNTIL CANCEL] | [UNTIL TIME 'time'] | [UNTIL CHANGE number]
    [USING BACKUP CONTROLFILE] ...
    [TABLESPACE tablespace [, tablespace] ...]
    [DATAFILE 'filename' [, 'filename'] ...]
    [LOGFILE 'filename']
    [CONTINUE [DEFAULT]]
    [CANCEL]
    [PARALLEL parallel_definition]
The various parameters and variables for the RECOVER option are
AUTOMATIC--This qualifier specifies that the recovery process automatically figures out the names of the redo log files that it needs to apply in order to perform media recovery.
FROM 'path'--This qualifier allows you to specify the location of archive log files. This is useful because you do not always keep the archive log files in the directory where they were originally generated.
STANDBY--This recovers the standby database.
DATABASE--This is the default option. It indicates that the database should be recovered.
UNTIL--The UNTIL parameters are very important to the recovery of the database if you are recovering from a software or operator problem. These parameters allow you to recover up to a specific point.
The various options to the ALTER DATABASE database RECOVER UNTIL command are as follows:
UNTIL CANCEL--The database will be recovered until you submit an ALTER DATABASE database RECOVER CANCEL command.
UNTIL TIME 'time'--This command performs a time-based recovery. It recovers all transactions that finished before 'time'. The qualifier is given in the form 'YYYY-MM-DD:HH24:MI:SS'. This can be quite useful if you know when the suspect SQL statement that caused the failure was issued.
UNTIL CHANGE number--This performs a recovery up until the last transaction before the specified system change number.
Other optional parameters to the ALTER DATABASE database RECOVER command are
USING BACKUP CONTROLFILE--This specifies that the recovery should be done using a backup control file.
TABLESPACE tablespace--This performs recovery only on the specified tablespace(s).
DATAFILE 'filename'--This performs recovery only on the specified datafile.
LOGFILE 'filename'--This performs recovery using the specified log file.
CONTINUE [DEFAULT]--This continues recovery after it has been interrupted. CONTINUE DEFAULT is similar but uses Oracle-generated default values.
CANCEL--This cancels the UNTIL CANCEL-based recovery.
PARALLEL (DEGREE number)--This specifies the degree of parallelism to use during the recovery process. The number of parallel processes is determined by the value of number.
The recovery process is key to the stability of Oracle and your database. This topic is covered in much more detail on Days 16 and 17.
Let's look at a few examples of using the ALTER DATABASE command to perform regular maintenance tasks.
Changing to Use ARCHIVELOG Mode
If you are not running in ARCHIVELOG mode, you are in danger of losing data in the event of a system failure. To alter the database to run in ARCHIVELOG mode, use the following syntax:
ALTER DATABASE logs ARCHIVELOG;
Performing a Timed Recovery
It is sometimes necessary to perform a timed recovery. If a SQL statement that caused data loss or a system failure was inadvertently run, you can recover until just before that statement was issued. Here is an example of how to perform a timed recovery:
ALTER DATABASE logs RECOVER UNTIL TIME '1999-07-04:15:03:00';
This statement recovers the database until 3:03 p.m. on July 4, 1999.
Open a Closed Database
Databases are often brought up and mounted but not opened for maintenance. To open a closed database, use the following syntax:
ALTER DATABASE logs OPEN;
Backing Up a Control File
Backing up control files is an important operation. Here is an example of how to use ALTER DATABASE to back up your control files:
ALTER DATABASE logs BACKUP CONTROLFILE TO 'C:\backup\cntrlLOGS.dbf';
Backing Up a Control File to Trace
Backing up your control file to trace generates a SQL script that can be used to re-create the control file in the event of an emergency recovery. Use this syntax:
ALTER DATABASE logs BACKUP CONTROLFILE TO TRACE;
Followup
Even after the database and datafiles have been created, your job is not over. You must watch the system carefully to make sure that you don't run out of space or other resources. As you saw on Day 4, capacity planning and sizing are not easy jobs. By anticipating and solving problems before they become critical, you will avoid costly setbacks. You must periodically monitor the system from the OS and the Oracle perspectives to avoid these types of problems.
Monitoring the Datafiles
To make sure you are not running out of space, you can use Enterprise Manager's Storage Manager utility. If you click the Datafiles icon on the left, you will see a list of datafiles, the size of each file, and how much it is used on the right. This is a quick and easy way of determining whether you are running out of space in your datafiles. You can manually check this by looking at several different system views and by adding up individual free spaces. The Oracle Storage Manager simplifies this task.
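The manual check described above can be sketched with a query against the standard dictionary views DBA_DATA_FILES and DBA_FREE_SPACE. This is one possible formulation, not the only way to do it; the megabyte conversion and column aliases are my own.

```sql
-- Compare allocated space with remaining free space, per tablespace.
-- DBA_DATA_FILES holds the allocated datafile sizes; DBA_FREE_SPACE
-- holds the free extents that have not yet been used.
SELECT d.tablespace_name,
       SUM(d.bytes) / 1024 / 1024 AS total_mb,
       (SELECT NVL(SUM(f.bytes), 0)
          FROM dba_free_space f
         WHERE f.tablespace_name = d.tablespace_name) / 1024 / 1024 AS free_mb
  FROM dba_data_files d
 GROUP BY d.tablespace_name;
```

A tablespace whose free_mb is approaching zero is a candidate for a resize, an additional datafile, or autoextend.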
Load Balancing
It is important that you not overdrive any of the disk drives or disk arrays in your system. This can severely hurt your performance. The I/O rates at which your system is running can be monitored with the NT Performance Monitor. I will not spend much time on the Performance Monitor, but I do want to mention a few points that you should watch out for:
Use diskperf--Turn on diskperf by using the NT command diskperf -y. By turning on diskperf, you will see much more information about your disk I/O rates when you run perfmon.
Monitor I/O--Use perfmon to look at PhysicalDisk. Of great importance is the reads and writes per second (throughput) and the seconds/read and seconds/write (latency).
If you see a disk drive or disk array (a disk array looks like a big disk drive to the OS) that has many more I/Os per second per disk than the others, you might have a balance problem.
TIP: The I/Os per disk drive per second should not exceed 60-79 on the data volumes. On an array, divide the number of I/Os by the number of drives to get the I/Os per drive.
A typical disk drive should take about 20-30 milliseconds to read or write to the drive. If your seconds/read or seconds/write is much higher, you are probably overloading your disks.
Useful Transaction Codes - Basis
AL02 Database Alert Monitor
AL03 Operating System Alert Monitor
AL05 Workload Alert Monitor
AL08 Current active users (in system)
AL11 Display operating system file from CCMS
DB01 Exclusive waits in Oracle database
DB02 Database performance; tables and index
DB03 Parameter changes in database
DB05 Analysis of table with respect to indexed fields
DB12 Backup logs
DB13 DBA planning calendar
DB14 DBA logs
OSS1 Online Service System logon
RZ01 Graphical background job scheduling monitor
RZ02 Network graphical display of instance
RZ03 Server status, alerts, maintain operations mode
RZ04 Maintain operations mode and instance
RZ06 Maintain alert threshold
RZ08 CCMS Alert Monitor
RZ10 Maintain system profiles X
RZ11 Display profile parameter attributes
RZ20 Alert Monitor 4.0
RZ21 Maintain settings for Alert Monitor 4.0
SA38 ABAP reporting
SCAM CATT management
SCAT Computer Aided Test Tool
SCC1 Client copy transport X
SCC3 Client copy log
SCC4 Client copy administration X
SCC5 Delete clients X
SCC6 Client import X
SCC7 Client import – post processing
SCC8 Client export
SCC9 Remote client copy X
SCCL Local client copy X
SCMP Table comparison
SCU3 Table history
SE01 Transport organizer
SE03 Workbench organizer: tools
SE06 Set up workbench organizer
SE09 Workbench organizer
SE10 Customizing organizer
SE11 Data Dictionary maintenance X
SE12 Data Dictionary display
SE15 Repository Info System
SE16 Display table content X X
SE17 General table display X
SE38 ABAP editor X
SFT2 Maintain public holiday calendar
SFT3 Maintain factory calendar
SICK Installation check
SM01 Lock transactions X
SM02 System messages
SM04 Overview of users
SM12 Database locks X
SM13 Update terminates X
SM21 System log
SM30 Maintain tables (not all tables can use SM30) X
SM31 Maintain tables X
SM35 Batch input monitoring
SM36 Schedule background jobs
SM37 Overview of background jobs
SM39 Job analysis
SM49 External operating system commands, execute
SM50 Work process overview
SM51 Instance overview
SM58 Error log for asynchronous RFC
SM59 RFC connection, maintain
SM63 Operations mode, maintain
SM64 Event trigger
SM66 Global work process overview
SM69 External operating system commands, maintain
SP00 Spool
SP01 Spool control
SP02 Display output requests
SP11 TemSe (temporary sequential objects) contents
SP12 TemSe administration
SPAD Spool administration (printer setup)
SPAM SAP Patch Manager
SPAU Intersection SAP transport/customer modifications
SPCC Spool; consistency check
SPDD Intersection SAP transport/customer modifications, DDIC
SPIC Spool; installation check
ST01 SAP system trace X
ST02 Buffer statistics
ST03 Workload analysis
ST04 Database performance analysis
ST05 SQL trace X
ST06 Operating system monitor
ST07 Application monitor
ST08 Network monitor
ST09 Network Alert monitor
ST10 Table call statistics
ST11 Display developer trace X
ST12 Single transaction analysis
ST14 Application analysis
ST22 ABAP dump analysis
ST4A Oracle: analyze the shared cursor cache
STAT Local transaction statistics
STMS Transport Management System X
STUN Performance monitoring
SU01 User maintenance X
SU01D Display users
SU02 Maintain authorization profiles X
SU03 Maintain authorizations X
SU10 Mass change to user records X
SU12 Delete ALL Users X
SU2 Maintain user parameters
SU22 Authorization object check in transactions
SU3 Maintain own user parameters
SU53 Display authorization checked values
Time profile in ST03n
To execute this report:
Go to SE38 or SA38 and enter the program name "SWNC_CONFIG_TIMEPROFILE".
Then click Execute (or press F8). You will be offered the following options and can choose one of the three:
1 > Display current configuration
2 > Total night hours: time blocks 21:00 to 24:00 and 00:00 to 06:00 (this is also the default selection)
3 > Calculate all hours separately
Source code : SWNC_CONFIG_TIMEPROFILE
For night hours:
* Slot boundaries are stored as 24 two-digit hour values (48 characters)
wa-systemid   = sy-sysid.
wa-paramname  = 'ST03_TIMEPROFILE_START_OF_PERIOD'.
wa-paramtype  = 'C'.
wa-par_chrval = '000000000000060708091011121314151617181920212121'.
APPEND wa TO parameter_table.

wa-systemid   = sy-sysid.
wa-paramname  = 'ST03_TIMEPROFILE_END_OF_PERIOD'.
wa-paramtype  = 'C'.
wa-par_chrval = '060606060606070809101112131415161718192021242424'.
APPEND wa TO parameter_table.
For calculating each hour separately:
wa-systemid   = sy-sysid.
wa-paramname  = 'ST03_TIMEPROFILE_START_OF_PERIOD'.
wa-paramtype  = 'C'.
wa-par_chrval = '000102030405060708091011121314151617181920212223'.
APPEND wa TO parameter_table.

wa-systemid   = sy-sysid.
wa-paramname  = 'ST03_TIMEPROFILE_END_OF_PERIOD'.
wa-paramtype  = 'C'.
wa-par_chrval = '010203040506070809101112131415161718192021222324'.
APPEND wa TO parameter_table.
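The 48-character parameter values above pack 24 two-digit hour boundaries, one per time slot. As a sanity check, the strings can be generated mechanically; this is an illustrative Python sketch (not part of SAP — the function name is invented):

```python
def build_time_profile(hours):
    """Pack 24 hour values (one per ST03N time slot) into the 48-character
    string format used by the ST03_TIMEPROFILE_*_OF_PERIOD parameters."""
    assert len(hours) == 24, "one value per hour slot expected"
    return "".join(f"{h:02d}" for h in hours)

# "Calculate all hours separately": slot n runs from hour n to hour n+1
start_hours = build_time_profile(list(range(24)))
end_hours = build_time_profile(list(range(1, 25)))
print(start_hours)  # 000102030405060708091011121314151617181920212223
print(end_hours)    # 010203040506070809101112131415161718192021222324
```

The two printed strings match the "each hour separately" values in the ABAP fragment above.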
Adding multiple printing devices in one transport
1> Create a transport request.
2> Double-click the transport to open it.
3> Go to the Objects tab and switch to edit mode to change the transport. For each printer, enter:
Program ID : R3TR
Object Type : SPDV
Object Name : <printer name>
In the same way you can copy and paste further entries from Excel.
Save the transport, then release it.
EarlyWatch unformatted: VB run-time error 6068
Run-time error 6068 ("Programmatic access to Visual Basic Project is not trusted") occurs while downloading an EarlyWatch report from Solution Manager.
Solution
1. Open the Office 2003 or Office XP application in question. On the Tools menu, click Macro, and then click Security to open the Macro Security dialog box.
2. On the Trusted Sources tab, select the Trust access to Visual Basic Project check box to turn on access.
3. Click OK to apply the setting. You may need to restart the application for the code to run properly.
SAP Enhancement Packages
The enhancement package for the SAP ERP application provides new or improved software functionality that you can implement in a modular fashion. There's no need to run a major upgrade project. And, you keep your core software stable while you activate only the new features and technical improvements you want – on your own timetable – to add the most value to your business.
SAP designed the enhancement package for SAP ERP to easily deliver business- and industry-specific functionality, enterprise services bundles, and other functions that enhance and simplify the use of the SAP ERP application through improvements to the user interface and processes.
You can activate the latest enhancement package via the switch framework, which gives you the flexibility to choose what you want to implement, helps isolate impacted objects, and minimizes testing requirements. This rapid, nonintrusive deployment of selected improvements enables you to innovate and benefit from a superior solution fit without risking side effects in other parts of your applications.
Based on service-oriented architecture (SOA), the enhancement package for SAP ERP delivers functionality that streamlines basic business processes, improves employee productivity, sharpens business insight, and enables you to adapt quickly to changing industry requirements. The enhancement package includes collections, or bundles, of enterprise services. Each bundle provides new services, as well as documentation on how the services can help extend and reconfigure processes or groups of related processes, along with explanations of the relevant processes and roles, descriptions of business objects, and tips on how to put the new services to work.

SAP is the first software provider to use SOA to deliver new functionality, enterprise services bundles, and simplified user interfaces through enhancement packages. Enterprise services are one of the ways SAP is delivering on its strategy to provide SOA across all products. SAP customers and partners can suggest ideas for new services and help develop them by participating in the Enterprise Services Community program.
Enhancement packages for SAP NetWeaver 7.0
An enhancement package for SAP NetWeaver 7.0 is an optional installation that provides new functionality on top of an existing SAP NetWeaver 7.0 implementation. Customers can either stay with the existing core functionality of their SAP NetWeaver 7.0 implementation and apply the regular support package stacks (SPSs), which contain patches but no new features, or they can add an enhancement package to their SAP NetWeaver 7.0 landscape and benefit from the new functionality it contains. The patches of the core delivery line are also contained in the enhancement package delivery line, so customers who opt for an enhancement package can still maintain their system easily. The maintenance period for an enhancement package for SAP NetWeaver follows the maintenance strategy of the underlying SAP NetWeaver main release. For SAP NetWeaver 2004 there will be no enhancement packages, only the regular support package stack shipments.
The ramp-up of enhancement package 1 for SAP NetWeaver 7.0 consists of two scenarios:
1> Information Lifecycle Management
Retention Management
Retention Warehouse
2> Adobe Flash Islands for Web Dynpro ABAP
Integration of Adobe Flash Island for Web Dynpro ABAP
Business Intelligence
New features in the Portal for SAP enhancement package 1 for SAP NetWeaver 7.0:
Navigation Cache
Setting a Default Property Category
Business Rule Engine written in ABAP
NOTE: Once activated, this business functionality cannot be deactivated.
Monitoring Deadlocks
SAP Monitor for deadlocks
ST04
Any deadlock that occurs is logged in the database error log. This happens irrespective of the application whose database transaction is terminated.
Short dump analysis (ST22)
If an SAP work process terminated a database transaction, the system writes an ABAP short dump with the DBIF_RSQL_SQL_ERROR error. You can use transaction ST22 to view and analyze this short dump. In this transaction, there is a very detailed description of the deadlock situation with all parameters that are required for an in-depth analysis. In particular, you can find the terminated ABAP program and the affected program lines. This also reveals the table on which the deadlock occurred.
System Log (SM21)
Transaction SM21 displays the SAP system log. The system logs a deadlock here as an error. The system specifies the SAP work process that is affected. The entry Deadlock occurred includes the text of the database error.
15:47:13 SPO 35 000 SAPSYS BYO Deadlock occurred
15:47:14 SPO 35 000 SAPSYS BY4 Database error 1205 at SEL access to table TSP02
15:47:14 SPO 35 000 SAPSYS BY0 > Transaction (Process ID 1026)
Developer trace file of the work process (ST11 or SM50)
The work process that becomes the victim of a deadlock notes this database error in its developer trace file, including information about the command that was executed; the affected work process can be identified via the system log. This is an example of an entry:
B ***LOG BY4=> sql error 1205 performing SEL on table TSP02 [dbtran 6980 ]
B ***LOG BY0=> Transaction (Process ID 1026) was deadlocked on lock resources with another process and has been chosen as the dead
How can I avoid deadlocks?
Deadlock
A deadlock occurs when two database transactions each hold a lock that the other one needs, so neither can continue. To reduce the risk, keep lock durations short: use row-level locks (for example, by creating lock objects) and release locks promptly with COMMIT WORK.
Locking Protocols
By using logical locks, you can lock single rows or generic areas of database tables. There are write locks and read locks. For one object you can set a maximum of one write lock, but multiple read locks. Read and write locks are mutually exclusive. For this reason, read locks are also called shared locks, and write locks are also called exclusive locks.
The locking protocol determines which locks are set for which type of access and at which point in time. For locking to work properly, all application components must agree on the same locking protocol and apply it consistently.
Locking protocol 1
Only write locks are set. If you want to change an object in a transaction, the application requests a write lock for this object at the start of the transaction. If the object is already locked, the application cannot run in change mode, but in display mode only. Since no read locks can be set, objects can always be read (as long as the isolation level of the database allows this, which is usually the case – see above). This locking protocol guarantees the highest level of parallelism and scalability.
Locking protocol 2
Read and write locks are set. When an object is read from the database, a read lock is set. If you want to change an object, a write lock is requested. If a write lock cannot be set because a read or write lock already exists, the application has to decide whether to cancel the transaction or to try again later. The write lock can be requested immediately when you change the object, or shortly before the end of the transaction. The advantage of requesting a write lock at an early stage is that you can inform users in time. On the other hand, the advantage of requesting a write lock at a later stage is that the lock is held for a short time only, which enables a higher level of parallelism and scalability, because a write lock prevents the object from being read in another transaction. If you request the write lock at a later stage, make sure that you request a read lock at the latest when you change the object; this ensures that the object cannot be changed in another transaction in the meantime.
All applications that access a particular table have to use the agreed locking protocol.
If two applications try to get locks for multiple objects simultaneously but in a different sequence, deadlocks occur. To avoid deadlocks, an exception is thrown if a lock cannot be set.
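The standard remedy implied above is an agreed global lock ordering. A minimal illustrative sketch (using Python threads rather than database transactions, purely to show the principle):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transaction(needed_locks):
    # Sort by a global, stable key (here: object id) so that every
    # transaction acquires its locks in the same order. Without a
    # common order, two transactions could each grab one lock and
    # wait forever for the other -- a circular wait, i.e. a deadlock.
    ordered = sorted(needed_locks, key=id)
    for lock in ordered:
        lock.acquire()
    try:
        pass  # ... work on the locked objects ...
    finally:
        for lock in reversed(ordered):
            lock.release()

# Both threads name the locks in opposite order, which without the
# agreed ordering could deadlock; with it, both always finish.
t1 = threading.Thread(target=transaction, args=([lock_a, lock_b],))
t2 = threading.Thread(target=transaction, args=([lock_b, lock_a],))
t1.start(); t2.start()
t1.join(); t2.join()
print("both transactions completed")
```

The same idea applies to database applications: agree on one canonical order (for example, by table and primary key) in which all programs request their locks.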
Database locks are requested for objects and are retained until the end of the relevant database transaction, which can also include several statements. At a COMMIT or a ROLLBACK, the locked objects are then released again.
Lock Types
Shared locks are used for read operations (SELECT) and mean that no changes can be made to the locked object. You may have many shared locks on the same object at the same time. However, no shared locks are used for read accesses in the SAP environment (dirty read).
Exclusive locks are used before changing operations (INSERT, UPDATE, DELETE) are executed and they prevent further accesses of any kind to the locked object. No other lock can be held at the same time as an exclusive lock.
When executing a SELECT FOR UPDATE, the system initially uses an update lock which, after the SELECT or before the UPDATE, is converted into an exclusive lock. An update lock allows shared locks to be held in parallel, but does not allow any further update locks or exclusive locks.
The database interface receives database error 1205 for the victim of the deadlock. The SAP system reacts by canceling and rolling back the affected SAP Logical Unit of Work (LUW). The user generally receives a short dump DBIF_RSQL_SQL_ERROR with the text 'SQL error 1205 occurred when accessing table "X"'. In addition to the short dump, the error is logged in the system log and in the developer trace file. The user must decide whether to repeat the transaction or whether the short dump occurred at a non-critical point.

An exception to this is when the victim is an SAP update work process. In this case, the SAP system repeats the UPDATE several times, and only considers the transaction as failed once the database error has occurred repeatedly.

The winners of the deadlock can take the requested lock that is now free. Apart from a short delay, they will not notice the deadlock situation. The SQL Server decides which database transaction is the victim. For the victim in the SAP system, a deadlock has different effects depending on the work process category:
D = Dialog: ABAP Report ends with a short dump (database error 1205). Transaction must be restarted.
B = Batch: Whether it can be restarted depends on the relevant program.
S = Spool: The spool request terminates. It remains in the spool list and can be edited manually later.
U = Update: Since update requests are stored in full in the update tables, update processing enables you to automatically restart the update request. This is controlled by the rdisp/update_max_attempt_no SAP profile parameter, which specifies how often (when processing the same update request during a repeated deadlock) an automatic repetition occurs before finally being cancelled. During the automatic rollback attempts, the SAP enqueue locks are also retained, meaning that no inconsistencies can arise. If a final update termination occurs, the update program terminates with a short dump and the update request is included in the list of terminated updates.
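The automatic retry described for update work processes can be sketched generically. This is an illustrative Python sketch, not SAP code; DeadlockError and max_attempts stand in for database error 1205 and the rdisp/update_max_attempt_no profile parameter:

```python
class DeadlockError(Exception):
    """Stands in for database error 1205 (chosen as deadlock victim)."""

def run_update(work, max_attempts=3):
    """Retry an update request after a deadlock, the way an update work
    process retries under rdisp/update_max_attempt_no; re-raise only
    after the final attempt has failed."""
    for attempt in range(1, max_attempts + 1):
        try:
            return work()
        except DeadlockError:
            if attempt == max_attempts:
                raise  # final termination: short dump + entry in the update error list

# Demo: an update that is chosen as deadlock victim twice, then succeeds.
attempts = []
def flaky_update():
    attempts.append(1)
    if len(attempts) < 3:
        raise DeadlockError("SQL error 1205")
    return "committed"

result = run_update(flaky_update)
print(result, "after", len(attempts), "attempts")  # committed after 3 attempts
```

This retry is only safe because the full update request is stored in the update tables and the SAP enqueue locks are retained across attempts, as described above.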
It is possible to assign a deadlock priority to a database connection. This specifies whether a database transaction that is running on the corresponding connection should be a preferred deadlock victim. If a deadlock occurs, the SQL server then selects as a victim the connection that is marked as a preferred deadlock victim under the connections involved.
In the SAP system, every work process category can be preset as a preferred deadlock victim. This is controlled by the dbs/mss/deadlock_priority SAP profile parameter (as of kernel 6.xx; in 4.x kernels the parameter is called dbs/oledb/deadlock_priority, and in the 3.1I kernel it is called rsdb/mssql/deadlock_priority). The parameter contains a list of the standard one-letter work process type codes. By default, the parameter is not set. Another useful setting is the value U, which means that update work processes are preferred deadlock victims.
NOTE: See SAP Notes 631668 and 860475 for more information. Notes 84348 and 751203 also deal with deadlocks.
Licensing j2ee engine
Once the J2EE Engine has been installed,
you can log on, since a temporary license has automatically been installed. You then have to request and install a permanent license from SAP. If you have installed the SAP Web Application Server with ABAP and J2EE, you can follow the licensing procedure used in earlier releases of the SAP Web AS. The documentation can be found in SAP License.

There are two types of SAP licenses, permanent and temporary:

Permanent license: How to obtain a permanent license from SAP is explained in Requesting and Installing an SAP License.

Temporary license: If your permanent license has expired, as a short-term measure you can install a temporary license. In the Visual Administrator choose Server 0 -> Services -> Licensing Adapter, then the tab page Runtime -> Installed Licenses, and choose Install subsequent temporary license. The temporary license is valid for 28 days; by then you should have installed a permanent license again. Note that you cannot install another temporary license if the expired license is itself a temporary license.

A newly installed license does not take effect until the J2EE Engine has been restarted, regardless of whether it is a permanent or a temporary license.

Requesting and Installing the SAP License
You need a valid SAP license to log on to the SAP Web AS. When the SAP Web AS is installed, a temporary license is also installed, which you must replace with a permanent license.
Note that with a J2EE+ABAP installation (SAP Web Application Server with ABAP and J2EE), you have to import the ABAP license (see SAP License). You can then ignore this section.
Prerequisites
You have installed the SAP J2EE Engine and started the Visual Administrator.

Procedure
1. In the Visual Administrator choose Server 0 -> Services -> Licensing Adapter. The system data that you need to request the license from the SAP Service Marketplace appears:
- Installation number (if it exists)
- System ID
- System number (if it exists)
- Hardware key
- Current release
2. Under the Internet address service.sap.com -> mySAP Business Suite, you can get to the initial page of the license key requests in the SAP Service Marketplace. Here you will find all the information you need to request license keys.
3. Enter your e-mail address in the request. The license key will be sent to you promptly by e-mail. Alternatively, you can download the license key from the SAP Service Marketplace. Do not make any changes to the license key: to import it, the file must not have been changed.
4. In the Licensing Adapter in the Visual Administrator choose Install License from File.
5. Select the license file that you received from SAP.

Result
The license has been installed. You can view all the licenses installed in your SAP system by choosing, in the Visual Administrator, Server -> Licensing Adapter, then the tab Runtime -> Installed Licenses.

Additional Information
For more information about requesting license keys, see SAP Note 94998.
SAP VIRUS SCAN INTERFACE
You can use the Virus Scan Interface to include external virus scanners in the SAP system to increase the security of your system. This means that you can use a high-performance integration solution to scan files or documents that are processed by applications for viruses. This applies both for applications delivered by SAP and for customer developments, for example, during data transfers across networks or when documents are exchanged using interfaces. This Virus Scan Interface is available for both AS ABAP and AS Java.
Since SAP-managed databases are central distribution points, it is very dangerous to store malformed or otherwise dangerous data in them as this data might spread very quickly across the network. Applications that are transferring files to or from SAP-managed databases must ensure that the data is not vulnerable to any known threats.
The architecture of the Virus Scan Interface allows you to combine different products, systems, and platforms to scan your applications for viruses. This is possible since SAP provides a certified interface for the virus scan products of other vendors. The partners’ virus scan engines can, for example, have completely different architectures. However, by integrating an adapter using a proprietary connection, any partner can connect any existing virus scan product to the virus scan interface.
On the SAP side, different VSILIB layers are used to include the ABAP and Java worlds, and to deal with platform dependencies (of operating systems and processors, that is, 32 or 64 bit) in the integration of the virus scan interface.
Elements of the Virus Scan Interface
The graphic below clarifies the layer structure of the SAP Virus Scan Interface (SAP VSI API) and shows which parts are delivered by SAP, and which by the relevant partners.
Software Layers of the Virus Scan Interface
The SAP Virus Scan Interface (SAP VSI API):
1. Is accessed by partner products directly with the scan engine or indirectly using a separate virus scan adapter.
2. Contains the functions required to configure and to initialize the scan engine.
3. Provides the parameters and data for every virus scan.
4. Processes the check result.
ABAP or Java application programs start virus scans with dedicated classes and methods of the SAP virus scan interface, which, in turn, call a virus scan server or the AS Java directly using RFC.
Virus Scan Profiles
Different applications have different requirements for virus scanning. For example, an HR application dealing with external recruiting forms wants high security scanning whereas performance is not a critical aspect. On the other hand, a CRM application dealing with mostly internal documents wants less scanning effort and better performance. Virus scan profiles are used to allow for application-specific configuration of virus scanning.
Application programs use virus scan profiles to check data for viruses. You can also define which scanner group/groups are to be used to check a document. You can also use a virus scan profile to assign configuration parameters for the virus scanner. If you check for viruses with this virus scan profile, the virus scanner receives the parameters.
Virus scan profiles can point to other profiles (reference mechanism). SAP delivers profiles for its applications, all pointing to the default profile. By creating one single virus scan profile and flagging it as the default profile, customers can use this profile for all SAP applications without separate configuration.
The system administrator can use the profile to activate or deactivate the virus scan for each component. By default, a virus scan profile is provided for each SAP application that integrates a virus scan.
Testing Your Application
To do this, you must first activate the virus scan; then you can test it using transaction VSCANTEST (ABAP) or the test application http://hostname:port/vscantest (J2EE).
Activating the Virus Scan
From a technology point of view, the virus scan can be used as of SAP_BASIS 640 for ABAP, and for J2EE with the feature pack (which corresponds to Support Package 7). From an application point of view this means: the virus scan function is available for all SAP solutions that are based on SAP NetWeaver (for example, SAP Business Suite, SAP Business ByDesign).
Before SAP NetWeaver, the virus scan is also available in SAP R/3 Enterprise.
ABAP:
(a) In transaction VSCANPROFILE, you can use the 'Active' indicator to activate or deactivate a profile.
(b) Alternatively, you can use view cluster maintenance (transaction SM34) to configure the delivered virus scan profile. The name of the view cluster is 'VSCAN_PROFILE_VC'. You can activate or deactivate a profile here using the 'Active' indicator.
J2EE:
You can configure a virus scan profile, and activate or deactivate it, in the Visual Administrator by choosing Services -> 'Virus Scan Provider' -> 'Profiles'.
Need to know how many work process to be configured in SAP ?
SAPinst installs SAP systems with a minimum number of work processes, which are calculated using the following formulas:
Number of dialog work processes = RAM/256 (min 2, max 18)
Number of update work processes = RAM/768 (min 1, max 6)
Number of update2 work processes = RAM/1024 (min 1, max 3)
Number of batch work processes = RAM/1024 (min 2, max 3)
Number of enqueue work processes = 1
Number of spool work processes = 1
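The formulas above can be turned into a quick sizing check. A small illustrative Python sketch (assuming RAM is given in MB, as the divisors suggest; this is not an official SAP tool):

```python
def work_processes(ram_mb):
    """Sketch of the SAPinst sizing formulas quoted above: RAM divided by a
    per-type constant, clamped to the stated minimum and maximum."""
    clamp = lambda n, lo, hi: max(lo, min(hi, n))
    return {
        "dialog":  clamp(ram_mb // 256,  2, 18),
        "update":  clamp(ram_mb // 768,  1, 6),
        "update2": clamp(ram_mb // 1024, 1, 3),
        "batch":   clamp(ram_mb // 1024, 2, 3),
        "enqueue": 1,
        "spool":   1,
    }

print(work_processes(4096))
# {'dialog': 16, 'update': 5, 'update2': 3, 'batch': 3, 'enqueue': 1, 'spool': 1}
```

For a 4 GB host this yields 16 dialog and 5 update work processes; the update2 and batch counts hit their stated maximum of 3.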
SAP Router Installation
Open a customer message with SAP (component XX-SER-NET-OSS-NEW) and ask them to register the hostname and IP address of your new SAProuter.
You have to register it with an official IP address (no internal IPs allowed), but it is allowed to use NAT in the firewall/router.
After you have received a confirmation from SAP that your SAProuter has been registered, you are ready to configure your SAProuter. If your SAProuter directory is C:\usr\sap\saprouter, these are the steps to follow.
Note: You will be asked for a PIN code. Just pick your own 4 digits, but you'll have to use the same PIN every time you're asked to enter one.
1. Set two environment variables, SECUDIR and SNC_LIB, according to the guide you've downloaded.
2. Download the SAP Cryptographic Library and unpack it into C:\usr\sap\saprouter.
3. To generate a certificate request, run the command:
sapgenpse get_pse -v -r C:\usr\sap\saprouter\certreq -p C:\usr\sap\saprouter\local.pse ""
4. Then follow the guide and request the certificate from http://service.sap.com/tcs -> Download Area -> SAProuter Certificate.
5. Create a file C:\usr\sap\saprouter\srcert and copy the requested certificate into this file. Then run the command:
sapgenpse import_own_cert -c C:\usr\sap\saprouter\srcert -p C:\usr\sap\saprouter\local.pse
6. To generate credentials for the user that runs the SAProuter service, run the command:
sapgenpse seclogin -p C:\usr\sap\saprouter\local.pse -O
(this will create the file "cred_v2")
7. Check the configuration by running the command:
sapgenpse get_my_name -v -n Issuer
(this should always give the answer "CN=SAProuter CA, OU=SAProuter, O=SAP, C=DE")
8. Create the SAProuter service on Windows with the command:
ntscmgr install SAProuter -b C:\usr\sap\saprouter\saprouter.exe -p "service -r -R C:\usr\sap\saprouter\saprouttab -W 60000 -K^p:^"
9. Edit the Windows Registry key as follows:
MyComputer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SAProuter\ImagePath --> Change both ^ to "
10. Start the SAProuter service
11. Enter the required parameters in OSS1 -> Technical Settings
Installation on UNIX
1. Create the subdirectory saprouter in the directory /usr/sap/.
2. Get the latest version of the SAProuter from the SAP Service Marketplace (service.sap.com/patches). Choose Support Packages and Patches -> Entry by Application Group -> Additional Components -> SAPROUTER. The SAProuter is in the package saprouter*.SAR; the program niping is also in this package. Copy the programs saprouter and niping to the newly created directory /usr/sap/saprouter. If you cannot copy the programs from the SAP Service Marketplace, you can copy a (possibly obsolete) version from your directory /usr/sap/<SID>/SYS/exe/run.
3. (Optional) If you want to start the SAProuter on the same computer used for an SAP instance, insert the following lines into the file /usr/sap/<SID>/SYS/exe/run/startsap, before the commands that start the SAP instance:
#
# Start saprouter
#
SRDIR=/usr/sap/saprouter
if [ -f $SRDIR/saprouter ] ; then
echo "\nStarting saprouter Daemon " | tee -a $LOGFILE
echo "----------------------------" | tee -a $LOGFILE
$SRDIR/saprouter -r -R $SRDIR/saprouttab | tee -a $LOGFILE &
fi
Normally the SAProuter runs on a different computer. If this is the case, this step is omitted and you start the SAProuter as described in Starting the SAProuter.
4. Maintain the route permission table saprouttab in the directory /usr/sap/saprouter. If you want to keep it in another directory or under a name other than saprouttab, you must specify this with the SAProuter option -R (see Option R). This should help with SAProuter configuration and installation.
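For step 4, a minimal saprouttab might look like the following. This is an illustrative sketch only; the addresses and ports are placeholders that you must replace with your own values:

```
# Route permission table (saprouttab) -- one rule per line
# Format: P(ermit) or D(eny)  <source host>  <destination host>  <destination port>
P  194.39.131.34  10.1.1.10  3200    # permit the SAP support network to reach the dispatcher of instance 00
D  *              *          *       # deny everything else (recommended final rule)
```

An explicit deny-all rule at the end keeps the router closed for anything you have not permitted explicitly.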
Exchange Infrastructure ( XI )
Check out the transactions which are frequently used:
SXMB_IFR - Start Integration Builder
SXMB_MONI - Integration Engine (Monitoring)
SXI_MONITOR - XI Message Monitoring
SLDCHECK - Test SLD connection
Some more:
SLDAPICUST - SLD API Customizing
SXMB_ADM - Administration of Integration Engine
SXI_CACHE - Directory Cache of XI
SXMB_MONI_BPE - Process Engine - Monitoring
Some very useful URLs (hostname is the host name of the server on which XI is running; instnumber is the two-digit instance number, so the HTTP port 5instnumber00 becomes e.g. 50000 or 50100 for instances 00 and 01):
http://hostname:5instnumber00/rep --- Exchange Infrastructure Tools
http://hostname:5instnumber00/sld --- System Landscape Directory
http://hostname:5instnumber00/rwb --- Runtime Workbench
Some more URLs which can be of use (not in the initial stage)...
http://hostname:5instnumber00/MessagingSystem --- Message Display Tool
http://hostname:5instnumber00/mdt/amtServlet --- Adapter Monitor
http://hostname:5instnumber00/exchangeProfile --- Exchange Infrastructure Profile
http://hostname:5instnumber00/CPACache --- CPA Cache Monitoring
http://hostname:5instnumber00/CPACache/refresh?mode=delta --- Delta CPA cache refresh
http://hostname:5instnumber00/CPACache/refresh?mode=full --- Full CPA cache refresh
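All of these URLs follow the J2EE HTTP port convention 5<instance number>00, so they can be generated mechanically. An illustrative Python sketch (the host name is a placeholder):

```python
def xi_url(host, instance_no, path):
    """Build an XI/J2EE URL; the HTTP port is 5<two-digit instance>00,
    e.g. instance 00 -> 50000, instance 01 -> 50100."""
    return f"http://{host}:5{instance_no:02d}00/{path}"

print(xi_url("xihost", 0, "rwb"))  # http://xihost:50000/rwb
print(xi_url("xihost", 1, "sld"))  # http://xihost:50100/sld
```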
Some transactions related to IDocs, as IDocs play a very important role in XI:
WE60 - Documentation for IDoc types
BD87 - Status Monitor for ALE Messages
IDX1 - Maintenance of Port in IDoc Adapter
IDX2 - Meta Data Overview in IDoc Adapter
WE05 - Lists the IDocs
WE02 - Displays the IDoc
WE19 - Test Tool
WE09 - Search for IDocs by Content