Oracle Architectural Components:
Objectives:
This section introduces the Oracle server architecture by examining the physical, memory, process, and logical structures involved in establishing a database connection, creating a session, and executing SQL commands.
Oracle is a Relational Database Management System (RDBMS): it uses the relational data model to store its database and SQL (Structured Query Language) to process the stored data. The architecture of the Oracle system is best explained in terms of the client/server paradigm. Thus, we will explain the architecture of the Oracle server using the structure called an instance.
An Oracle instance is a complex set of memory structures and operating system processes. It is the Oracle instance that manages all database activities, such as transaction processing, database recovery, form generation, and so on. The instance structure is loosely styled after UNIX's implementation of a multitasking operating system: discrete processes perform specialized tasks within the RDBMS and work together to accomplish the goals of the instance. Each process has a separate memory block that it uses to store local and private variables, address stacks, and other runtime information. The processes also use a common shared memory area for processing data concurrently. This memory block is called the System Global Area (SGA). Figure 1 illustrates the architecture of an instance.
System Global Area (SGA)
The SGA is the primary memory component of the instance. It provides the memory structures necessary for data manipulation, SQL statement parsing, and redo caching. The SGA is shared, which means that multiple processes can access and modify the data contained in it in a synchronized manner. The SGA consists of the following components:
- Shared Pool
- Database buffer cache
- Redo Log Buffer
- Multithreaded Server (MTS) structures
The SGA is a read/write memory area that stores information shared by all database processes and by all users of the database (sometimes it is called the Shared Global Area).
o This information includes both organizational data and control information used by the Oracle Server.
o The SGA is allocated in memory and virtual memory.
o The size of the SGA can be established by a DBA by assigning a value to the parameter SGA_MAX_SIZE in the parameter file—this is an optional parameter.
The SGA is allocated when an Oracle instance (database) is started up based on values specified in the initialization parameter file (either PFILE or SPFILE).
The SGA has the following mandatory memory structures:
· Database Buffer Cache
· Redo Log Buffer
· Java Pool
· Streams Pool
· Shared Pool – includes two components:
o Library Cache
o Data Dictionary Cache
· Other structures (for example, lock and latch management, statistical data)
Additional optional memory structures in the SGA include:
· Large Pool
The SQL*Plus command SHOW SGA displays the SGA memory allocations.
· This is a recent clip of the SGA for the DBORCL database at SIUE.
· In order to execute SHOW SGA you must be connected with the special privilege SYSDBA (which is only available to user accounts that are members of the DBA Linux group).
SQL> connect / as sysdba
Connected.
SQL> show sga
Total System Global Area 1610612736 bytes
Fixed Size 2084296 bytes
Variable Size 1006633528 bytes
Database Buffers 587202560 bytes
Redo Buffers 14692352 bytes
Early versions of Oracle used a static SGA. This meant that if modifications to memory management were required, the database had to be shut down, modifications were made to the init.ora parameter file, and then the database had to be restarted.
Oracle 11g uses a Dynamic SGA. Memory configurations for the System Global Area can be made without shutting down the database instance: the DBA can resize the Database Buffer Cache and Shared Pool dynamically, as illustrated below.
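For example, a DBA connected AS SYSDBA might resize these components with ALTER SYSTEM; the sizes shown are purely illustrative and must fit within the limit set by SGA_MAX_SIZE (if enough free granules are not available, the command fails with an error instead of resizing):

SQL> ALTER SYSTEM SET shared_pool_size = 256M;

System altered.

SQL> ALTER SYSTEM SET db_cache_size = 512M;

System altered.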
Several initialization parameters are set that affect the amount of random access memory dedicated to the SGA of an Oracle Instance. These are:
· SGA_MAX_SIZE: This optional parameter sets a limit on the amount of virtual memory allocated to the SGA; a typical setting might be 1 GB. However, if the value of SGA_MAX_SIZE in the initialization parameter file or server parameter file is less than the sum of the memory allocated for all components (whether specified explicitly in the parameter file or by default) at the time the instance is initialized, then the database ignores the setting for SGA_MAX_SIZE. For optimal performance, the entire SGA should fit in real memory to eliminate paging to/from disk by the operating system.
· DB_CACHE_SIZE: This optional parameter sets the amount of memory allocated to the Database Buffer Cache in standard database blocks. Block sizes vary among operating systems; the DBORCL database uses 8 KB blocks. The cache defaults to 48 MB on LINUX/UNIX and 52 MB on Windows operating systems.
· LOG_BUFFER: This optional parameter specifies the number of bytes allocated for the Redo Log Buffer.
· SHARED_POOL_SIZE: This optional parameter specifies the number of bytes of memory allocated to shared SQL and PL/SQL. The default is 16 MB; on a 64-bit operating system configuration, the default is 64 MB.
· LARGE_POOL_SIZE: This is an optional memory object – the size of the Large Pool defaults to zero. If the init.ora parameter PARALLEL_AUTOMATIC_TUNING is set to TRUE, then the default size is automatically calculated.
· JAVA_POOL_SIZE: This is another optional memory object. The default is 24 MB of memory.
The combined size of these components (DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE, LARGE_POOL_SIZE, and JAVA_POOL_SIZE) cannot exceed the value of SGA_MAX_SIZE.
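You can check how these parameters are currently set with the SQL*Plus SHOW PARAMETER command or by querying V$PARAMETER, for example:

SQL> show parameter sga_max_size

SELECT name, value
FROM v$parameter
WHERE name IN ('sga_max_size', 'db_cache_size', 'log_buffer',
               'shared_pool_size', 'large_pool_size', 'java_pool_size');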
Memory is allocated to the SGA as contiguous virtual memory in units termed granules. Granule size depends on the estimated total size of the SGA, which as was noted above, depends on the SGA_MAX_SIZE parameter. Granules are sized as follows:
· If the SGA is less than 1 GB in total, each granule is 4 MB.
· If the SGA is greater than 1 GB in total, each granule is 16 MB.
Granules are assigned to the Database Buffer Cache, Shared Pool, Java Pool, and other memory structures, and these memory components can dynamically grow and shrink. Using contiguous memory improves system performance. The actual number of granules assigned to one of these memory components can be determined by querying the database view named V$BUFFER_POOL.
Granules are allocated when the Oracle server starts a database instance in order to provide memory addressing space to meet the SGA_MAX_SIZE parameter. The minimum is 3 granules: one each for the fixed SGA, Database Buffer Cache, and Shared Pool. In practice, you'll find the SGA is allocated much more memory than this. The SELECT statement shown below reports the current size of the default buffer pool; the CURRENT_SIZE value of 560 corresponds to the 560 MB of Database Buffers shown in the SHOW SGA output above.
SELECT name, block_size, current_size, prev_size, prev_buffers
FROM v$buffer_pool;

NAME     BLOCK_SIZE CURRENT_SIZE  PREV_SIZE PREV_BUFFERS
-------- ---------- ------------ ---------- ------------
DEFAULT        8192          560        576        71244
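The granule size selected for the instance can be confirmed directly; the query below assumes a release (10g or later) that provides the V$SGAINFO view:

SELECT name, bytes
FROM v$sgainfo
WHERE name = 'Granule Size';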
Shared pool
The Oracle shared pool contains Oracle's library cache, which is responsible for collecting, parsing, interpreting, and executing all of the SQL statements that go against the Oracle database. The shared pool is therefore a key component, and the Oracle database administrator needs to check it for contention.
The shared pool acts as a buffer for SQL statements: Oracle's parsing algorithm ensures that identical SQL statements do not have to be parsed each time they are executed. Since it is not possible to dedicate separate regions of memory for the individual shared pool components, the shared pool is usually the second-largest SGA memory area (depending on the size of the DB_CACHE_SIZE parameter). The shared pool contains RAM memory regions that serve the following purposes:
• Library cache – The library cache contains the current SQL execution plan information. It also holds stored procedures and trigger code.
• Dictionary cache - The dictionary cache stores environmental information, which includes referential integrity, table definitions, indexing information, and other metadata stored within Oracle's internal tables.
• Session information – Systems that use SQL*Net version 2 with a multi-threaded server need this area to store session information. The V$SESSION view contains information related to Oracle Net users.
The following list describes the different areas stored in the shared pool and their purposes:
* Shared SQL Area - The shared SQL area stores each SQL statement executed in the database. This area allows SQL execution plans to be reused by many users.
* Private SQL Area - Private SQL areas are non-shared memory areas assigned to unique user sessions.
* PL/SQL Area - Used to hold parsed and compiled PL/SQL program units, allowing the execution plans to be shared by many users.
* Control Structures - Common control structure information, for example, lock information.
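A common way to check for shared pool (library cache) contention is to examine the library cache statistics; a steadily growing RELOADS count suggests that parsed statements are being aged out of the shared pool. For example:

SELECT namespace, gets, gethitratio, pins, reloads
FROM v$librarycache
ORDER BY namespace;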
Database Buffer Cache
The database buffer cache holds copies of data blocks read from the data files. The term data block is used to describe a block containing table data, index data, clustered data, and so on. Basically, it is a block that contains data. All user processes concurrently connected to the instance share access to the database buffer cache. The database buffer cache is logically segmented into multiple sets. This reduces contention on multiprocessor systems.
This area of the SGA contains only the buffers themselves and not their control structures. For each buffer, there is a corresponding buffer header in the variable area of the SGA.
Program Global Area (PGA)
A Program Global Area (PGA) is a memory region that contains data and control information for a server process. It is a non-shared memory region created by Oracle when a server process is started. Access to the PGA is exclusive to that server process and it is read and written only by Oracle code acting on its behalf. It contains a private SQL area and a session memory area.
A private SQL area contains data such as bind information and runtime memory structures. Each session that issues a SQL statement has a private SQL area. Session memory is the memory allocated to hold a session’s variables (logon information) and other information related to the session. For a shared server, the session memory is shared and not private.
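Instance-wide PGA usage can be monitored from the V$PGASTAT view; the statistic names below are the standard ones, and the values reported are per instance:

SELECT name, value, unit
FROM v$pgastat
WHERE name IN ('aggregate PGA target parameter',
               'total PGA allocated',
               'total PGA inuse');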
Buffer Cache Management
The database buffer cache is organized in two lists: the write list and the least-recently-used (LRU) list. The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are buffers that are currently being accessed.
When an Oracle process requires data, it searches the buffer cache, and if it finds the data blocks there, it uses them directly. This is known as a cache hit. If it cannot find the data, the data must be read from the data file. In this case, the process finds a free buffer to accommodate the data block by scanning the LRU list, starting at the least recently used end of the list. The process searches either until it finds a free buffer or until it has searched a threshold number of buffers.
When the user process is performing a full table scan, it reads the data blocks into buffers and places them on the LRU end instead of the MRU end of the LRU list. This is because a fully scanned table is usually needed only briefly and the blocks should be moved out quickly.
What Is a Dirty Block?
Whenever a server process changes or modifies a data block, it becomes a dirty block. Once a server process makes changes to the data block, the user may commit transactions, or transactions may not be committed for quite some time. In either case, the dirty block is not immediately written back to disk.
Writing dirty blocks to disk takes place under the following two conditions:
* When a server process cannot find a clean, reusable buffer after scanning a threshold number of buffers, then the database writer process writes the dirty blocks to disk.
* When the checkpoint takes place, the database writer process writes the dirty blocks to disk.
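A point-in-time snapshot of how many buffers are currently dirty can be taken from the V$BH view; this is only a sketch for illustration, and on a large cache the query itself can be expensive:

SELECT status, dirty, COUNT(*) AS buffers
FROM v$bh
GROUP BY status, dirty
ORDER BY status, dirty;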
Large Pool
The Oracle Large Pool is an optional memory component of the Oracle database SGA. This area provides large memory allocations for several situations that arise during the operation of an Oracle database instance:
1. Session memory for a shared server and the Oracle XA interface when distributed transactions are involved
2. I/O Server Processes
3. Parallel Query Buffers
4. Oracle Backup and Restore Operations using RMAN
The Large Pool plays an important role in Oracle database tuning, since the memory for the above components would otherwise be allocated from the shared pool. Also, because of the large memory requirements of I/O and RMAN operations, the Large Pool is better able to satisfy those requirements than the Shared Pool.
Using a Large Pool allows the shared pool to primarily cache SQL and avoids the overhead caused by shrinking the SQL cache area.
JAVA Pool
The Java Pool holds Java execution code in a manner similar to the PL/SQL cache in the shared pool. The Java pool is used by many internal routines, such as import and export, and should be sized at approximately 60 megabytes if no other Java will be used by the user applications.
The Redo Log Buffer
The redo log buffer is a RAM area (defined by the initialization parameter log_buffer) that records changes made to data so that, if something fails, Oracle can reapply those changes during recovery. When Oracle SQL updates a table (through Data Manipulation Language, or DML), redo images are created and stored in the redo log buffer. Since RAM is faster than disk, this makes the recording of redo very fast.
The Oracle redo log buffer provides the following functions within the Oracle SGA:
- Serves for assistance with database recovery tasks
- Records all changes made to database blocks
- Buffers the recorded changes as redo entries destined for the redo logs
The database initialization parameter log_buffer defines the default size of the redo log buffer within Oracle 11g.
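To see the configured redo log buffer size and how much redo the instance has generated since startup, you can use, for example:

SQL> show parameter log_buffer

SELECT name, value
FROM v$sysstat
WHERE name IN ('redo entries', 'redo size');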
Redo log files:
Redo logs are transaction journals: each transaction is recorded in the redo logs. Redo logs are used in a serial fashion, with each transaction queuing up in the redo log buffers and being written one at a time into the redo logs. As a general rule, redo logs should switch about every thirty minutes; however, you may need to adjust the time up or down depending on the importance of your data. The rule of thumb is to size the redo logs such that you only lose the amount of data you can stand to lose should the online redo log become corrupt for some reason. With modern Oracle redo log mirroring, disk array mirroring, and various forms of online disk repair and replacement, the occurrence of redo log corruption has dropped to practically zero, so size based on the number of archive logs you want to apply should the database fail just before your next backup.
The LOG_BUFFER_SIZE and LOG_BUFFERS parameters control the redo log buffers. The LOG_BUFFER_SIZE should be set to reduce the number of writes required per redo log but not be so large that it results in an excessive IO wait time. Some studies have shown that sizing bigger than one megabyte rarely results in performance gains. Generally I size the LOG_BUFFER_SIZE such that it is equal to or results in an even divisor of the redo log size.
Monitor redo logs using the alert log and the V$LOGHIST, V$LOGFILE, V$RECOVERY_LOG, and V$LOG dynamic performance views.
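For example, the following queries list the online redo log groups, their sizes and status, and the member files that make up each group:

SELECT group#, sequence#, bytes, members, status
FROM v$log;

SELECT group#, member
FROM v$logfile
ORDER BY group#;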
Control files
Control files are created when the database is created. Control file size is determined by a number of initialization parameters and really shouldn't concern you. However, if you are using raw devices for control files, make sure the raw partition is sized to allow the control file to grow in Oracle8 and Oracle8i databases as backup information is added. Each database instance must have at least one control file or it will not start. Loss of all control files will cause the database to crash. Keep at least two copies on physically separate disk arrays or devices.
I suggest maintaining a script to rebuild your control files. The control file rebuild script should be recreated after each physical change to the database. The CREATE CONTROLFILE command is used to manually create a control file. However, the ALTER DATABASE BACKUP CONTROLFILE TO TRACE command is the preferred method to generate a script that can be used to rebuild the control file if it becomes damaged.
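A minimal sketch of both steps is shown below; the trace file is written to the trace/dump destination, and the control file names returned by V$CONTROLFILE will of course differ on your system:

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Database altered.

SQL> SELECT name FROM v$controlfile;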
Initialization File
Although not normally discussed as a database file, a database will not start up without a valid initialization parameter file. The file is only read at startup and contains the information required to set up the SGA, control file locations, trace and dump file locations, and setup parameters for the multi-threaded server, parallel query, and Oracle Parallel Server. I suggest keeping a copy in a secure place in case yours ever becomes corrupted, deleted, or damaged. The Oracle Reference, which is included in the Oracle8 and Oracle8i documentation set, provides descriptions of all initialization parameters. There are also undocumented initialization parameters that allow specialized behaviors to be configured. Both the documented and undocumented initialization parameters are covered in my book "ORACLE8i Administration and Management" from Wiley and Sons.
It should be obvious that the most important file regarding database setup and operation is probably the INIT.ORA, or initialization, file. This file contains the assignments for the database initialization parameters.
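A minimal, hypothetical init.ora is sketched below; the database name, file paths, and sizes are placeholders only and must be adapted to your system. The table that follows lists a selection of initialization parameters, their default values, and their descriptions.

# hypothetical example only - adjust the name, paths, and sizes for your site
db_name              = mydb
db_block_size        = 8192
control_files        = (/u01/oradata/mydb/control01.ctl, /u02/oradata/mydb/control02.ctl)
sga_max_size         = 1G
db_cache_size        = 512M
shared_pool_size     = 256M
log_buffer           = 1048576
background_dump_dest = /u01/app/oracle/admin/mydb/bdump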
NAME | VALUE | DESCRIPTION
---- | ----- | -----------
O7_DICTIONARY_ACCESSIBILITY | TRUE | V7 Dictionary accessibility Support (Migration only)
ALWAYS_ANTI_JOIN | NESTED_LOOPS | Always use this anti-join when possible
ALWAYS_SEMI_JOIN | standard | Always use this semi-join when possible
AQ_TM_PROCESSES | 0 | Number of AQ Time Managers to start
AUDIT_FILE_DEST | (PD) | Destination for audit files
AUDIT_TRAIL | NONE | Enable system auditing
BACKGROUND_CORE_DUMP | PARTIAL | Sets whether SGA is dumped with core file dump; PARTIAL means don't dump SGA
BACKGROUND_DUMP_DEST | (PD) | Detached process dump directory
BACKUP_TAPE_IO_SLAVES | FALSE | BACKUP tape I/O slaves
BITMAP_MERGE_AREA_SIZE | 1048576 | Maximum memory allowed for BITMAP MERGE
BLANK_TRIMMING | FALSE | Blank trimming semantics parameter
BUFFER_POOL_KEEP | 0 | Number of database blocks/latches in KEEP buffer pool
BUFFER_POOL_RECYCLE | 0 | Number of database blocks/latches in RECYCLE buffer pool
COMMIT_POINT_STRENGTH | 1 | Bias this node has toward not preparing in a two-phase commit
COMPATIBLE | 8.1.0 | Database will be compatible with this software version
CONTROL_FILE_RECORD_KEEP_TIME | 7 | Control file record keep time in days
CONTROL_FILES | (PD) | Control file names list
CORE_DUMP_DEST | (PD) | Destination for core dump files
CPU_COUNT | (PD) | Number of CPUs for this instance
CREATE_BITMAP_AREA_SIZE | 8388608 | Size of create bitmap buffer for bitmap index
CURSOR_SPACE_FOR_TIME | FALSE | Use more memory in order to get faster execution
DB_BLOCK_BUFFERS | 8000 | Number of database blocks cached in memory
DB_BLOCK_CHECKING | TRUE | Data and index block checking overrides events 10210 and 10211
DB_BLOCK_CHECKSUM | FALSE | Store checksum in db blocks and check during reads
DB_BLOCK_LRU_LATCHES | 1 or CPU_COUNT/2 | Number of LRU latches
DB_BLOCK_MAX_DIRTY_TARGET | DB_BLOCK_BUFFERS | Upper bound on modified buffers/recovery reads
DB_BLOCK_SIZE | (PD) | Size of database block in bytes
DB_DOMAIN | WORLD | Directory part of global database name stored with CREATE DATABASE
DB_FILE_DIRECT_IO_COUNT | 64 | Sequential I/O block count
DB_FILE_MULTIBLOCK_READ_COUNT | 8 | Db blocks read for each I/O
DB_FILE_NAME_CONVERT | NULL | Datafile name convert pattern and string for standby/clone database
DB_FILES | MAXDATAFILES | Max allowable number of db files
DB_NAME | (PD) | Database name specified in CREATE DATABASE
DB_WRITER_PROCESSES | 1 | Number of background database writer processes to start
DBLINK_ENCRYPT_LOGIN | FALSE | Enforce that passwords for distributed logins are always encrypted
DBWR_IO_SLAVES | 0 | Number of DBWR I/O slaves
DELAYED_LOGGING_BLOCK_CLEANOUTS | TRUE | Turns delayed block cleanout on or off
DISK_ASYNCH_IO | TRUE | Use asynch I/O for random access devices
DISTRIBUTED_TRANSACTIONS | (PD) | Max. number of concurrent distributed transactions
DML_LOCKS | 4*Trans. | DML locks - one for each table modified in a transaction
ENQUEUE_RESOURCES | Derived | Resources for enqueues
EVENT | NULL | Debug event control
FIXED_DATE | NULL | Fixed SYSDATE value
FREEZE_DB_FOR_FAST_INSTANCE_RECOVERY | FALSE | Freeze database during instance recovery (OPS)
GC_DEFER_TIME | 10 | How long to defer down converts for hot buffers (DFS)(OPS)
GC_FILES_TO_LOCKS | NULL | Mapping between file numbers and lock buckets (DFS)(OPS)
GC_RELEASABLE_LOCKS | 0 | Number of releasable locks (DFS)(OPS)
GC_ROLLBACK_LOCKS | 20 | Locks for the rollback segments (DFS)(OPS)
GLOBAL_NAMES | TRUE | Enforce that database links have same name as remote database
HASH_AREA_SIZE | 2*SORT | Size of in-memory hash work area
HASH_JOIN_ENABLED | TRUE | Enable/disable hash join
HASH_MULTIBLOCK_IO_COUNT | 1 | Number of blocks hash join will read/write at once
HI_SHARED_MEMORY_ADDRESS | 0 | SGA starting address (high order 32 bits on 64-bit platforms)
HS_AUTOREGISTER | TRUE | Enable automatic server DD updates in HS agent self-registration
IFILE | NULL | Include file in init.ora
INSTANCE_GROUPS | NULL | List of instance group names
INSTANCE_NAME | NULL | Instance name supported by the instance
INSTANCE_NUMBER | 0 | Instance number
JAVA_POOL_SIZE | 10000K | Size in bytes of the Java pool
JOB_QUEUE_INTERVAL | 60 | Wakeup interval in seconds for job queue processes
JOB_QUEUE_KEEP_CONNECTIONS | FALSE | Keep network connections between execution of jobs
JOB_QUEUE_PROCESSES | 0 | Number of job queue processes to start
LARGE_POOL_SIZE | 0 | Size in bytes of the large allocation pool (auto set at 600K)
LICENSE_MAX_SESSIONS | 0 | Maximum number of non-system user sessions allowed
LICENSE_MAX_USERS | 0 | Maximum number of named users that can be created in the database
LICENSE_SESSIONS_WARNING | 0 | Warning level for number of non-system user sessions
LM_LOCKS | 12000 | Number of locks configured for the lock manager (OPS)
LM_PROCS | 64 | Number of client processes configured for the lock manager (OPS)
LM_RESS | 6000 | Number of resources configured for the lock manager (OPS)
LOCAL_LISTENER | NULL | Local listener
LOCK_NAME_SPACE | NULL | Lock name space used for generating lock names for standby/clone database
LOCK_SGA | FALSE | Lock entire SGA in physical memory
Oracle Database Structure:
• A database is a collection of data files plus programs that manipulate those data files.
• Two types of information are stored in an Oracle database:
• User data, relevant to a particular application, and
• System data, which the database needs to manage itself.
Components:
• Database files – contain all database data
• Control files – contain info to access and manipulate the data
• Redo Logs – record all the transactions on the database
• Tablespaces – logical divisions of a database
• Segments – set of database blocks
• Extents – allocations of contiguous space on disk, expressed in bytes.
Database files:
• An Oracle database consists of one or more database files.
• These files contain data that can be accessed with the help of SQL.
• A file can be associated with one and only one tablespace.
• The database, when initially created, has only one physical file.
• The maximum number of files that can be opened at a time is 32 by default.
• One or more physical files form a logical unit called Tablespace.
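The association between physical data files and their tablespaces can be listed from the data dictionary, for example:

SELECT file_name, tablespace_name, bytes
FROM dba_data_files
ORDER BY tablespace_name;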
Control files:
• A database’s overall physical architecture is maintained by its control files.
• These record control information about all the files within the database.
• A control file is associated with a single database
• Control files are automatically modified by Oracle; a user cannot edit them.
• They are used to maintain internal consistency and guide recovery operations.
Redo Log files:
• A Redo Log contains all the transactions that have occurred against the database.
• No activity in the database can take place without being recorded in Redo Logs.
• The Redo Logs are necessary to protect against data loss.
• Redo Logs are also called transaction logs.
• Every database requires a minimum of two Redo Logs.
• These are used in recovery operations to restore lost or damaged files.
Oracle Processes:
The front-line support for Oracle backup and recovery is the Oracle architecture. One part
of this architecture is the processes related to the Oracle database. Although the Oracle
database has a number of processes, only a few really matter with respect to backup and
recovery and will be mentioned in this text. These processes are as follows:
SMON
PMON
CKPT
LGWR
DBWR
ARCH
User processes
Let’s discuss each of these processes next so you can better understand how they impact database recovery.
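Before looking at each process, you can check which background processes are actually running in your instance; a sketch using the V$BGPROCESS view (rows whose PADDR is not '00' correspond to started processes):

SELECT name, description
FROM v$bgprocess
WHERE paddr <> '00'
ORDER BY name;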
SMON (System Monitor):
SMON (System MONitor) is an Oracle background process created when you start a database instance. The SMON process performs instance recovery, cleans up after dirty shutdowns and coalesces adjacent free extents into larger free extents.
When you startup your Oracle database, Oracle will perform several checks to see if the database is healthy. SMON manages your database for you!
If Oracle finds a problem at startup time, the System Monitor process (SMON) will perform recovery activities. SMON also performs certain types of database maintenance. These activities occur in the background and have no real impact on normal database operations.
SMON performs multiple maintenance tasks, including the following:
• Creates and manages the temporary tablespace metadata
• Reclaims space used by orphaned temp segments
• Maintains the undo tablespace by on-lining, off-lining, and shrinking undo segments based on undo space usage statistics
• Cleans up the data dictionary when it is left in a transient and inconsistent state
• Maintains the SCN to time mapping table used to support Flashback
PMON (Process Monitor):
The Process Monitor process recovers failed process resources. If MTS (also called the Shared Server architecture) is being used, PMON monitors and restarts any failed dispatcher or server processes. In RAC, PMON's role as service registration agent is particularly important.
• The Process Monitor is the janitor of the database, cleaning up trash left over from aborted user sessions (for instance, when a client abnormally disconnects from the database). If this happens, it is the job of the Process Monitor (PMON) process to clean up after the failure. PMON will clean up memory areas and other database resources that were in use by that user process. PMON constantly checks the status of user and database processes. In some cases, failed database processes can be restarted by PMON.
LGWR Process
The log writer process (LGWR) is responsible for keeping the online redo logs up to date. The job of the LGWR process is to move redo from the volatile (nonpersistent) redo log buffer in the System (sometimes called Shared) Global Area (SGA) to the persistence of the online redo logs. A number of different things cause LGWR to wake up and write the redo data, among them commits and the redo log buffer filling to a certain point.
DBWR Process
The database writer process (DBWn) is responsible for writing to the database datafiles. This writing occurs during events called checkpoints. A database checkpoint may, in reality, happen at just about any time while the database is running. DBWR has very little to do with recovery of the database (other than supporting it by writing to the datafiles) because database datafile writes are often delayed, and the blocks within the datafiles themselves are not consistent with the current state of the data inside the SGA.
ARCH Process
The archiver process (ARCn) is responsible for the creation of archived redo logs. In a later section in this chapter on redo logs, we will discuss how redo logs are filled with redo. Once a redo log file fills, a log switch occurs and Oracle begins to write to the next online redo log. If the database is in ARCHIVELOG mode (see the section "NOARCHIVELOG and ARCHIVELOG Modes"), the ARCH process is responsible for copying that filled online redo log to one or more archive destinations.
In Oracle Database 11g, the ARCH process starts automatically. Oracle can also start more than one ARCH process if multiple redo logs need to be archived. For ARCH to work properly, you will need to configure the appropriate archiving locations (see "Configuring the Database for Backup and Recovery" later in this chapter for more). The ARCH process is so vital to backup and recovery of the database that if it cannot copy the archived redo logs to the mandatory archived log destinations, the database will eventually stall until the problem is corrected.
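You can confirm whether the database is in ARCHIVELOG mode and where archived logs are being written with the SQL*Plus ARCHIVE LOG LIST command (run AS SYSDBA); the output below is only a sample:

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST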
User Processes
At first glance, it might seem that the user processes are not all that important to backup and recovery. As you will see, user processes are actually an integral part of backup and recovery, since you have to be able to connect to the database instance to actually do a database backup or recovery.
Tablespaces:
• A database is divided into logical divisions called tablespaces
• A database may have one or more tablespaces
• Each logical tablespace corresponds to one or more physical database files.
• The typical tablespaces present in an Oracle database (11g) are:
SYSTEM Tablespace:
• Every Oracle database contains a tablespace named SYSTEM, which Oracle creates automatically when the database is created. The SYSTEM tablespace is always online when the database is open.
To take advantage of the benefits of locally managed tablespaces, you can create a locally managed SYSTEM tablespace, or you can migrate an existing dictionary managed SYSTEM tablespace to a locally managed format.
In a database with a locally managed SYSTEM tablespace, dictionary managed tablespaces cannot be created. It is possible to plug in a dictionary managed tablespace using the transportable tablespace feature, but it cannot be made writable.
• The SYSTEM tablespace always contains the data dictionary tables for the entire database. The data dictionary tables are stored in datafile 1.
Note: Once a tablespace is locally managed, it cannot be reverted back to being dictionary managed.
SYSAUX Tablespace:
• The SYSAUX tablespace is always created at database creation. The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle Database features and products that previously required their own tablespaces, it reduces the number of tablespaces required by the database. It also reduces the load on the SYSTEM tablespace.
You can specify only datafile attributes for the SYSAUX tablespace, using the SYSAUX DATAFILE clause in the CREATE DATABASE statement. Mandatory attributes of the SYSAUX tablespace are set by Oracle Database and include:
- PERMANENT
- READ WRITE
- EXTENT MANAGEMENT LOCAL
- SEGMENT SPACE MANAGEMENT AUTO
You cannot alter these attributes with an ALTER TABLESPACE statement, and any attempt to do so will result in an error. You cannot drop or rename the SYSAUX tablespace.
Undo Tablespaces:
• Undo tablespaces are special tablespaces used solely for storing undo information. You cannot create any other segment types (for example, tables or indexes) in undo tablespaces. Each database contains zero or more undo tablespaces. In automatic undo management mode, each Oracle instance is assigned one (and only one) undo tablespace. Undo data is managed within an undo tablespace using undo segments that are automatically created and maintained by Oracle.
When the first DML operation is run within a transaction, the transaction is bound (assigned) to an undo segment (and therefore to a transaction table) in the current undo tablespace. In rare circumstances, if the instance does not have a designated undo tablespace, the transaction binds to the system undo segment.
• Each undo tablespace is composed of a set of undo files and is locally managed. Like other types of tablespaces, undo blocks are grouped in extents and the status of each extent is represented in the bitmap. At any point in time, an extent is either allocated to (and used by) a transaction table, or it is free.
Temporary Tablespace:
• When the SYSTEM tablespace is locally managed, you must define a default temporary tablespace when creating a database. A locally managed SYSTEM tablespace cannot be used for default temporary storage.
• If SYSTEM is dictionary managed and you do not define a default temporary tablespace when creating the database, then SYSTEM is still used for default temporary storage. However, you will receive a warning in the ALERT.LOG saying that a default temporary tablespace is recommended and will be necessary in future releases.
USERS Tablespace:
• Stores user (application) data; objects created by database users are typically placed here.
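To see which tablespaces exist in a given database and how they are managed, query the data dictionary, for example:

SELECT tablespace_name, contents, extent_management, segment_space_management
FROM dba_tablespaces
ORDER BY tablespace_name;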