

Teach Yourself Oracle 8 In 21 Days


- Appendix A -
Answers

Day 1: "Starting Out with Oracle"

1. A DBMS (Database Management System) consists of the program and utilities used to manage your data. The database is the actual data. A DBMS is used to control and manipulate a database.

2. A DDL (Data Definition Language) statement is used to manipulate or define the storage in the database. These statements are used to create and destroy tables, tablespaces, datafiles, and so on.

3. A DML (Data Manipulation Language) statement is used to manipulate the actual data that is stored in the database. These statements are used to input, change, delete, or retrieve data that is being stored in the database.

4. An OLTP (Online Transaction Processing) system is used to service online users who are inputting, changing, or deleting data from the database. An OLTP system is typically characterized by large numbers of online users.

5. A DSS, or Decision Support System, is used to retrieve valuable decision-making information from a database. The DSS is characterized primarily by read-only queries. The DSS typically has very few users but long-running operations.

6. Duties of a DBA might include software and hardware installation, configuration of new hardware and software, security, performance tuning, backup and recovery, routine maintenance, troubleshooting, and failure recovery.

7. The most important duty of an Oracle DBA is to maintain the integrity and reliability of the user's data. This means keeping the system running and properly backed up.

Day 2: "Exploring the Oracle Architecture"

1. The three types of files that make up an Oracle database are datafiles, control files, and redo log files.

2. The Oracle instance consists of the user processes, the Oracle background processes, and the shared memory that they use.

3. Memory is much faster than disk--sometimes 50 times faster.

4. New features of Oracle8 include partitioned tables, index-only tables, and improved parallelism.

Day 3: "Installing Oracle Software"

1. The Oracle8 RDBMS is the main component of Oracle8 server.

2. Intelligent agents allow the Oracle Enterprise Manager to communicate with the Oracle RDBMS.

3. Oracle utilities include the SQL*Loader, the Export utility, and the Import utility.

4. TCP/IP, SPX/IPX, named pipes, and DECNet are all protocols supported by SQL*Net.

Day 4: "Properly Sizing Your Database and Planning for Growth"

1. You should set aside 500,000 bytes of memory per concurrent user.

2. The individuals you need at a sizing effort are the database designer, the application designer, and a management-staff representative.

3. A good database block size for an OLTP application would be 2,048 bytes.

4. Any cache-hit percentage above 90% is good. A cache-hit percentage above 90% means that 90% or more of the reads are satisfied from memory. Logical (memory) reads are much faster than physical (disk) reads.

5. RAID 1 uses a factor of two times the number of writes to calculate the physical writes. RAID 5 uses a factor of two times the number of writes to calculate the additional reads, and two times the number of writes to calculate the physical writes.

6. You need to know the database size, the I/Os per second, and the RAID type that you will be using.

7. None. There will always be some page faulting, but you can minimize it.

8. The maximum CPU utilization in a steady-state situation is 75%.

9. The maximum disk utilization in a steady-state situation is 85%.

10. You should allocate 5.5MB for the Oracle8 executable.

Day 5: "Managing the Oracle RDBMS"

1. When a database is created, the SYS and SYSTEM accounts are automatically created.

2. Installing software; creating databases, tablespaces, tables, views, indexes, and accounts; monitoring space and performance; and maintaining the integrity of the data in the database are all duties of the DBA.

3. Implementing a sound backup and recovery strategy.

4. OSDBA and OSOPER are two OS roles created when the database is created.

5. This is the instance that is created under NT with the NT Instance Manager or with ORADIM80.EXE. This service allows you to start and stop the Oracle instance via the Oracle tools.

6. The TNSNAMES.ORA file is where information is kept about what Oracle services you can connect to. It also allows aliasing to occur.

7. The TNSNAMES.ORA file can be administered through the Oracle Network Configuration wizard.

8. The system state can be modified via the ALTER SYSTEM command.
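For example, from a DBA account, a minimal sketch of two common ALTER SYSTEM operations:
-- Force a redo log switch:
ALTER SYSTEM SWITCH LOGFILE;
-- Force a checkpoint:
ALTER SYSTEM CHECKPOINT;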

Day 6: "Administering Databases and Datafiles"

1. You should not run more than 60-70 I/Os per second on a standard SCSI disk drive. Pushing more I/Os than this can cause disk latencies to increase beyond recommended limits.

2. The SID is the system identifier. This environment variable is used to determine which database to connect to.

3. No, you can create or alter a data file to autoextend. This is a new feature in Oracle.
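A minimal sketch of enabling autoextend on an existing datafile (the file name, increment, and maximum size shown here are hypothetical):
ALTER DATABASE DATAFILE 'C:\ORANT\DATABASE\USR1ORCL.ORA'
  AUTOEXTEND ON NEXT 10M MAXSIZE 500M;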

4. Yes, the Oracle RDBMS supports more than one instance running on a server.

5. Each SID uniquely identifies one database; therefore you can only create one database per SID. With the Oracle Parallel Server option, you can have multiple instances (and SIDs) accessing the same database, but for each system it appears as though there is a one-to-one relationship between the SID and the database.

6. There is no firm limit to the number of databases that can be created on one system. The system resources that each database consumes will be the limiting factor.

Day 7: "Administering Tablespaces"

1. An extent is the unit of space that a schema object allocates when it is created or grows. As a table increases in size and fills up all its space, another extent is allocated. An extent is a contiguous group of blocks.

2. The limits on the tablespace are OS dependent. A tablespace can consist of up to 1,022 datafiles. Under Windows NT, a datafile can be 8GB with a 2KB block size or 32GB with an 8KB block size, making the maximum size of a tablespace 32TB.

3. You can create more datafiles or alter a datafile to autoextend. This is a new feature in Oracle.

4. A tablespace can be online or offline.

5. You can have any number of tablespaces in a database. Tablespaces are usually split by function. Accounting might have a different tablespace from finance, and so on.

6. The four types of segments are data, index, rollback, and temporary.

Day 8: "Administering Redo Logs, Control Files, and Rollback Segments"

1. Oracle requires at least two redo log files. This allows the log switch to occur and archiving to happen while logging is also happening.

2. A log file group is a set of identical redo log files. This provides redundancy to protect against media failure.

3. The control file is used to keep information about the structure of the database. When the instance is started, the control file is used to identify the database files.

4. Yes. In fact, it is a good idea to have more than one control file in case of media failure.

5. Without the control file, Oracle will not know which datafiles to open. Oracle cannot start without a control file.

6. The ALTER DATABASE [database] BACKUP CONTROLFILE command is used to back up the control file. Using the TO TRACE qualifier generates the SQL statements necessary to reproduce the control file.
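For example (the backup file name here is hypothetical):
-- Write a binary copy of the control file:
ALTER DATABASE BACKUP CONTROLFILE TO 'C:\BACKUP\CONTROL.BKP';
-- Write a trace file containing the SQL needed to re-create it:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;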

7. Yes, a log switch forces a checkpoint.

8. No, a checkpoint does not cause a log switch.

9. Just as the name indicates, a rollback segment is used to store change information that can be used in the event of a rollback. It is also used for read consistency.

10. The rollback segment dynamically allocates and deallocates space. If the OPTIMAL value is set, the rollback segment will try to stay that size.
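A sketch of creating a rollback segment with an OPTIMAL setting; the segment name, tablespace, and sizes are hypothetical:
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs
  STORAGE (INITIAL 100K NEXT 100K MINEXTENTS 20 OPTIMAL 2M);
ALTER ROLLBACK SEGMENT rbs01 ONLINE;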

Day 9: "Managing Data"

1. Export is used to place a database's content and structure information into a binary export file.

2. Import is used to take data from an export file and use that data to re-create the database's content and structure.

3. SQL*Loader is a very flexible tool that is used to load ASCII or flat-file data into an Oracle database.

4. Only the Import utility can load export files.

5. Export/Import can be used to back up and reorganize the database, and to transfer data between systems.

6. The Windows NT version of the Export program is called EXP80.

7. The Windows NT version of the Import program is called IMP80.

8. The Windows NT version of the SQL*Loader program is called SQLLDR80.

9. In a fixed-record load file, each field has the same length in every record. In a variable-record datafile, the same field can have a different length from record to record.

10. The parameter file is a file that contains additional command-line parameters.
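As a sketch, the NT utilities named above might be invoked from the command line as follows; the accounts, dump file, control file, and parameter file names are hypothetical, and PARFILE points to the parameter file described in answer 10:
EXP80 USERID=system/manager FULL=Y FILE=FULLDB.DMP
IMP80 USERID=system/manager FULL=Y FILE=FULLDB.DMP
SQLLDR80 USERID=scott/tiger CONTROL=LOAD.CTL PARFILE=LOAD.PAR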

Day 10: "Administering User Accounts"

1. No, the user accounts are individual. Sometimes the DBA might be auditing the system and will be able to tell what users have been active and when. If many people use the same account, the DBA cannot tell who is using the system.

2. Each user should have only one account. In cases where a user is serving dual duties, you might want to assign individual accounts for each duty.

3. System resources are assigned through profiles. You would set the CPU per session through a profile.
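A sketch, assuming a hypothetical profile named clerk and a user named jones:
CREATE PROFILE clerk LIMIT
  CPU_PER_SESSION   10000   -- in hundredths of a second
  SESSIONS_PER_USER 2
  IDLE_TIME         30;
ALTER USER jones PROFILE clerk;
Note that profile resource limits are enforced only when the RESOURCE_LIMIT initialization parameter is set to TRUE.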

4. Permissions to access certain utilities or tables are assigned through system privileges. This may be assigned through a role also.

5. The Admin option allows you (as the grantee) to grant a role or system privilege to other users or roles.

6. Roles are removed via Enterprise Manager, Security Manager, or the DROP ROLE command.

7. Privileges are removed from roles via either Security Manager or the REVOKE command.
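In SQL, answers 5 through 7 might look like this (the role name and privileges are hypothetical):
CREATE ROLE appl_user;
GRANT CREATE SESSION, CREATE TABLE TO appl_user WITH ADMIN OPTION;
REVOKE CREATE TABLE FROM appl_user;
DROP ROLE appl_user;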

8. No. Quotas are assigned to a user directly. A quota is given to a user based on tablespaces.

9. Temporary tablespaces as well as default tablespaces are assigned to a user. These values can be modified via Enterprise Manager, Security Manager, or the ALTER USER command.
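In SQL, for example (the user and tablespace names are hypothetical); this also shows the per-tablespace quota from answer 8:
ALTER USER jones
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA 50M ON users;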

10. CREATE LIKE is a shortcut that copies a user's properties to a blank user. This lets you copy users as templates. It is a very nice feature to use.

Day 11: "Managing Processes"

1. The DBWR (database writer) process is used to handle all the writes to disk. Whereas the shadow processes can actually read directly from the datafiles if necessary, only the DBWR can write to the datafiles.

2. The LGWR (log writer) process is used to handle all the log writes. The server processes log write information into the log buffer, and the log writer reads from the log buffer and writes to disk.

3. The CKPT (checkpoint) process is used to help the DBWR process when enabled.

4. If you are running with the Oracle Parallel Server option, you'll have LCKn processes running in your system.

5. A three-tier system includes a middleware piece. This middleware piece is typically a transaction monitor (TM).

6. The I/O subsystem is typically the limiting factor in your system.

7. The order of precedence is as follows: Hints are used first, then the table definition, and finally, if neither of those is specified, the initialization parameters are used.

8. No, even if you have only one CPU, you'll see a benefit from using the Parallel Query option. Without the parallel query, your CPU might be mostly idle, waiting for I/Os to complete.

9. Under Windows NT, the Oracle server uses threads. In most of this book, however, the term process refers to both processes and threads.

10. The multithreaded server takes most of its memory from the shared pool.

Day 12: "Working with Tables, Views, and Synonyms"

1. A table's columns define the order and data types of the fields in each Oracle record. A column is the placeholder for a particular data type in every row.

2. A row in a table is an individual record. A row is the specific data that has been added to the database.

3. A row piece consists of a row header and the row data.

4. An object consists of an attribute and a method.

5. A nested table is a table that appears as a column in another table.

6. A partitioned table is a table where the data is divided into smaller pieces based on the data itself.

7. Oracle8 supports range partitioning. The data is divided into various chunks based on ranges of data in one or more columns.
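A minimal range-partitioned table sketch; the table, columns, dates, and tablespaces are hypothetical:
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(10,2))
PARTITION BY RANGE (sale_date)
 (PARTITION sales_q1 VALUES LESS THAN (TO_DATE('01-APR-1997','DD-MON-YYYY'))
    TABLESPACE ts_q1,
  PARTITION sales_q2 VALUES LESS THAN (TO_DATE('01-JUL-1997','DD-MON-YYYY'))
    TABLESPACE ts_q2);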

8. A view is a logical representation of a subset or superset of information in schema objects.

9. When creating a table, the STORAGE clause is used to define how the table will grow. Because tables are made of extents, the STORAGE clause defines the extent growth parameters.
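For example (the table, tablespace, and sizes are hypothetical):
CREATE TABLE orders (
  order_id    NUMBER,
  customer_id NUMBER,
  order_date  DATE)
  TABLESPACE users
  STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0 MAXEXTENTS 100);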

10. A synonym is simply an alias to another object in the database. This object can be a table, view, sequence, or program unit.
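For example, a hypothetical public synonym that hides the owning schema:
CREATE PUBLIC SYNONYM emp FOR scott.emp;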

Day 13: "Using Indexes and Sequences"

1. There are no limits to the number of columns or combinations of columns that can be indexed. However, the more indexes you have, the more overhead there is in maintaining them.

2. Yes, the parallel-index creation facility can greatly enhance the performance of the index-creation operation.

3. A B*-tree index is a binary tree structure that can be used to find data quickly. A binary comparison is made and the tree is traversed based on that selection.

4. Yes. In fact, there are two ways to partition an index. A local partitioned index is an individual index that resides on a partition and indexes the data on that partition, whereas a global partitioned index is a single index over all of the partitioned data. A global partitioned index can in turn be partitioned itself.

5. An index-only table is an index where the column data resides in the leaf block of the index. This schema can be very useful under certain conditions.
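A sketch of an index-only (index-organized) table; the table and columns are hypothetical, and a primary key is required:
CREATE TABLE lookup_codes (
  code  NUMBER PRIMARY KEY,
  descr VARCHAR2(40))
  ORGANIZATION INDEX;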

6. A bitmap index is an index where the index values are kept in a bitmap. A 1 indicates that the key value is present; a 0 indicates that the key value is not present.
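For example, on a hypothetical low-cardinality column:
CREATE BITMAP INDEX emp_gender_idx ON employees (gender);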

7. A sequence can be used to provide new account numbers, order numbers, invoice numbers, and so on. Any number that must be unique is a good candidate for a sequence.
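A sketch of creating and using a sequence for order numbers (the sequence and table names are hypothetical):
CREATE SEQUENCE order_seq START WITH 1000 INCREMENT BY 1;
SELECT order_seq.NEXTVAL FROM dual;
INSERT INTO orders (order_id, order_date) VALUES (order_seq.NEXTVAL, SYSDATE);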

8. A sequence can be as many as 38 digits in length--bigger than you will ever need.

9. An index does not need to be unique, but it can be.

10. An index can be used to enforce uniqueness on a column or set of columns.

Day 14: "Using Oracle Clusters, Stored Procedures, and Database Links"

1. If two or more tables are always accessed via a join operation, these tables are ideal candidates for clustering.

2. Criterion 1: The cluster key value is unique. Criterion 2: The majority of queries are equality queries on the cluster key.

3. By using a cluster on a set of tables that are primarily accessed via a join operation, the data from all the tables that will be used together is read into the SGA together, thus reducing I/O and improving performance.

4. Clustering is not a good idea if you are doing a lot of inserts. Because the data from the clustered tables is stored together, an insert causes more overhead than an insert into a standard table. Also, if you do not usually access the clustered tables with joins on the cluster key, you will see no benefit from clustering.
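A minimal index-cluster sketch; the cluster, tables, and sizes are hypothetical. The cluster index must be created before data is inserted:
CREATE CLUSTER ord_clu (order_id NUMBER)
  SIZE 512 TABLESPACE users;
CREATE INDEX ord_clu_idx ON CLUSTER ord_clu;
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE)
  CLUSTER ord_clu (order_id);
CREATE TABLE order_items (
  order_id NUMBER,
  item_no  NUMBER,
  qty      NUMBER)
  CLUSTER ord_clu (order_id);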

5. A stored procedure is a set of SQL statements that are pre-parsed and stored in the database. When the stored procedure is invoked, only the input and output data is passed; the SQL statements are not transferred or parsed.

6. By having the SQL pre-parsed and stored in the database, less data needs to be transmitted, less work is done by Oracle, and a better library cache hit rate is achieved, all resulting in better performance.

7. Stored procedures typically perform more than just database operations; conditionals and logical operations are usually a big part of stored procedures.
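A minimal PL/SQL sketch, assuming a SCOTT-style emp table with empno and sal columns:
CREATE OR REPLACE PROCEDURE raise_salary (
  p_empno IN NUMBER,
  p_pct   IN NUMBER) AS
BEGIN
  UPDATE emp
     SET sal = sal * (1 + p_pct / 100)
   WHERE empno = p_empno;
END raise_salary;
/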

8. Database links are used to simplify access to a remote database and to hide network details from the user. When you have provided a link, users can easily select data from a database on another server elsewhere in the network.
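A sketch; the link name, account, and TNS alias (which must exist in TNSNAMES.ORA) are hypothetical:
CREATE DATABASE LINK sales_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'SALES';
SELECT * FROM emp@sales_link;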

Day 15: "Managing Job Queues and Using Oracle Auditing"

1. The job queues are used to defer the execution of database activity. This can be done in order to schedule repeating activities or to defer activities until off-hours.

2. Various operations such as backups, exports, imports, and SQL statements can be scheduled.

3. OS operations can also be scheduled using the job queue.

4. It is not necessary for a new account to be created, but it is not a bad idea.

5. The NT user right "Log on as a batch job" must be enabled for the user who will be running the batch queue.

6. Yes, queued jobs can run on any system that is properly configured for the Enterprise Manager.

7. Jobs can be scheduled to run once, or at a regular interval that can be daily, weekly, or monthly.

8. You need to make sure that the preferred credentials (account information) are properly set up in the Enterprise Manager in order for job queuing to work.

9. Auditing might need to be enabled if you suspect some suspicious behavior on your system.

10. Auditing can consume a lot of system resources. Be careful what you audit and for how long.
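As a sketch, auditing of logons and of a hypothetical table can be enabled and disabled with the AUDIT and NOAUDIT commands (the AUDIT_TRAIL initialization parameter must also be set for audit records to be written):
AUDIT SESSION;
AUDIT SELECT, INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;
-- When finished:
NOAUDIT SESSION;
NOAUDIT SELECT, INSERT, UPDATE, DELETE ON scott.emp;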

Day 16: "Understanding Effective Backup Techniques"

1. An instance recovery occurs when Oracle recovers from a disorderly shutdown. Instance recovery is automatic as long as no datafiles are damaged.

2. The loss of a log file cannot be recovered from, but it does not by itself cause a system failure. If a log file is damaged and the datafiles are okay, you should shut down your system and perform an immediate backup.

3. Yes, if a datafile is damaged it can be restored from a backup, and the changes can be restored from the redo log files and archive log files.

4. If you are not running in ARCHIVELOG mode, you can perform an instance recovery but you cannot recover any changes that are older than the online redo log files. If a datafile is damaged, you cannot recover to this point, only to the last full backup.

5. The different types of backups that can be done are full backups, tablespace backups, archive log backups, control file backups, and datafile backups.

6. To completely back up your system, you must back up the datafiles and the control files. If you are running in ARCHIVELOG mode, you should also back up the archive log files.

7. RAID stands for Redundant Array of Inexpensive Disks. This hardware or software allows you to stripe, mirror, or apply other fault-tolerant layouts to your disks.

8. No, the NT Backup Manager can only back up the local system.

9. The archive log files are essentially a copy of the redo log file after a log switch. When a log switch occurs, the old redo log file is copied to an archive log file.

10. An offline backup occurs when the Oracle instance is shut down. An online backup occurs when the instance is running.

Day 17: "Recovering the Database"

1. Types of recovery operations include instance recovery, media recovery, and point-in-time recovery.

2. All committed transactions can be recovered from an instance failure.

3. All committed transactions can be recovered from a media failure if you are running in ARCHIVELOG mode.

4. ARCHIVELOG mode causes all redo log files to be saved whenever a log switch occurs. These archived log files can be used to restore the database if necessary.
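A sketch of enabling ARCHIVELOG mode from a tool such as Server Manager (the database must be mounted but not open); automatic archiving is typically enabled with the LOG_ARCHIVE_START initialization parameter:
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;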

5. A dirty buffer is a buffer in the SGA that has been modified but has not been written to disk.

6. A checkpoint is a process that periodically causes all dirty buffers to be written to disk.

7. The database and online redo log files are necessary for instance recovery.

8. The database, the online redo log files, the backup files, and the archive log files are all necessary for media recovery.

9. A point-in-time recovery operation recovers only up to a specified point in time. This time can be specified as a time or a system change number (SCN).

10. A point-in-time recovery is used to recover up to a time just before a software or operator problem corrupted the database. This prevents the recovery process from running the command that corrupted the database in the first place.
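A sketch of a point-in-time recovery; the timestamp is hypothetical, and the database must be mounted (not open) with the restored backup files in place:
RECOVER DATABASE UNTIL TIME '1997-12-01:14:30:00';
ALTER DATABASE OPEN RESETLOGS;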

Day 18: "Administering Oracle Replication"

1. A read-only table snapshot is a picture of a database or table that is copied to another system. The system from which the copy is made is called the master. The replicated system has the table or database available only in read-only mode.

2. An updateable snapshot is a replication method where both the master and the replication systems are updateable. The replicated systems synchronize themselves on a regular basis.

3. Realtime data replication causes all systems to be updated immediately after the change is made. This replication method needs all systems to be up and running for it to work effectively.

4. The master site is the system from which the data is replicated. Because it is updateable and holds the original data, it is considered the master.

5. The snapshot site is the system that receives the snapshots. If the snapshot site is read-only, no updates are allowed and the site is considered a slave site.

6. Read-only table snapshots can be used in a variety of applications: retail price lists, lookup tables, and so on.

7. Advanced replication can be used for sites that need remote updateable databases and as a failover system.

8. With a complete refresh, the entire snapshot is updated. With a fast refresh, only the changes are updated.
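A read-only snapshot with a fast refresh might be sketched as follows; the snapshot name, master table, database link, and refresh interval are hypothetical, and a fast refresh requires a snapshot log on the master table:
CREATE SNAPSHOT price_list
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1
  AS SELECT * FROM prices@master_db;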

9. A standby database is a database that is constantly in recovery mode, recovering archive log files from the primary database. In the event of a failure, the standby database can immediately substitute for the primary database. The standby database is used to provide immediate restoration of service in the event of a primary system failure. In a matter of minutes or even seconds, this system can be operational.

10. A read-only tablespace can be used to prevent users from updating critical data as well as to reduce recovery time, because no instance recovery is needed on a read-only tablespace.
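For example (the tablespace name is hypothetical):
ALTER TABLESPACE history READ ONLY;
-- To make it writable again:
ALTER TABLESPACE history READ WRITE;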

Day 19: "Advanced Oracle Options"

1. The Oracle Parallel Server option can be used as a fault-tolerant RDBMS solution, a performance-enhancement option, or both.

2. The server interconnect provides two functions: to communicate locking information and to act as a system heartbeat.

3. The shared-disk subsystem is what allows OPS to work. Each member of the cluster must be able to access all datafiles and redo log files at all times. Thus, the disk subsystem is shared.

4. DLM stands for Distributed Lock Manager. The DLM passes lock information between members of the cluster.

5. PCM stands for Parallel Cache Management. This term is used primarily to describe the Parallel Cache Management (PCM) locks. These locks are used to keep the various nodes in the cluster from overwriting each other by locking data that is being modified.

6. The Parallel Query option is used to parallelize certain SQL operations in order to improve performance.

7. The Parallel Query option improves performance by allowing multiple threads of execution to perform one task. Because the operation is split, the CPU(s) can continue processing while the system waits for I/Os to complete.

8. The best way to set the degree of parallelism is via a hint in the SQL statement.

9. The secondary way to set the degree of parallelism is to set the parallelism on the table with the ALTER TABLE command.
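A sketch of both methods, assuming a hypothetical sales table:
-- Preferred: a hint in the SQL statement
SELECT /*+ PARALLEL(sales, 4) */ COUNT(*) FROM sales;
-- Alternative: set a default degree of parallelism on the table
ALTER TABLE sales PARALLEL (DEGREE 4);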

10. Parallelizable operations include table scans, joins, and the recovery operation.

Day 20: "Effectively Tuning and Optimizing the Database"

1. System memory, CPU, and the I/O subsystem are all potential areas of bottlenecks.

2. You can either query the V$ tables directly or use the UTLBSTAT and UTLESTAT tools to determine the shared pool cache-hit ratio.

3. I recommend that a disk drive doing random I/Os should not be pushed harder than 60-70 I/Os per second.

4. I recommend that a disk drive doing sequential I/Os should not be pushed harder than 90-100 I/Os per second.

5. Latency refers to response time. Disk latency is how long it takes for an I/O to complete. Remember the knee-of-the-curve theory from Day 4.

6. The buffer cache-hit ratio is determined from this equation:
Cache hit ratio = 1 - (PHYSICAL READS / (DB BLOCK GETS + CONSISTENT GETS))
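The same ratio can be computed directly from the V$SYSSTAT view, for example:
SELECT 1 - (phy.value / (db.value + con.value)) "Buffer cache hit ratio"
  FROM v$sysstat phy, v$sysstat db, v$sysstat con
 WHERE phy.name = 'physical reads'
   AND db.name  = 'db block gets'
   AND con.name = 'consistent gets';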

7. You can determine whether the shared pool is large enough by examining both the data dictionary cache-hit ratio and the library cache-hit ratio.

8. The effectiveness of an index depends on the index being present and the application forming SQL statements that can take advantage of that index.

9. Always run UTLBSTAT.SQL first. This script sets the system up to calculate data with UTLESTAT.SQL.

10. If you have a system that is performing DSS-type queries and large table scans, you could benefit from a large block size. Also, if you have large rows, you might benefit from a larger block size.

Day 21: "Exploring the Web Publishing Assistant and Network Computing Architecture"

1. NCA stands for the Network Computing Architecture and is a standard for computing over the network. The NCA was developed in conjunction with Oracle.

2. CORBA stands for the Common Object Request Broker Architecture and is a specification for object communications.

3. HTTP, or Hypertext Transfer Protocol, is the protocol used by Web browsers to communicate with Web servers.

4. HTML, or Hypertext Markup Language, is the language used to define Web pages.

5. A cartridge is a plug-in object that performs an application function. Cartridges usually serve one function but work together with other cartridges to serve many functions.

6. A static Web page is one that does not dynamically change. It only changes when it is replaced.

7. The Web Publishing Assistant can create only static Web pages.

8. NC stands for Network Computer. This computer is designed to access the World Wide Web and run network applications.

9. The Oracle Universal Server is designed to handle all traditional data types as well as new data types such as video, audio, text, and spatial data.

10. Any type of application that requires access to a central server would be suitable for running on an NC because no local data storage is required.




