Part Number: AAQFAMCTE
February, 1997
Revision/Update Information: This is a revised manual.
Operating System and Version: DIGITAL UNIX V3.2C through 3.2G
Software Version: Version 1.5
© Perceptics Corporation 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995
Portions © Digital Equipment Corporation, 1997. All rights reserved.
© Digital Equipment Corporation 1997. All rights reserved.
The information in this publication is subject to change without notice and should not be construed as a commitment by Digital Equipment Corporation. Digital Equipment Corporation assumes no responsibility for any errors that may appear in this document.
Possession, use, or copying of the software described in this publication is authorized only pursuant to a valid written license from Digital Equipment Corporation, an authorized sublicensor, or the identified licensor.
Digital Equipment Corporation makes no representations that the use of its products in the manner described in this publication will not infringe on existing or future patent rights, nor do the descriptions contained in this publication imply the granting of licenses to make, use, or sell equipment or software in accordance with the agreement.
The following are trademarks of Digital Equipment Corporation: Alpha, DEC, DECchip, Digital, DIGITAL, DIGITAL UNIX, StorageWorks, ThinWire, and the DIGITAL logo.
The following are third-party trademarks: LaserStar, LaserWare, and WORMS-11 are registered trademarks of Perceptics Corporation. UNIX is a registered trademark licensed exclusively by X/Open Company Ltd. Microsoft and MS-DOS are registered trademarks of Microsoft Corporation.
All other trademarks and registered trademarks are the property of their respective holders.
Table of Contents
DIGITAL Optical Storage Management Software
Description of the OSMS Package
Using Optical Storage Management Software (OSMS)
Installation and Configuration
Disk Space and Memory Requirements
Rebuilding the DIGITAL UNIX Kernel
Verifying OSMS Installed Correctly
Registering OSMS Using LMF PAK
Restarting the DIGITAL UNIX Kernel
Verifying Optical Hardware Configuration
Restrictions and Unsupported Utilities
Jukebox Mount Utility (jmount)
Optical File System Check Utility (ofsck)
Optical File System Daemon (ofsd)
Optical File System File Access (ofile)
Optical File System Find Utility (ofind)
Optical File System Index Map Utility (omap)
Optical File System Link Utility (olink)
Optical File System Mounting Table (ofstab)
Optical File System Mount Utility (omount)
Preface
Purpose
This guide describes how to install and operate the DIGITAL Optical Storage Management Software (OSMS) package.
Audience
This guide is for system managers and others who perform operations and system management tasks.
Structure of this Guide
This guide is organized in the following manner:
Chapter 1, DIGITAL Optical Storage Management Software - Introduces the OSMS software and its device drivers, and provides the technical specifications and descriptions of the Optical File System (OFS) structures and implementation.
Chapter 2, Description of the OSMS Package - Lists the directories created and the system files that are saved and modified as a result of the installation and provides software file descriptions.
Chapter 3, Using Optical Storage Management Software (OSMS) - Provides information regarding the naming conventions for jukebox resources.
Chapter 4, Installation and Configuration - Provides software installation, configuration, and reconfiguration procedures. Also included are the rebuilding and restarting procedures for the DIGITAL UNIX kernel, and the kit removal procedure.
Chapter 5, Restrictions and Unsupported Utilities - Describes the restrictions and unsupported utilities of the current software release.
Chapter 6, Utility Descriptions - Describes each of the OSMS utilities and their related command option switches.
Conventions
The following conventions are used in this guide:
Convention | Description |
UPPERCASE and lowercase |
The DIGITAL UNIX system differentiates between lowercase and uppercase characters. Literal strings that appear in text, examples, syntax descriptions, and function descriptions must be typed exactly as shown. |
user input |
This bold typeface is used in interactive examples to indicate typed user input. In text, this typeface is used to introduce new terms. |
system output |
This typeface is used in interactive and code examples to indicate system output. In text, this typeface is used to indicate the exact name of a command, option, partition, pathname, directory, or file. |
% |
The default user prompt is your system name followed by a right angle bracket (>). In this manual, a percent sign (%) is used to represent this prompt. |
# |
A number sign is the default superuser prompt. |
Ctrl/X |
In procedures, a sequence such as Ctrl/X indicates that you must hold down the key labeled Ctrl while you press another key (X) or a pointing device button. |
This chapter introduces the DIGITAL Optical Storage Management Software (OSMS), provides the technical specifications, the Optical File System (OFS) structures and implementation descriptions, and provides a brief description of the OSMS drivers.
DIGITAL Optical Storage Management Software
The OSMS package is an implementation of the Optical File System (OFS), which is designed to support optical jukeboxes in the UNIX operating environment. It is comparable to and compatible with the standard UNIX File System (UFS) within the constraints of a Write-Once Read-Many (WORM) device. OFS supports both WORM and rewritable media.
Any program that runs under the standard UNIX File System (UFS) with a magnetic disk will run with a WORM device using OSMS. This includes all the standard UNIX utilities (such as cd, cp, mv, ln, ls, ex, vi, cc, as, ar, and ld) and system calls (such as creat, link, mkdir, chdir, chmod, chown, chgrp, open, read, write, close, rmdir, and unlink).
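The transparency described above can be illustrated with ordinary shell commands. The sketch below uses a scratch directory as a stand-in for a mounted OFS volume (an actual mount point would depend on your configuration), so it can be tried without optical hardware:

```shell
# A scratch directory stands in for a mounted OFS volume; on a real system
# the same commands would be issued against the optical mount point.
OFS_DEMO=$(mktemp -d)

mkdir "$OFS_DEMO/src"                             # mkdir works as on UFS
echo "hello" > "$OFS_DEMO/src/a.txt"              # ordinary write
cp "$OFS_DEMO/src/a.txt" "$OFS_DEMO/src/b.txt"    # cp works unchanged
ln "$OFS_DEMO/src/a.txt" "$OFS_DEMO/src/a.lnk"    # hard links are supported
ls "$OFS_DEMO/src"                                # lists a.lnk a.txt b.txt
```

The point of the sketch is only that no special commands are involved: the same utilities and system calls used on a magnetic disk apply to an OFS volume.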
Programs and libraries may reside on the optical disk. Optical File Systems may be exported, remotely mounted, and accessed through the Network File System (NFS) in a similar manner to native UNIX File Systems.
Function: File system software package supporting the OFS structure.
Environment: Operates with DIGITAL UNIX systems and servers running the DIGITAL UNIX operating system, releases 3.2C, 3.2D, 3.2F, and 3.2G. Refer to the OSMS Software Product Description (SPD) for various systems, jukeboxes, and controllers supported.
Compatibility: Provides transparent access using standard UNIX utilities and file system library calls (open, close, seek, read, and write) from user programs. Exclusions and restrictions are listed in Chapter 5, Restrictions and Unsupported Utilities.
File Structure: OFS, emulating standard UNIX File System (UFS) structure. The structure of OFS is similar to UFS with alterations and extensions to support the write-once nature of optical media (supports writing and reading of files, directories, hard and soft links, and remote access).
Equipment: OSMS requires an Alpha workstation or server with an external SCSI bus connection and an optical disk drive that supports virgin block detection (on write-once volumes only). Refer to the OSMS Software Product Description (SPD) for details on supported equipment configurations.
Distribution: Distribution media are available on CD-ROM.
Considerations: Files may be rewritten to a WORM optical disk without regard to its write-once nature. However, frequent modifications to these files will cause many blocks to be replaced. On a write-once optical disk, the space occupied by these superseded blocks cannot be reclaimed. Therefore, space utilization on a write-once disk will be improved if it is used primarily as an archival medium.
This section describes the OFS structures and implementation.
The standard UNIX File System (UFS) maintains several data structures in each partition on block-structured (disk) devices. Some structures, such as the superblock and the bad block list, contain partition-specific information, and others, such as the index node (inode) list, contain file-specific information. The rest of the partition is used to store file data.
Some of the standard UFS structures, such as the superblock and the inode table, can be readily modified as files are added to the partition. Therefore, write-once optical disks may not utilize all the standard UFS structures without alterations.
The OFS structure implemented by OSMS uses the standard UFS structures as much as possible and modifies the others only as necessary. Directories use the standard UFS structure. Index blocks are the same as inodes except for the data pointers. The superblock is not used.
An optical volume, like a standard UFS partition, comprises two distinct areas: one area for control structures and one area for data. Unlike the UFS structure, which allocates a fixed amount of space at the beginning of the partition for control structures (superblock and inode list), the OFS structure places the index area at the end of the volume and the data area at the start. This allows both areas to grow toward each other, such that neither area can become completely filled while any space remains unused in the other.
The main difference between the standard UFS structure and the OFS structure employed by OSMS is the way file index information is handled. Each UFS file has an index node (inode) containing information about the file, such as its name, owner, length, and where the data is physically located on the disk.
Any time a file is modified, the inode is updated. On an optical disk, this means that the block containing the inode must be completely replaced.
Files may also have indirect pointer blocks associated with them to locate data blocks. For very large files, indirect pointers may be nested as many as three levels deep, so a single change to a file might entail replacing five different blocks (one data block, three pointer blocks, and the inode). Therefore, the OFS structure replaces the UFS data location mechanism and extends the UFS structure with additional items to keep track of active file index blocks and optimize file lookup and volume mount performance.
OSMS is implemented as a background process (daemon) and a pseudodevice driver. Figure 1-1 shows the OSMS interface to the DIGITAL UNIX file system.
Figure 1-1 OSMS to DIGITAL UNIX Interface
There is one OFS daemon (ofsd) for each optical drive in the system. The OFS daemons run as user-level (context) processes; they are not kernel processes. One daemon is dedicated to a specific drive and handles operations for that drive that are generated as Virtual File System (VFS) requests.
OSMS operates transparently to UNIX applications through the DIGITAL UNIX VFS interface. No modification to DIGITAL UNIX is required to install OSMS other than normal system configuration. OSMS intercepts VFS requests for the optical disk and routes them to the OFS control process, which performs the appropriate operations on the optical disk.
When a file is created or modified, data and index information destined for the optical disk is temporarily stored in cache buffers in memory. Data is written to the optical disk when the buffer is filled or the file is closed. When data is written to the optical disk, the file index is updated. Whenever the file index changes, a copy is saved in a file on magnetic disk called the index cache file. This buffering permits the file index to be modified during the writing process without using space on the optical disk.
If the system crashes with files open for writing on the optical disk, some or all of the data in those files may be lost. The same is true of normal UNIX magnetic disk files.
A directory on the optical disk is treated similarly to any other file. However, whenever a file is created, moved, linked, or deleted, the directory in which that file resides is implicitly modified. To ensure that directory entries cannot be lost in the event of a system crash, a special technique is employed to store directory data in the file index so it can be saved in the index cache file. This method is also used for symbolic links and other very short files.
In addition to improved directory security, this scheme reduces the space needed to hold such files and shortens the seek time while traversing the directory tree.
When a WORM optical disk is mounted, the index area is scanned and a list of active files is compiled. The index cache file is then consulted to identify any files that may have been modified but whose indices had not been updated on the optical disk. Such files are automatically restored to the state they were in when their indices were saved in the index cache file. This includes all data written to the files up to that time.
All active file indices are updated on the optical disk before it is demounted. Therefore, as long as the index cache file is intact, the optical disk file structure and all closed files are secure.
This section describes the related device drivers and their capabilities.
The optical driver is a UNIX pseudodevice driver designed to operate optical disk drives through the DIGITAL UNIX USCA cam optical driver for DIGITAL UNIX releases 3.2C, 3.2D, 3.2F, and 3.2G. The driver performs standard open, close, read, and write operations on optical disk drives, as well as special control functions to sense optical media type and capacity.
These devices are named od0, od1, and so on, one for each drive. They are located in the /dev directory.
The OFS driver is a hybrid module comprising a special UNIX pseudodevice driver and a set of Virtual File System (VFS) operations. It is compatible with the Virtual File System in DIGITAL UNIX releases 3.2C, 3.2D, 3.2F, and 3.2G, and supports standard open, close, read, and write operations.
The pseudodevice drivers are named of0, of1, and so on, one for each drive. They are located in the /dev directory.
The driver functions as a communication path linking UNIX kernel VFS functions with the OFS daemon.
One end of this path comprises the VFS operation set, which is attached to the mount point virtual node (vnode) when an Optical File System is mounted. The other end is represented by the pseudodriver read and write functions used by the OFS daemon to obtain VFS requests and return responses.
For jukeboxes operated through a serial control link, the jukebox driver acts as a pseudodriver that communicates jukebox operation requests, originating either with the OFS or with the jukebox control utility, to the jukebox daemon. Operations are requested by passing a jukebox command structure to the pseudodriver ioctl function. The jukebox daemon uses the read and write functions to obtain requests and return responses.
For jukeboxes controlled directly on the SCSI bus, the jukebox driver operates the jukebox through the DIGITAL UNIX USCA cam changer driver. The driver performs jukebox control operations through the driver ioctl function. The driver read and write functions serve no purpose for direct SCSI jukebox control.
Description of the OSMS Package
This chapter describes various files that are supplied as part of the DIGITAL Optical Storage Management Software (OSMS) as well as system files that are modified during installation. Directories are created as necessary to install the files.
During the installation the following directories are created:
System Files Saved and Modified
The following files are modified during installation. Before modification, each file is saved in the directory where it exists under a new name formed by adding a suffix to the filename, as shown below. When OSMS is deinstalled, the saved files are restored as the original files; the kernel must be rebuilt after deinstallation. The same restoration is performed if an error or abnormal termination occurs during installation of OSMS.
File Modified | File Saved as |
/etc/inittab | inittab.preOSMSVFS |
/sbin/update | update.preOSMSVFS |
/usr/sys/data/cam_data.c | cam_data.c.preOSMSCAM |
/usr/sys/include/io/cam/pdrv.h | pdrv.h.preOSMSCAM |
/usr/sys/include/io/cam/cam_debug.h | cam_debug.h.preOSMSCAM |
/usr/sys/include/io/common/devio.h | devio.h.preOSMSCAM |
/usr/sys/io/cam/cam_config.c | cam_config.c.preOSMSCAM |
/usr/sys/vfs/vfs_conf.c | vfs_conf.c.preOSMSVFS |
/vmunix | vmunix.preOSMSVFS |
/usr/sys/BINARY/kern_lmf.o | kern_lmf.o.preOSMSOSF |
/usr/sys/BINARY/vfs_syscalls.o | vfs_syscalls.o.preOSMSOSF |
The OSMS package contains the files specified and described in Table 2-1.
Table 2-1 Software File Description Summary
Files | Content | Description | ||
OFS Startup and Shutdown Script | ||||
/sbin/init.d/ofs | Script executed at startup and shutdown; the /usr/sys/ofs/ofs file from the install disk/tape is installed as /sbin/init.d/ofs. | Used to start and stop OFS. | ||
/sbin/rc0.d/K60ofs
/sbin/rc2.d/S20ofs |
Link
to /sbin/init.d/ofs
Link to /sbin/init.d/ofs |
Executed
at shutdown time.
Executed at startup time. |
||
Daemon and Executables | ||||
/usr/sbin/ofsd
/usr/sbin/jbd /usr/sys/ofs/update /usr/bin/tv |
Optical
file system daemon
Jukebox daemon (serial) File system update daemon Translate VMS text format to stdout stream. |
Used to start
and stop OFS.
The update daemon is an enhanced version of the standard UNIX update daemon, allowing the file system sync interval to be specified by a run-time parameter. |
||
OFS Headers | ||||
/usr/include/ofs/iblock.h
/usr/include/ofs/jbio.h /usr/include/ofs/odio.h /usr/include/ofs/ofsdir.h /usr/include/ofs/omount.h |
Index
block structure
Jukebox I/O control Optical I/O control Directory structure Mount data structure |
Defines the Optical File System structures, and may be included by programs that perform direct access to OFS volumes. | ||
OFS Driver and Related Files | ||||
/ofs/tab | Contains table of drive device nodes | Driver object files that are incorporated into the DIGITAL UNIX kernel by a normal system configuration process | ||
/dev/odx | Device driver, replace x with a digit such as 0, 1, etc., for example, /dev/od0, /dev/od1 and so on. | |||
/dev/ofx | Device driver, replace x with a digit such as 0, 1, etc. | |||
/dev/nodx | Device driver, replace x with a digit such as 0, 1, etc., for example, /dev/nod0, /dev/nod1 and so on. | |||
/dev/jbx | Device driver for serial jukeboxes, replace x with a digit such as 0, 1, etc., for example, /dev/jb0 for jukebox 0 (the first jukebox), /dev/jb1 and so on. | |||
OFS Utility | ||||
/usr/sbin/jbc | Jukebox control utility | |||
/usr/sbin/jmount | Jukebox mount script | To mount all volumes. | ||
/usr/sbin/jbtalk | Jukebox communication | |||
/usr/sbin/ofile | Optical volume extraction | |||
/usr/sbin/ofind | Optical volume inspection | Used to examine optical volumes and restore missing or deleted files. | ||
/usr/sbin/ofsck | Optical file system check | Employed to verify the integrity of the file system on an unmounted optical disk volume. | ||
/usr/sbin/olink | Optical file recovery | Used to examine optical volumes and restore missing or deleted files. | ||
/usr/sbin/omap | Optical volume mapper | Used to inspect file indices on OFS disks or create file indices on disks imported from other file systems. | ||
/usr/sbin/omount | Optical volume mount | Used to mount optical volumes. Normally used for drives. | ||
/usr/sbin/over | Optical volume erase | Used to initialize optical volumes. | ||
OFS manual | ||||
/usr/man/man4/ofs.4s
/usr/man/man5/ofstab.5 /usr/man/man7/jbio.7 /usr/man/man8/jbc.8 /usr/man/man8/jbd.8 /usr/man/man8/jbtalk.8 /usr/man/man8/jmount.8 /usr/man/man8/ofile.8 /usr/man/man8/ofind.8 /usr/man/man8/ofsck.8 /usr/man/man8/ofsd.8 /usr/man/man8/olink.8 /usr/man/man8/omap.8 /usr/man/man8/omount.8 /usr/man/man8/over.8 /usr/man/man8/update.8 |
Optical
file system
Optical volume mount list Jukebox I/O definition Jukebox control utility Jukebox daemon (serial) Jukebox communication Jukebox mount script Optical volume extraction Optical volume inspection Optical file system check Optical file system daemon Optical file recovery Optical volume mapper Optical volume mount Optical volume erase File system sync daemon |
Manual pages describing the features and functions of the optical disk file system and jukebox, and can be accessed online using the UNIX manual paging utility, man. | ||
OFS Config | ||||
/usr/sys/ofs/files
/usr/sys/ofs/config /usr/sys/ofs/table |
Configuration
files list
Configuration command script Jukebox configuration table |
Used interactively to configure OSMS and /vmunix build. | ||
/usr/sys/ofs/ofs.o
/usr/sys/ofs/optical.o /usr/sys/ofs/jukebox.o /usr/sys/ofs/config.file /usr/sys/ofs/ofs_data.c /usr/sys/ofs/stanza.static |
Optical
file system optical module
Optical disk drive optical module Optical jukebox optical module Configuration file Configuration file Configuration file |
/usr/sys/SYSTEMNAME/ofsdata.o is created from this file. |
||
CAM Header | ||||
/usr/include/io/cam/mchanger.h
/usr/include/io/cam/opdisk.h /usr/include/io/cam/scsi_changer.h /usr/include/io/cam/scsi_optical.h |
Media
changer
Optical disk Changer I/O Optical I/O |
|||
CAM Driver | ||||
/usr/sys/cam/cam_changer.o
/usr/sys/cam/cam_optical.o |
Media
changer object module
Optical disk object module |
/usr/sys/SYSTEMNAME/cam_changer.o
is linked to this file.
/usr/sys/SYSTEMNAME/cam_optical.o is linked to this file. |
||
CAM config | ||||
/usr/sys/cam/files
/usr/sys/cam/cam_dev_desc.i /usr/sys/cam/cam_mode_page.i /usr/sys/cam/cam_mode_sel.i |
Config
files list
cam_data fragment: device description cam_data fragment: mode page cam_data fragment: mode selection |
During the install these .i files (device description, mode page, and mode select files) from the installation disk/tape are inserted into the /usr/sys/data/cam_data.c file. Hence, they will not exist on the system as separate .i files after installation. | ||
OFS patches | ||||
/usr/sys/bsd/kern_lmf.o
/usr/sys/vfs/vfs_syscalls.o |
lmf
authorization patch
read-only sync patch |
The kern_lmf.o file from disk/tape is initially copied into the /usr/sys/bsd directory, if necessary, and then moved to the /usr/sys/BINARY directory replacing the existing file. | ||
Release Notes | ||||
/usr/sys/ofs/notes | Release notes |
Using Optical Storage Management Software (OSMS)
This chapter describes the naming conventions for jukebox resources (such as jukeboxes, jukebox slots, jukebox drives, two sides of optical disks, and volume specifications) and how to reference optical disks.
Naming Convention for Jukebox Resources
A jukebox has a number of resources, which must be specified when using the jbc utility. The jbc utility also displays the status of these resources to the user. The following naming and numbering conventions are used for jukeboxes and their resources.
Example: jukebox numbers: 0, 1, 2, 3, ...
The jukeboxes are numbered as 0, 1, 2, 3, and so on. If there is only one jukebox in a system it will be referred to as 0; if there are two jukeboxes, they will be 0 and 1.
Example: jbc utility numbers and displays slots as: 0, 1, 2, 3, ...
The jbc utility numbers the slots as 0, 1, 2, 3 and so on. However, the jukebox hardware manufacturers number the slots as: 1, 2, 3, 4 and so on. If you are using the jukebox console buttons to insert or remove disks, instead of the jukebox utility, you should keep this difference in numbering in mind.
In all the examples below, we will use the convention that the jbc utility follows for the slot numbers which is 0, 1, 2, and so on.
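The one-unit offset between the hardware slot labels and the jbc slot numbers can be expressed as simple arithmetic. The sketch below is illustrative only; the variable names are hypothetical:

```shell
# Converting a hardware (1-based) slot label to the jbc (0-based) slot number.
hw_slot=5                     # slot label as printed on the jukebox console
jbc_slot=$((hw_slot - 1))     # the same slot as jbc numbers it
echo "hardware slot $hw_slot is jbc slot $jbc_slot"
# prints: hardware slot 5 is jbc slot 4
```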
Example: jbc utility numbers drives as: 1, 2, 3, 4, ...
Example: drive specification in jbc utility: 1d, 2d, 3d, 4d, ...
The jbc utility numbers the drives as 1, 2, 3, 4 and so on. The jukebox hardware manufacturers number the drives as 1, 2, 3, 4 and so on as well. In the jbc utility, the drives are specified with the letter d added to the drive numbers. For example, drive 1 will be specified as 1d, drive 2 as 2d and so on.
Naming Two Sides of an Optical Disk:
Example: one side is specified as: a
Example: other side is specified as: b
The jbc utility treats one side of the optical disk as a and the other side as b. This convention is similar to what the disk hardware manufacturers follow. On the disk cover, you will see uppercase A and B (instead of lowercase) engraved for the two sides of the disk. The jbc utility uses lowercase a and b instead of uppercase to refer to the two sides. If you specify A or B in the jbc utility, it will give the error "bad format".
Note
It is important to note that the naming of sides is logical, not physical. In other words, if a user inserts the side marked b and specifies it as a in the jbc utility, the jbc utility will treat it as side a, since there is no physical representation on the disk that the software can use to verify which side is which. Hence, it is the user's responsibility to ensure that the proper side is placed in the slot and correctly specified in the jbc utility.
There are two sides to an optical disk. Each side is treated as a separate volume. Hence there are two volumes on a disk. Each volume is specified by the slot number and the side name together as shown below:
volume in slot 5 side a is: 5a
volume in slot 5 side b is: 5b
Anytime a volume is moved from a slot to a drive or from a drive to a jukebox, the volume should be properly specified with slot number and side as shown above; otherwise, the error message "bad format" will be displayed.
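The slot-number-plus-side form can be sketched as a small validation function. check_volspec is a hypothetical helper, not part of OSMS; it merely mimics the format rule (and the "bad format" rejection) described above:

```shell
# Accepts specifications of the form <slot-number><side>, e.g. 5a or 12b.
# Anything else (uppercase side, missing slot, stray characters) is rejected,
# mirroring jbc's "bad format" error.
check_volspec() {
    case "$1" in
        *[!0-9ab]*|[ab]|"") echo "bad format" ;;   # bad character, side only, or empty
        *[0-9][ab])         echo "ok" ;;           # digits followed by side a or b
        *)                  echo "bad format" ;;   # e.g. side before the digits
    esac
}

check_volspec 5a     # prints: ok
check_volspec 5A     # prints: bad format   (uppercase sides are rejected)
```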
Once OSMS is installed, optical disk drives may be accessed using device names of the form: /dev/od#, where # is the unit number. OSMS can support up to eight optical disk drives in the current implementation.
If the OFS daemon is running, optical disk file systems may be activated with the jmount utility, which is used for jukeboxes, with device names as follows:
/dev/jb@/#?, where @ specifies the jukebox, # identifies a slot in the jukebox, and ? designates the media surface (a or b). For example, /dev/jb0/1a means jukebox 0, slot 1, side a.
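Composing the device path from its three parts can be sketched as follows. jb_volume_path is a hypothetical helper that only builds the string; it touches no device node:

```shell
# Builds a jukebox volume device path /dev/jb<jukebox>/<slot><side>.
jb_volume_path() {
    jukebox=$1; slot=$2; side=$3
    echo "/dev/jb${jukebox}/${slot}${side}"
}

jb_volume_path 0 1 a    # prints: /dev/jb0/1a  (jukebox 0, slot 1, side a)
```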
The omount utility can be used for drives, using device names as follows:
/dev/of#, where # is the unit number
For example, /dev/of0 is for drive 0 and /dev/of1 is for drive 1, and so on.
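The /dev/of# names follow the unit-number pattern just described. Assuming the eight-drive limit noted above for /dev/od# also bounds the of units (an assumption, not stated explicitly here), the full set of names can be generated as a sketch:

```shell
# Enumerate the drive device names /dev/of0 .. /dev/of7.
drives=""
n=0
while [ $n -lt 8 ]; do
    drives="$drives /dev/of$n"
    n=$((n + 1))
done
echo $drives
# prints: /dev/of0 /dev/of1 /dev/of2 /dev/of3 /dev/of4 /dev/of5 /dev/of6 /dev/of7
```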
There is no limit to the number of file systems that OSMS can handle.
This section describes media types, how to load (insert) platters, and how to unload (eject) platters.
There are three types of media classified by their capacity as follows:
Type | Capacity |
1x | 600 megabytes (300 megabytes per side) |
2x | 1.2 gigabytes (600 megabytes per side) |
4x | 2.4 gigabytes (1.2 gigabytes per side) |
Note
Do not use a high-capacity disk on a low-capacity drive.
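The per-side capacities in the table above follow from each platter having two sides of equal capacity. The sketch below simply encodes that mapping; side_capacity_mb is a hypothetical helper, not an OSMS utility:

```shell
# Maps a media type to its per-side capacity in megabytes,
# per the media-type table above.
side_capacity_mb() {
    case "$1" in
        1x) echo 300 ;;    # 600 MB platter, 300 MB per side
        2x) echo 600 ;;    # 1.2 GB platter, 600 MB per side
        4x) echo 1200 ;;   # 2.4 GB platter, 1.2 GB per side
    esac
}

side_capacity_mb 2x    # prints: 600
```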
Loading (Inserting) Platters into Jukebox Slot
A jukebox has several slots; the number of slots varies by model. Each slot can hold an optical platter. Each platter has two sides, and each side is treated as a mountable volume. A platter must be placed in a slot before the volumes on it can be mounted, and only files in a mounted volume can be accessed.
The place where you insert a platter into the jukebox is called the mailslot. A platter can be loaded from the mailslot into a slot in the jukebox. The loading can be done in one of two ways:
1. using jukebox console keypad
2. using the jbc utility
Note, it is not advisable to load a platter directly from the mailslot into a drive. It must be moved to a slot first and then to the drive.
Unloading (Ejecting) Platters from the Jukebox
A platter can be unloaded (ejected) from a slot to the mailslot by using either of the following features:
using jukebox console keypad
using the jbc utility
Note
Never eject a platter while one or both of its volumes are mounted. Unmount volumes before ejecting the platter.
You have to be a superuser (root) in order to mount the optical volumes.
A platter can be moved from a slot to a drive and vice versa using the jbc utility. OSMS is not required to use the jukebox console. Refer to Chapter 6, Utility Descriptions for the description of the jbc utility. To find the details on how to use the jukebox keypad, refer to your jukebox manual.
The performance of your optical disk subsystem under OSMS is primarily dictated by your optical disk equipment. The overhead imposed by OSMS is minimal. However, there are some things you can do to enhance performance.
One of the most time-consuming operations on the optical disk is file lookup. This requires reading directories and locating the requested files within those directories. This process can be enhanced considerably by the judicious use of subdirectories.
With 10,000 files in a single directory, locating a particular file can take a long time because a large number of entries may have to be examined. Performance is substantially better if you create 100 subdirectories with 100 files in each, because at most 100 subdirectory entries and then 100 file entries must be searched to locate the requested file.
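The effect of fanning files out into subdirectories can be tried on any file system. The sketch below builds a scaled-down 10 x 10 layout in a scratch directory; the directory and file names are hypothetical:

```shell
# Build 10 subdirectories (d0..d9) of 10 empty files (f0..f9) each,
# a scaled-down version of the 100 x 100 layout suggested above.
TOP=$(mktemp -d)
i=0
while [ $i -lt 10 ]; do
    mkdir "$TOP/d$i"
    j=0
    while [ $j -lt 10 ]; do
        : > "$TOP/d$i/f$j"     # create an empty placeholder file
        j=$((j + 1))
    done
    i=$((i + 1))
done

# Looking up d7/f3 now scans two 10-entry directories instead of
# one flat 100-entry directory.
ls "$TOP/d7/f3"
```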
All writes to files on the optical disk are buffered in memory. Data in the buffer is written to the optical disk when:
The buffer is filled
The file is closed
The optical disk is unmounted
A separate buffer is maintained for every open file, and the size of each buffer is controlled by a parameter that may be set when the volume is mounted. The size of these buffers can have a strong effect on the performance of OSMS in many cases. This is highly dependent on your application.
If only a few files are open at once, a large buffer size will reduce latency considerably. However, when many files are open, the memory space occupied by these buffers may dramatically increase swapping overhead.
File fragmentation occurs when file data are not located in contiguous blocks on the optical disk. This will require the optical disk to seek across discontinuities while reading the file. Since the optical disk cannot transfer data and seek at the same time, file fragmentation will have a negative impact on read performance.
The following are the most frequent reasons for file fragmentation on the optical disk:
Recovering from a System Crash
In the development of OSMS, recovery from a system crash was an important consideration. OSMS has been designed to minimize the impact of a system crash and the effort required to recover. The result is that the state of your optical disk is never worse (and is usually better) than that of the magnetic disks on your system.
If an optical disk is not properly unmounted (as might happen during a system crash), then any files that were being created or modified when the system crashed are indeterminate (that is, they may or may not be complete). All other files on the volume should be intact.
Files that were created or modified and successfully closed before the system crash are intact on the disk. However, their directory entries or file indices may not be up-to-date. Such files will be automatically restored when the file system is remounted after the crash.
Files that were created or modified but not successfully closed before a system crash or hardware failure have most likely lost data; that is, some data successfully written to the file has not been written to the optical disk. Such lost data is not recoverable by any method.
Installation and Configuration
This chapter provides the hardware and software installation, configuration, and reconfiguration procedures. Also included are the rebuilding and restarting procedures for the DIGITAL UNIX kernel, and the kit removal procedure.
OSMS is supplied on CD-ROM in kit format suitable for use with setld. Refer to the section in this chapter showing the installation example for installing the software. BE SURE TO REMOVE ANY PREVIOUS OSMS SOFTWARE BEFORE THE OSMS INSTALLATION.
In the course of installing the OSMS kit, setld will execute the configuration script supplied in the kit. The script will ask you questions about the physical configuration of your system, including the connection addresses of SCSI bus controllers, optical disk drives, and jukeboxes. If you do not know the answer to a question, call customer support for help.
You have to perform the following five steps while installing OSMS:
These five steps are explained in detail in the following sections. Read these sections before starting the installation process.
Note
If you want to cancel the installation for any reason, please do not terminate the installation by typing Ctrl-C while performing Step 2. Let it complete copying the files. You can terminate installation in Step 3 without configuring the system.
If you made an error in configuring the system, you can continue to Step 4 and build the kernel. Then perform the "Reconfiguring the OSMS Kit" procedure explained later and rebuild the kernel.
If you rebuilt the kernel and then decide not to use OSMS, you can remove it by following the "Removing the OSMS Kit" procedure explained later in this chapter.
Disk Space and Memory Requirements
Table 4-1 shows the amount of disk space and memory (RAM) required for the OSMS product. A block equals 512 bytes.
Table 4-1 Disk Space and Memory Requirements
Disk Space & RAM | Amount |
Disk space required for Alpha installation | 10,000 blocks (that is, 5 Megabytes) |
Disk space required for Alpha use | 20,000 blocks (that is, 10 Megabytes) |
RAM memory | 32 Megabytes minimum |
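The block counts in Table 4-1 convert to megabytes by simple arithmetic (one block is 512 bytes). The sketch below uses a hypothetical helper, blocks_to_mb, which is for illustration only and is not part of OSMS:

```shell
# Convert a 512-byte block count, as used by setld and Table 4-1, to
# megabytes. blocks_to_mb is a hypothetical helper for illustration only.
blocks_to_mb() {
    awk -v b="$1" 'BEGIN { printf "%.1f\n", b * 512 / 1048576 }'
}
blocks_to_mb 10000    # installation footprint, about 5 MB
blocks_to_mb 20000    # in-use footprint, about 10 MB
```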
OSMS is installed on DIGITAL UNIX systems using the standard installation utility, setld. It is incorporated into the DIGITAL UNIX kernel by a normal system configuration and initiated at system startup time from the system startup command script, rc2.
Know Your System Configuration
Before installing OSMS, it is essential that you know the following information about your optical system configuration.
If you are superuser on the system, warn any users currently logged in to log out, and be certain that no users other than superuser (root) remain logged in. To gather this information, you must halt the system by using the following command:
shutdown -h now
When the system halts, it displays the console prompt (>>>).
The following is an illustration of a system configuration:
Alpha system: | DEC2100 |
Number of SCSI ports: | 2 |
Operating system: | DIGITAL UNIX version 3.2C |
Jukebox model: | RW525 |
Drive in the jukebox: | RWZ52 |
Two hard drives: | RZ26L with 1.05 Gigabyte capacity |
CDROM drive: | RRD44 |
Tape drive: | TLZ06 |
Host adapter: | AHA1740A |
At this point, use the show devices command as follows:
>>>show dev
When you use the show dev command, the system displays the following:
Boot dev | Addr | Dev type | RM/FX | Dev nam | Rev | Num bytes |
DKA0 | A/0/0 | Disk | FX | RZ26L | 440C | 1.05GB |
DKA100 | A/1/0 | Disk | FX | RZ26L | 440C | 1.05GB |
JKA200 | A/2/0 | CHNGR | RM | RW525 | 2.17 | |
JKA300 | A/3/0 | OPDisk | RM | RWZ52 | 3404 | |
DKA400 | A/4/0 | RODisk | RM | RRD44 | 3593 | |
HOST | A/7/0 | Proc | | AHA1742A | G.2 | |
MKB500 | B/5/0 | Tape | RM | TLZ06 | 0374 |
HOST | B/7/0 | Proc | | AHA1740A | G.2 | |
For your convenience in reading this illustration, note the jukebox entries. CHNGR in the Dev type field identifies the media changer (robot) in the RW525 jukebox, and OPDisk identifies an optical drive, either inside the jukebox or standalone. The Addr field specifies the host adapter ID, target ID, and LUN. The adapter ID is displayed as A, B, C, and so on.
For OSMS configurations, A should be treated as 0, B as 1, C as 2, and so forth. For example, the address A/2/0 means the changer in jukebox 0 is connected to host adapter ID 0, target ID 2, and LUN 0; the address A/3/0 means the drive in jukebox 0 is connected to adapter ID 0, target ID 3, and LUN 0; and the address A/6/0 means a standalone optical drive is connected to adapter ID 0, target ID 6, and LUN 0.
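The letter-to-number translation described above can be sketched as a small shell function; decode_addr is a hypothetical helper for illustration, not an OSMS command:

```shell
# Sketch: decode a console Addr field such as "A/2/0" into the numeric
# adapter ID, target ID, and LUN that OSMS expects (A=0, B=1, C=2, ...).
# decode_addr is a hypothetical helper, not part of OSMS.
decode_addr() {
    letter=${1%%/*}          # adapter letter, e.g. "A"
    rest=${1#*/}             # remaining "target/lun", e.g. "2/0"
    target=${rest%%/*}
    lun=${rest#*/}
    case $letter in
        A) adapter=0 ;;
        B) adapter=1 ;;
        C) adapter=2 ;;
        *) adapter="?" ;;
    esac
    echo "adapter=$adapter target=$target lun=$lun"
}
decode_addr A/3/0    # the drive in jukebox 0: adapter 0, target 3, LUN 0
```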
Note that the jukebox is connected to the adapter AHA1742A and the tape drive is connected to the adapter AHA1740A.
Write down the output of the show dev command. Bring the system up by booting genvmunix as follows:
>>>boot -fi "genvmunix"
Log in as root and start the installation of OSMS. Note that if you have already built a vmunix with all the optical devices connected, then you could boot vmunix and install OSMS.
When the OSMS configuration script asks various questions, it usually presents a range of valid responses in parentheses. Questions ending in a question mark (?) expect a yes (y) or no (n) response, while those ending in a colon (:) require a numeric response. If a question is confusing, answering it with a question mark (?) will produce a description of the nature of the desired response.
For the Standalone Drive Configuration
To initiate the configuration procedure, the script first asks:
drives not in jb (0 to 8):
The expected response to this question is the number of standalone drives in the configuration that are not associated with any jukebox. For example, a response of 1 means there is one RWZ standalone drive connected to the system and 0 means no standalone drives are connected.
For each jukebox, the script asks:
jb0 model number:
An appropriate response to this question is the designation of a jukebox in your configuration.
If the jukebox is configured in only one way, its designation is normally its model number (for example, enter 525 for an RW525 jukebox).
If a jukebox is available in several different configurations, its designation consists of a prefix defining the model and a suffix defining the variant, such as a drive type or volume capacity.
Responding with a question mark (?) produces a list of recognized jukebox types, and a null response indicates no more jukeboxes to configure.
The script then asks:
jb0 SCSI host adapter id:
Enter the adapter ID that the jukebox is connected to. For example, the native SCSI is 0 and the other external SCSI controllers can be 1 or 2.
If an invalid number is entered, the script will not accept the answer.
Next, the script asks:
jb0 SCSI target (0 to 6):
The response defines the SCSI address to which the first jukebox (jb0) is connected.
The script asks:
od# SCSI target (0 to 6):
The response defines the address of the drive in the jukebox on the SCSI bus. Each device on a SCSI bus must have a distinct device address.
The script then asks:
jb1 model number:
The response defines the model designation of the second jukebox (jb1), just as for jb0.
If there is no second jukebox, press the Enter key and the system will rebuild the kernel.
After jukebox configuration data has been entered, the script will ask:
Do you want to edit the configuration file? (y/n) [n]:n
If you answer no (n), the configuration script will execute the doconfig procedure to relink the DIGITAL UNIX kernel. If this procedure completes without error, OSMS has been installed successfully. Type n and press the Enter key.
Rebuilding the DIGITAL UNIX Kernel
After drive configuration data has been entered, the script will ask:
Do you want to edit the configuration file? (y/n) [n]: n
If you answer no (n), the configuration script will execute the doconfig procedure to relink the DIGITAL UNIX kernel. If the procedure completes without error, OSMS has been installed successfully.
The following example shows a sample installation procedure and the output that you see on the screen of your DIGITAL UNIX system for an RW525 jukebox with one RWZ52 optical drive.
In order for the OSMS optical file system software to work correctly, the following subsets must be installed: OSMSMAN150, OSMSOSF150, OSMSVFS150, and OSMSCAM150. If you are using a CD-ROM for your installation, type the following:
# mkdir /mnt
# mount -dr /dev/rz4c /mnt ! Assume CDROM id # is 4
#
# cd /mnt
#
# ls
INSTCTRL OSMS.image OSMSCAM150 OSMSMAN150 OSMSOSF150 OSMSVFS150 instctrl
#
# setld -l .
*** Enter subset selections ***
The following subsets are mandatory and will be installed automatically unless you choose to exit without installing any subsets:
* File System Patches
* Optical File System
* SCSI Device Drivers
The subsets listed below are optional:
* Online Manual Pages
There may be more optional subsets than can be presented on a single screen. If this is the case, you can choose subsets screen by screen or all at once on the last screen. All of the choices you make will be collected for your confirmation before any subsets are installed.
1) Online Manual Pages
2) ALL mandatory and all optional subsets
3) MANDATORY subsets only
4) CANCEL selections and redisplay menus
5) EXIT without installing any subsets
Enter your choices or press RETURN to redisplay menus.
Choices (for example, 1 2 4-6): 2
You are installing the following mandatory subsets:
File System Patches
Optical File System
SCSI Device Drivers
You are installing the following optional subsets:
Online Manual Pages
Is this correct? (y/n): y
Checking file system space required to install selected subsets:
File system space checked OK.
SCSI Device Drivers
Copying from . (disk)
Verifying
Online Manual Pages
Copying from . (disk)
Verifying
File System Patches
Copying from . (disk)
Verifying
Optical File System
Copying from . (disk)
Verifying
Modifying /sys/include/io/cam/pdrv.h
Modifying /sys/include/io/common/devio.h
Modifying /sys/include/io/cam/cam_debug.h
Modifying /sys/io/cam/cam_config.c
Modifying /sys/data/cam_data.c
SCSI Device Drivers installed
Configuring "SCSI Device Drivers" (OSMSCAM150)
On-Line Manual Pages installed
Configuring "Online Manual Pages" (OSMSMAN150)
Installing vfs_syscalls.350
Installing update procedure
File System Patches installed
Configuring "File System Patches" (OSMSOSF150)
Modifying /etc/inittab
Modifying /sys/vfs/vfs_conf.c
Optical File System installed
Configuring "Optical File System" (OSMSVFS150)
*****
***** Optical File System
***** Release: V1.5
***** Subset: OSMSVFS150
***** Phase: CONFIGURE
***** Action: INSTALL
*****
Ready to configure the system to run Optical File System V1.5.
This procedure will modify several system files, compile the changer configuration data and rebuild the kernel. You will be asked questions about the SCSI bus configuration, such as the bus, target and unit numbers of changers and disk drives.
If you do not understand a question, enter ? to obtain help.
When the installation procedure is finished, you must reboot to activate the Optical File System.
Would you like to continue? (y/n): y
Compiling configuration data
drives not in jb[0-8]: 0
jb0 model number: 525
jb0 SCSI host adapter id: 0
jb0 SCSI target (0 to 6): 2
od0 SCSI target (0 to 6): 3
jb1 model number: Press the Enter Key here if there is no other jukebox.
The rest of the procedure may take 10 to 15 minutes to rebuild your kernel. You will be asked whether or not you want to edit the configuration file. The default is no; take the default.
Starting kernel rebuild...
*** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
Saving /sys/conf/BULLEE as /sys/conf/BULLEE.bck
Do you want to edit the configuration file? (y/n) [n]: n
*** PERFORMING KERNEL BUILD ***
Working....Tue Jan 14 11:50:10 EST 1997
Working....Tue Jan 14 11:52:11 EST 1997
Working....Tue Jan 14 11:54:11 EST 1997
The new kernel is /sys/BULLEE/vmunix
Disk space needed for vmunix is: 7242504 bytes.
Saving /vmunix as /vmunix.preOSMSVFS
Moving /sys/BULLEE/vmunix to /vmunix
*****
***** OSMSVFS150: CONFIGURE phase complete.
*****
The installation of OSMS is now complete.
Verifying OSMS Installed Correctly
To ensure that OSMS is installed correctly, check the installed subsets (OSMSCAM150, OSMSMAN150, OSMSOSF150, and OSMSVFS150) by using the setld utility with the -i option as follows.
# setld -i | grep OSMS
OSMSCAM150 installed SCSI Device Drivers
OSMSMAN150 installed Online Manual Pages
OSMSOSF150 installed File System Patches
OSMSVFS150 installed Optical File System
#
All four subsets should be installed as shown above.
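This check can also be scripted. The sketch below simulates the setld -i output with a here-document (on a live system you would pipe setld -i instead); check_subsets is a hypothetical helper:

```shell
# Verify that all four OSMS subsets report "installed". check_subsets is
# a hypothetical helper; the sample file stands in for `setld -i` output.
check_subsets() {
    missing=0
    for s in OSMSCAM150 OSMSMAN150 OSMSOSF150 OSMSVFS150; do
        grep -q "^$s .*installed" "$1" || { echo "missing: $s"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo "all OSMS subsets installed"
}
cat > /tmp/setld.out <<'EOF'
OSMSCAM150   installed   SCSI Device Drivers
OSMSMAN150   installed   Online Manual Pages
OSMSOSF150   installed   File System Patches
OSMSVFS150   installed   Optical File System
EOF
check_subsets /tmp/setld.out
```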
Registering OSMS Using LMF PAK
After installing the OSMS product, you must register it using the License Management Facility (LMF) before you can use it. An OSMS Product Authorization Key (PAK) is needed in order to register; you should have received it when the product was purchased. Locate the document that contains the PAK information.
The manual pages for registering any UNIX product using LMF are the UNIX man pages for lmf and lmfsetup. To see the man pages, log in to the system and type the following:
man lmf
man lmfsetup
The rest of this section describes how to register OSMS using the lmfsetup utility and the information in the OSMS PAK.
First install the OSMS product. Log in as root. Then type:
lmfsetup
The system will display the following message:
Register PAK (type q or quit to exit) [template]
Press the Enter key. Then it will ask you to enter the following information:
Issuer:
Authorization Number:
Product Name:
Producer:
Number of Units:
Version:
Product Release Date:
Key Termination Date:
Availability Table Code:
Activity Table Code:
Key Options:
Product Token:
Hardware-Id:
Checksum:
If you are asked for information that is not supplied in the OSMS PAK, simply press the Enter key. Once you have entered all the information, the lmfsetup utility reports whether OSMS has been registered successfully. If registration fails, try again; typing errors are a common cause of failure. If you cannot register after several tries, contact field service.
LMF automatically enables a license when you register it. After successful registration, the OSMS PAK should have active status. To list the status of all registered products on the system, type the following command:
lmf list
For more information, refer to the man pages and/or documentation for lmf and lmfsetup.
At this point, shut down the system and reboot as described in the next section. Then proceed to the Verifying Optical Hardware Configuration section.
If the optical hardware is not connected, connect your optical hardware (after shutdown).
In order to shut down the system, type the following:
# shutdown -h now
Shutdown messages are displayed.
Restarting the DIGITAL UNIX Kernel
Reboot the system by typing the following at the console prompt (>>>):
>>> boot
Verifying Optical Hardware Configuration
Use the following command to verify that the optical devices are registered in the system's error log file. The following example shows an RW525 jukebox installation.
#
# uerf -R 300 | more
************************* ENTRY 1. *************************
----- EVENT INFORMATION -----
EVENT CLASS OPERATIONAL EVENT
OS EVENT TYPE 300. SYSTEM STARTUP
SEQUENCE NUMBER 0.
OPERATING SYSTEM DEC OSF/1
OCCURRED/LOGGED ON Tue Jan 14 13:15:18 1997
OCCURRED ON SYSTEM ntbk11
SYSTEM ID x00020006 CPU TYPE: DEC 2000
SYSTYPE x00000000
MESSAGE PCXAL keyboard, language English
_(American)
Alpha boot: available memory from
_0x71c000 to 0x5000000
Digital UNIX V3.2C (Rev. 148); Tue
_Jan 14 12:59:10 EST 1997
physical memory = 80.00 megabytes.
available memory = 72.89 megabytes.
using 299 buffers containing 2.33
_megabytes of memory
DEC2100 model A500MP system
Firmware revision: 1.2
PALcode: OSF version 1.32
ibus0 at nexus
ace0 at ibus0
gpc0 at ibus0
eisa0 at ibus0
vga0 at eisa0
1024x768 (QVision )
ln0 at eisa0
ln0: DEC LANCE Ethernet Interface,
_hardware address: 08-00-2B-BD-2A-22
aha0 at eisa0 slot 5
scsi0 at aha0
rz0 at scsi0 bus 0 target 0 lun 0 (DEC
_ RZ26L (C) DEC 440C)
rz1 at scsi0 bus 0 target 1 lun 0 (DEC
_ RZ26L (C) DEC 440C)
rz4 at scsi0 bus 0 target 4 lun 0 (DEC
_ RRD44 (C) DEC 3593)
mc16 at scsi0 unit 16 (DEC RW525
_ (C)DEC 2.17)
op24 at scsi0 unit 24 (DEC RWZ52
_ (C)DEC 3404)
aha1 at eisa0 slot 6
scsi1 at aha1
tz13 at scsi1 bus 1 target 5 lun 0
_(DEC TLZ06 (C)DEC 0374)
fdi0 at eisa0
fd0 at fdi0 unit 0
lp0 at ibus0
lvm0: configured.
lvm1: configured.
dli: configured
SuperLAT. Copyright 1993 Meridian
_Technology Corp. All rights
_reserved.
--more-- q
#
If the optical devices are not listed, the OSMS software will not operate properly; you need to correct the hardware connections or the OSMS configuration before proceeding.
If the optical devices are listed, the OFS daemons (ofsd) should be running. Verify that the OFS daemons are running by performing the steps described in the next section.
Verifying the OFS Daemon is Running
To verify that the optical file system (ofs) daemon, named ofsd, is running, type the ps -e command as shown below.
# ps -e | grep ofs
59 ?? I 0:00.02 /usr/sbin/ofsd of0 /dev/od0
471 ttyp4 S + 0:00.01 grep ofs
There should be one ofsd daemon for each drive.
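As a sketch of that check, the fragment below counts ofsd entries in simulated ps output (a here-document stands in for a live `ps -e | grep ofs`) and compares the count against the number of drives; the sample values are assumptions:

```shell
# Count ofsd daemons in (simulated) ps output and compare against the
# number of optical drives; the sample output is an assumption.
drives=1
count=$(cat <<'EOF' | grep -c '/usr/sbin/ofsd'
   59 ??     I    0:00.02 /usr/sbin/ofsd of0 /dev/od0
  471 ttyp4  S +  0:00.01 grep ofs
EOF
)
[ "$count" -eq "$drives" ] && echo "one ofsd per drive"
```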
When OSMS boots, it automatically mounts volumes that are in the jukebox slots. By default, the volumes are mounted under the /jb0 directory for jukebox 0, /jb1 for jukebox 1, and so on. Under jb0, there will be one directory for each volume. For example, if there is a disk in slot 0, then the two mount points will be /jb0/0a, and /jb0/0b (one volume for each side of the disk). The devices used for slot 0 will be /dev/jb0/0a and /dev/jb0/0b. The current version of the Optical File System borrows file system index 4, and therefore, appears to users as the PC File System (pcfs). A mount line for slot 0 side a will appear as follows:
/dev/jb0/0a on /jb0/0a type pcfs (rw)
To check the mounted volumes in the jukebox, type the mount command as follows.
# mount
/dev/rz2a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/rz2g on /usr type ufs (rw)
/dev/jb0/0a on /jb0/0a type pcfs (rw)
/dev/jb0/0b on /jb0/0b type pcfs (rw)
/dev/jb0/1a on /jb0/1a type pcfs (rw)
/dev/jb0/1b on /jb0/1b type pcfs (rw)
The above display shows that there are disks in slot 0 and slot 1.
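Filtering the jukebox lines out of mount output is a one-line awk job. The fragment below runs against a saved copy of the sample output above rather than a live system:

```shell
# List mounted jukebox volumes by filtering mount output for /dev/jb
# devices. The sample file stands in for live `mount` output; note that
# OFS volumes report their type as pcfs in this release.
cat > /tmp/mount.out <<'EOF'
/dev/rz2a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/rz2g on /usr type ufs (rw)
/dev/jb0/0a on /jb0/0a type pcfs (rw)
/dev/jb0/0b on /jb0/0b type pcfs (rw)
/dev/jb0/1a on /jb0/1a type pcfs (rw)
/dev/jb0/1b on /jb0/1b type pcfs (rw)
EOF
awk '$1 ~ /^\/dev\/jb/ { print $1 " -> " $3 }' /tmp/mount.out
```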
To look at the volume capacity, use the df command as follows.
# df
Filesystem | 512-blocks | Used | Avail | Capacity | Mounted on |
/dev/rz2a | 126462 | 76608 | 37206 | 67% | / |
/proc | 0 | 0 | 0 | 100% | /proc |
/dev/rz2g | 792124 | 461282 | 251628 | 65% | /usr |
/dev/jb0/0a | 576999 | 576999 | 0 | 100% | /jb0/0a |
/dev/jb0/0b | 576999 | 2 | 576421 | 0% | /jb0/0b |
/dev/jb0/1a | 576999 | 22 | 576421 | 0% | /jb0/1a |
/dev/jb0/1b | 576999 | 2 | 576421 | 0% | /jb0/1b |
#
#
Note
NEW platters show as full in df output (for example, the /dev/jb0/0a device above); therefore, erase a new platter using the over command (described in Chapter 6). Be sure the platter is not mounted when the over command is used. Also, the over command is NOT applicable to WORM platters.
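A quick way to spot platters that df reports as 100% full (as a NEW uncleared platter will) is an awk filter. The sample df output here is copied from the listing above, not read from a live system:

```shell
# Flag optical volumes that df shows as 100% full; for a brand-new
# rewritable platter this usually means it has not yet been cleared
# with over. The sample file stands in for live `df` output.
cat > /tmp/df.out <<'EOF'
/dev/jb0/0a   576999  576999       0  100%  /jb0/0a
/dev/jb0/0b   576999       2  576421    0%  /jb0/0b
/dev/jb0/1a   576999      22  576421    0%  /jb0/1a
EOF
awk '$1 ~ /^\/dev\/jb/ && $5 == "100%" {
    print $1 ": full; a new platter may need clearing with over"
}' /tmp/df.out
```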
At this point, you have verified that the OSMS software is configured correctly.
If you are able to write and read from the /jb0/0a mount point or any other OSMS mount points, then the OSMS installation was successful.
If you must reconfigure your kernel to recognize a different optical device configuration, invoke the configuration procedure manually by typing:
# setld -c OSMSVFS150 CONFIG
You will be executing the configuration procedure shown in the CD-ROM installation instructions.
Removing the OSMS Kit
This section shows how to remove the OSMS kit from the system, as in the following example.
#
# setld -i | grep OSMS
OSMSCAM150 installed SCSI Device Drivers
OSMSMAN150 installed Online Manual Pages
OSMSOSF150 installed File System Patches
OSMSVFS150 installed Optical File System
#
# setld -d OSMSCAM150 OSMSMAN150 OSMSOSF150 OSMSVFS150
Unmounting OFS filesystems
Stopping OFS daemons
Deleting "Optical File System" (OSMSVFS150).
Restoring /sys/vfs/vfs_conf.c
Restoring /etc/inittab
Do you want to purge control files? (y/n): y
Restoring /vmunix.preOSMSVFS as /vmunix
The Optical File System is no longer installed
Remember to rebuild the kernel with doconfig
to remove the Optical File System functionality
Deleting "File System Patches" (OSMSOSF150).
File System Patches no longer installed
Remember to rebuild the kernel with doconfig
to remove the File System Patches functionality
Deleting "Online Manual Pages" (OSMSMAN150).
On-Line Manual Pages no longer installed
Deleting "SCSI Device Drivers" (OSMSCAM150).
Restoring /sys/data/cam_data.c
Restoring /sys/io/cam/cam_config.c
Restoring /sys/include/io/cam/cam_debug.h
Restoring /sys/include/io/common/devio.h
Restoring /sys/include/io/cam/pdrv.h
SCSI Device Drivers no longer installed
Remember to rebuild the kernel with doconfig
to remove the SCSI Device Drivers functionality
#
#
# setld -i | grep OSMS
#
If there are no OSMS subsets left, then the OSMS kit has been removed.
Starting and Stopping Optical File System
OSMS runs one optical file system daemon, called ofsd, for each drive in the jukeboxes as well as for each standalone drive. These daemons are started automatically when the system boots. Sometimes an administrator may want to stop the optical file system daemons and start them later. A script file named ofs is provided to stop and start the ofs daemons; it is installed in the directory /sbin/init.d.
In order to stop the daemons, invoke the script as follows:
/sbin/init.d/ofs stop
Before the daemons are stopped, all the ofs file systems are unmounted.
To start the ofs daemons, invoke the script as follows:
/sbin/init.d/ofs start
This starts the daemons first and then mounts all the ofs volumes.
Unlike standard UNIX File Systems, which must be initialized by mkfs before they can be used to store files, OFS file systems require no initialization.
Whenever a virgin volume is mounted, a blank root directory is inferred, and all its attributes (owner, group, and access modes) are copied from the mount point. These default attributes may be altered after the volume is mounted, or selected in advance by changing the mount point attributes before mounting. Erasable volumes should be cleared before mounting to identify available space.
True WORM platters should not be cleared. Rewritable volumes can be initialized to either WORM (OFS_WORM) mode or Rewritable (OFS_POOL) mode.
In order to clear rewritable (erasable) volumes, use the over utility as described below.
Note
Use the OD device with the over utility, not the OF device.
Make sure the volumes (both sides of the platter) to be cleared are not mounted.
Move the volume to an empty (unused) drive.
It is preferable to stop the OFS daemons (ofsds) while clearing volumes.
The following options of the over utility are used in the examples:
-v
Display status and progress (verbose) messages.
-z
Clear all active blocks on the volume to zero.
-f
Force clearing the volume even though it may contain active nodes. If this option is not specified, the file system is checked to insure it has no active nodes other than its root directory.
-a
Clear all blocks on the volume. If this option is not specified, only those blocks actually in use are cleared. This option assumes the -f and -z options.
The following examples assume that the volume is in drive 1 (device /dev/od0).
Clear all index blocks (needs no options)
over -v /dev/od0
The file system is checked to insure it has no active nodes other than its root directory; if any active nodes other than the root are discovered, the volume is not cleared. An index block is created for the free space pool. This allows the volume to be mounted in the rewritable mode, that is, OFS_POOL mode.
The command "over -v /dev/od0" gives the following message when active blocks are on the volume.
over: root directory on /dev/od0 has active contents.
Force clearing all index blocks
over -fv /dev/od0
The index blocks are cleared even if the volume contains active blocks. The mode (format) is set to OFS_POOL.
Clear all active blocks to zero
over -zv /dev/od0
An all zero pattern is written in active blocks on the volume. This allows the rewritable volume to be mounted in the write-once mode, that is, OFS_WORM mode.
The command "over -zv /dev/od0" gives the following message when active blocks are on the volume.
over: root directory on /dev/od0 has active contents.
Force clearing all active blocks
over -zfv /dev/od0
The mode is set to OFS_WORM.
Clear all blocks on the volume
over -av /dev/od0
The -a option assumes the -f and -z options automatically. The mode is set to OFS_WORM.
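The clearing cases above differ only in the switch combination passed to over. The sketch below maps a desired result to the switches from this section; over_flags is a hypothetical helper and the command is only echoed, not executed:

```shell
# Map the desired clearing result to over(8) switches, per the examples
# in this section. over_flags is a hypothetical helper; a real script
# would run the over command instead of echoing it.
over_flags() {
    case $1 in
        pool)       echo "-v"   ;;  # clear index blocks -> OFS_POOL
        pool-force) echo "-fv"  ;;  # force, even with active nodes
        worm)       echo "-zv"  ;;  # zero active blocks -> OFS_WORM
        worm-force) echo "-zfv" ;;  # force zeroing
        all)        echo "-av"  ;;  # clear every block; implies -f -z
    esac
}
echo "over $(over_flags worm) /dev/od0"    # over -zv /dev/od0
```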
Optical volumes must be mounted using the jmount or omount utility the same way UNIX File Systems are mounted with the standard mount utility. Most of the option switches recognized by mount are available in omount as well.
In order to mount optical volumes, two utilities, jmount and omount, are provided. They are explained briefly below. For more details, refer to Chapter 6, Utility Descriptions.
Note
Do not use the mount command to mount optical volumes.
Do not specify OD devices with omount; use the OF device.
Without /etc/ofstab file
The following discussion assumes that the administrator did not create the /etc/ofstab file. When the UNIX operating system boots, OSMS is started automatically by executing the startup script /sbin/rc2.d/S20ofs. The file S20ofs is a soft link to the script file /sbin/init.d/ofs. This script starts one ofsd daemon for each drive in the system. After starting the daemons, the script mounts all the volumes in the jukebox. The volumes are mounted under the directory /jb0 for jukebox 0, /jb1 for jukebox 1, and so on. Each platter has two sides, named a and b, and each side is treated as a volume. One subdirectory is created for each volume under the directory /jb0 for jukebox 0. The subdirectory names are formed from the jukebox slot number and the side name. For example, if slot 1 has a platter, then two subdirectories named 1a and 1b will be created under /jb0 and the volumes will be mounted there.
To find out which volumes are mounted, use the mount command. A sample output of the mount command after system boot is given below:
# mount
/dev/rz2a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/rz2g on /usr type ufs (rw)
/dev/jb0/0a on /jb0/0a type pcfs (rw)
/dev/jb0/0b on /jb0/0b type pcfs (rw)
/dev/jb0/1a on /jb0/1a type pcfs (rw)
/dev/jb0/1b on /jb0/1b type pcfs (rw)
The above output shows that slot 0 and slot 1 have platters.
With /etc/ofstab file
The administrator can make OSMS mount selected optical volumes automatically at system boot time by creating the file /etc/ofstab. The information about the volumes to be mounted is specified in this file. When the ofstab file exists, the startup script /sbin/init.d/ofs mounts only the volumes specified in the file. If the jukebox contains more platters than are specified in the ofstab, the excess platters are not mounted.
A sample ofstab file is shown below:
/dev/jb0/0a /jb0/0a
/dev/jb0/0b /jb0/0b
/dev/jb0/1a /jb0/1a
/dev/jb0/1b /jb0/1b
For more details, refer to the manpages for ofstab and to Chapter 6, Utility Descriptions.
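The startup script's use of ofstab can be sketched as a read loop over device/mount-point pairs. In the sketch below, emit_mounts is a hypothetical helper, the sample file stands in for /etc/ofstab, and a real script would execute omount instead of echoing it:

```shell
# Sketch: turn /etc/ofstab entries (device and mount point per line, as
# shown above) into omount commands. emit_mounts is a hypothetical
# helper; the sample file stands in for /etc/ofstab.
cat > /tmp/ofstab.sample <<'EOF'
/dev/jb0/0a /jb0/0a
/dev/jb0/0b /jb0/0b
EOF
emit_mounts() {
    while read dev mnt; do
        [ -z "$dev" ] && continue   # skip blank lines
        echo "omount $dev $mnt"     # a real script would execute omount
    done < "$1"
}
emit_mounts /tmp/ofstab.sample
```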
Volumes can also be mounted manually by using the utilities jmount and omount, which are explained below.
Mounting Using jmount Utility
jmount is a script file that can be used to mount all volumes in a jukebox. For example, if the system has one jukebox with platters in slots 0 and 1, all four volumes (0a, 0b, 1a, and 1b) can be mounted under the directory /jb0 by issuing the following command:
jmount /jb0
Issuing the mount command after the above jmount command gives the following output:
# mount
/dev/rz2a on / type ufs (rw)
/proc on /proc type procfs (rw)
/dev/rz2g on /usr type ufs (rw)
/dev/jb0/0a on /jb0/0a type pcfs (rw)
/dev/jb0/0b on /jb0/0b type pcfs (rw)
/dev/jb0/1a on /jb0/1a type pcfs (rw)
/dev/jb0/1b on /jb0/1b type pcfs (rw)
For more information regarding jmount, see Chapter 6.
Mounting with omount Utility
The utility omount can also be used to mount volumes. The jb pseudo device and the mount point are specified as arguments to the omount utility. The following examples show the use of omount:
omount /dev/jb0/0a /jb0/0a
omount /dev/jb0/1b /jb0/1b
For more information regarding omount, see Chapter 6.
Once a volume has been mounted, you may access the optical disk by using any of the standard UNIX utilities or system calls from your own programs to read from and write to the optical volume.
Note
You have to be a superuser (root) in order to mount the optical volumes.
It is important that the optical disk be properly unmounted if any write operations have been performed. This is because directory and file index information is held in a temporary buffer to optimize performance. This information is written to the optical disk when the disk is unmounted. Unmount an optical disk with the standard umount utility just as you would with any UNIX File System.
Each volume can be separately unmounted. For example, to unmount the volume 1a mounted at /jb0/1a, specify the umount command in one of the two following ways:
umount /jb0/1a
umount /dev/jb0/1a
All optical volumes can be unmounted by the following command:
umount -At pcfs
Refer to man pages on umount for more details on unmounting volumes.
Note
You have to be a superuser (root) in order to unmount the optical volumes.
Restrictions and Unsupported Utilities
This chapter describes the restrictions and unsupported utilities that apply to the current release of OSMS under the DIGITAL UNIX operating system.
OSMS does not maintain file access times on read access to WORM volumes. This is done to avoid having to update the file index whenever a file is accessed, which would be prohibitively wasteful of optical disk space. Access time is only updated in conjunction with other modifications.
The current release of OSMS maintains accurate link counts only for directories on rewritable volumes. This is done primarily as an optimization to avoid having to update the file index whenever a link is made to a file. File link counts on WORM volumes may be updated as an option in a future release.
The current release of OSMS does not support the use of disk quotas on an Optical File System (OFS). Disk quota support may be added in a future release.
The current release of OSMS does not furnish a method to attach any sort of label to an Optical File System (OFS). Some form of labeling may be provided in a future release.
The following subsections describe the DIGITAL UNIX utilities that cannot be used with OSMS.
The format utility operates directly on the raw disk device, performing various functions such as integrity checks. These functions are inappropriate for optical volumes because they are normally preformatted at the factory, so no formatting is necessary.
The mkfs utility creates a UNIX File System directly on the raw disk device and is not applicable to optical volumes. Erasable optical volumes should be cleared with the over utility before mounting. WORM optical volumes do not require any advance preparation to be used for file storage. Insert a virgin WORM volume in the drive and mount it with the jmount or omount utility.
The tunefs utility modifies various parameters of a UNIX File System, which are maintained in a special structure called the superblock for which there is no analog on OSMS optical volumes. OSMS operating parameters are specified as command line switches.
The fsck utility verifies a UNIX File System directly on the raw disk device, bypassing the file system completely. If you suspect there is a problem with the structure of OSMS optical volumes, the ofsck utility should be used to diagnose and correct the problem.
DIGITAL UNIX disk quotas are not supported under OSMS. Therefore, the following disk quota commands are not supported by the OSMS software: edquota(8), quotacheck(8), quotaon(8)/quotaoff(8), and repquota(8). The functions quotactl(2) and setquota(2) are also unsupported.
This utility is used to create, modify, and read a disk label on a disk pack. The disklabel contains partition information. At this time, each side of an optical platter cannot be divided into multiple disk partitions. The disklabel utility is associated with the UNIX File System; the Optical File System in OSMS does not support the disklabel on optical platters.
The DIGITAL UNIX vet utility does not support the optical devices used by OSMS.
The diskx utility is associated with the UNIX File System; the Optical File System in OSMS does not support diskx on optical platters.
This section identifies the known restrictions of OSMS V1.5 software on DIGITAL UNIX 3.x.
The first action requested in a jukebox following a bus reset or magazine access returns success without selecting the correct volume.
When a SCSI bus reset occurs due to a drive command timeout in a jukebox, the reset recovery logic in the cam_optical driver fails to complete, leaving the drive inaccessible.
The current version of the Optical File System does NOT support the dump and restore commands. The tar command can be used to back up files instead.
The current version of the Optical File System borrows file system index 4 and therefore appears to users as pcfs. This prevents you from mounting a pcfs file system on the same system on which OSMS is running.
It is not possible to run an executable file from an optical disk.
If RV700 jukeboxes are taken offline during a copy command, the system will hang and require a reboot.
Executing omount on an empty slot completes successfully, but when the platter is accessed, a "not found" error is reported.
If the same device is configured and attached by several drivers, the system will crash.
When a user process hangs, the OSMS software will not change platters in the optical drive. When this problem occurs, all users accessing any other platter in the jukebox are blocked. At this time, there is no workaround for this problem other than rebooting the system.
This chapter describes each of the OSMS utilities and their related command option switches. Table 6-1 lists and describes the OSMS utilities.
Table 6-1 Utility Descriptions
Utility | Description |
Jukebox Control Utility (jbc) | Interactively displays jukebox status and initiates jukebox functions. |
Jukebox Daemon (jbd) | Performs operations requested by the OFS driver or the jukebox control utility on the jukebox connected to the serial port. It is normally initiated from the rc2 command script at system startup time. |
Jukebox Mount Utility (jmount) | Command script supplied with OSMS to mount all volumes in the jukebox under a common directory. It invokes jbc to determine the names of all volumes present in the jukebox, creates mount-point directories as required, and invokes omount to mount the volumes. |
Jukebox Talk Utility (jbtalk) | Communicates directly through the specified port. It is used by maintenance personnel for jukebox alignment and general maintenance. |
Optical File System Check Utility (ofsck) | Supplied with OSMS to verify the integrity of optical disk file systems. It is analogous to the standard UNIX file system check utility (fsck) but performs its functions only on devices or files containing instances of the OFS. |
Optical File System Daemon (ofsd) | Performs Virtual File System operations supplied to it through the OFS driver on optical disk volumes accessed through the optical driver (OD). It is normally initiated from the rc2 command script at system startup time. |
Optical File System File Access (ofile) | Extracts the content of files in an optical file system. |
Optical File System Find Utility (ofind) | Supplied with OSMS to locate missing or deleted files on optical disk volumes. |
Optical File System Index Map Utility (omap) | Extracts or constructs index records in an optical file system. |
Optical File System Link Utility (olink) | Command script supplied with OSMS to restore missing or deleted files on optical disk volumes. The file system to be examined is selected by specifying the mount point directory, and the files to be restored are specified by full pathnames relative to the mount point. |
Optical File System Mounting Table (ofstab) | Contains entries for optical file systems to mount using the omount command. |
Optical File System Mount Utility (omount) | Supplied with OSMS to associate an optical disk file system with a directory node in another mounted file system, called the mount point. |
Optical File System Over Utility (over) | Employed to clear a rewritable optical disk volume for use with OSMS. OSMS employs a similar file structure on WORM and rewritable volumes. |
Optical File System Update Daemon (update) | Enhanced version of the standard UNIX File System sync utility, update. The version supplied with OSMS is the same as the standard version with the added facility of specifying the file system sync interval in seconds. |
Synopsis
jbc unit [command]
jbc is a jukebox manipulation tool used to display jukebox status and initiate jukebox functions. unit selects the jukebox by name or number. If the command used to invoke jbc has a numeric component, that component serves as a unit number to select the jukebox in lieu of unit. The default jukebox name is /dev/jb. command optionally specifies a jukebox operation to be performed.
Description
The jukebox control utility (jbc) interactively manipulates jukebox resources (volumes, slots, and drives) in response to user requests. It is normally initiated by an administrator to perform administrative functions (inserting or removing volumes), or to assist in resolving certain error conditions on jukeboxes operated through a serial control port. Whenever an error occurs on a jukebox, the user task that encountered the error is suspended until jbc is run to correct the problem.
When jbc exits or the error log is cleared, user tasks suspended due to jukebox errors are restarted, and the jukebox requests that caused them to be suspended are repeated. This level of control is not available on jukeboxes operated directly through the SCSI bus.
With the move command, jukebox resources (volumes, slots, and devices) are specified by number and type. Types a and b designate the A or B side of optical volumes, type d designates devices, the access port is device zero, and all other devices are optical disk drives.
With the name and show commands, the resource type may be volumes, slots, or devices. Command elements may be abbreviated to a single letter, and spaces between command elements are optional.
Options
If command is supplied, jbc will perform only the specified operation and terminate. If command is omitted, jbc accepts commands from its standard input and displays responses on its standard output. jbc displays its name and colon (:) when a command may be entered. The jbc utility accepts the following control option switches:
clear:
Clear the error log file and restart any operations suspended due to errors.
delay <seconds>:
Display or change the time, in seconds, for the access port time-out.
eject <slot>:
Transfer a volume from the resource designated by slot to the access port. Equivalent to: move <slot> 0d.
flush:
Return volumes held in the carrier to their "home" slots.
get from to:
Fetch a volume from the resource designated by from into the carrier designated by to.
help:
Display a brief summary of valid commands.
insert <slot>:
Transfer a volume from the access port to the resource designated by slot. Equivalent to: move 0d <slot>.
list:
Display the contents of the jukebox error log /i/log.
move <from> <to>:
Transfer a volume from the resource designated by from to the resource designated by to (or its "home" slot, if to is omitted).
move <drive>:
Move a volume from a drive to its home slot (the slot it last occupied before being moved to a drive).
name <type>:
Display the names of jukebox resources of the specified type.
put <from> <to>:
Store the volume in the carrier designated by from into the resource designated by to.
option <mask>:
Display or change the option mask. Individual bits in the option mask manipulate special features of the jukebox control utility or daemon. Most option bits currently activate diagnostic messages in the daemon.
quit:
Perform the clear function, then terminate jbc.
reset:
Initialize the jukebox status map.
show <type>:
Display the status of jukebox resources of the specified type.
try <times>:
Display or change the retry count.
view:
Display the status of all jukebox resources (devices and slots).
The move command is the most powerful and versatile command. It accepts one or two resources and transfers a volume from one resource to another. Some jukeboxes operated through the SCSI bus may not accept all combinations of resources (refer to the help command list).
Depending on the types of resources, the following actions may occur:
If only one resource is specified, it must be a drive to which a volume had been moved at some earlier time. This volume is returned to the slot it last occupied before it was moved to a drive. This slot is called its home slot.
If two resources are specified and the second is a drive containing a volume, the volume in this drive is also returned to its home slot.
If two resources are specified and either the second is a slot containing a volume or the first is an empty slot or drive, an error is indicated and no move occurs.
If the first resource is a drive containing a volume and the second is an empty drive, the volume is moved to the second drive without changing its home slot.
If the first resource is a slot or drive containing a volume and the second resource is an empty slot, the volume is moved to the empty slot, which becomes its home slot.
If the second resource is a drive and the first is the home slot of a volume in a drive, the volume is moved to the specified drive without changing its home slot.
If the second resource is an empty slot and the first is the home slot of a volume in a drive, the volume is not moved, but the specified slot becomes its home slot.
If the first resource is the access port and the second resource is an empty slot, the volume inserted in the access port is moved to the specified slot.
If the second resource is the access port and the first is either a resource containing a volume or the home slot of a volume, the volume is moved to the access port.
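The home-slot bookkeeping described by the rules above can be sketched as a small simulation. This is illustrative Python only, covering a subset of the rules (the resource naming scheme and the error behavior are assumptions, not OSMS internals):

```python
# Illustrative simulation of a subset of the jbc "move" rules (not OSMS code).
# Resources are strings: "s<N>" for slots, "d<N>" for drives.

def make_state():
    return {"occ": {}, "home": {}}  # occupant per resource; home slot per volume

def move(state, frm, to=None):
    vol = state["occ"].get(frm)
    if to is None:
        # Single resource: must be a drive holding a volume; return it home.
        if vol is None or not frm.startswith("d"):
            raise ValueError("no move occurs")
        to = state["home"][vol]
    if vol is None or to in state["occ"]:
        raise ValueError("no move occurs")  # empty source or occupied target
    del state["occ"][frm]
    state["occ"][to] = vol
    if to.startswith("s"):
        state["home"][vol] = to  # an empty slot becomes the new home slot
    # A move to a drive leaves the volume's home slot unchanged.
    return state
```

The key invariant is visible in the last few lines: drives never change a volume's home slot, while an empty destination slot always becomes the new home.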
The orientation of a volume in a slot is its normal orientation. The orientation of a volume in a device is either normal or inverted depending on the type used to specify the volume when it is moved to a device. When a volume that is:
Designated with type a is moved to a device, its orientation remains normal
Designated with type b is moved to a device, its orientation becomes inverted
Moved from one device to another, its orientation is preserved
Moved from a device to its home slot, its orientation is restored
Moved from a device to a slot designated with type a, its orientation remains unchanged
Moved from a device to a slot designated with type b, its orientation becomes inverted
Moved between slots designated with different types, its orientation becomes inverted
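The orientation rules above reduce to three small cases, sketched here in illustrative Python (the "normal"/"inverted" labels and function names are ours, not OSMS identifiers):

```python
# Sketch of the volume orientation rules (illustration only).
# Restoration to normal when returning to the home slot is omitted for brevity.

def invert(o):
    return "inverted" if o == "normal" else "normal"

def to_device(side):
    # Moving to a device: type "a" keeps the volume normal, type "b" inverts it.
    return "normal" if side == "a" else "inverted"

def device_to_slot(current, side):
    # Moving to a slot: type "a" leaves orientation unchanged, "b" inverts it.
    return current if side == "a" else invert(current)

def slot_to_slot(current, from_side, to_side):
    # Moving between slots designated with different types inverts.
    return current if from_side == to_side else invert(current)
```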
jbc Utility Examples
The jukebox utility, jbc, can be used either interactively or as a one-time command. In an interactive session, once the utility is invoked, jbc displays the prompt "jbc: " and waits for a command. When the user enters a command, jbc executes it and displays the prompt again. The session continues until the user quits the utility, which returns control to the UNIX shell prompt. The examples below illustrate both forms of using jbc.
Use of jbc in an Interactive Session
The following examples illustrate how to invoke jbc to get into interactive mode and then issue a series of jbc commands one at a time.
Invoking jbc in an interactive session
To invoke jbc in an interactive session for jukebox 0
jbc 0
Note: All jbc commands issued from now until quit will be for jukebox 0 only.
Inserting a disk
To insert a disk into slot 0
i 0
Note that you must place the disk in the jukebox mailslot (the slot used to insert the disk) before issuing the insert command. In some jukeboxes, you may need to use the jukebox console buttons to insert a disk in the mailslot. Without a disk in the mailslot, the jbc utility displays an "i/o error" message.
Ejecting a disk
To eject the disk from slot 0
e 0
Viewing jukebox status
To view the status of the jukebox (jukebox 0, since jbc 0 was used to enter the interactive session)
v
Moving disks
To move side a of the disk in slot 0 to drive 1
m 0a 1d
To move disk from drive 1 to its home slot (slot occupied by the disk before moving it into the drive)
m 1d
To move disk from drive 1 to slot 5 with side a
m 1d 5a
Displaying jukebox resources
To display selected jukebox resource names, such as drives, slots, and volumes:
(1) display drive names: n d
(2) display slot names: n s
(3) display volume names: n v
To display the status of selected jukebox resources, such as drives, slots, and volumes:
(1) display drive status: s d
(2) display slot status: s s
(3) display volume status: s v
Listing jbc commands
To list the set of jbc commands, use either h (help) or ?
(1) ?
(2) h
Exiting from an interactive session
To exit from jbc interactive session
q
Use of jbc in a Non-interactive Session
To use the jbc utility in non-interactive form, all the information must be specified as arguments to the jbc command. Several examples are shown below using jukebox 0.
Inserting a disk
To insert a disk into slot 0
jbc 0 i 0
Note that you must place the disk in the jukebox mailslot before issuing the insert command. In some jukeboxes, you may need to use the jukebox console buttons to insert a disk in the mailslot.
Ejecting a disk
To eject the disk from slot 0
jbc 0 e 0
Viewing jukebox status
To view the status of jukebox 0
jbc 0 v
Moving disks
To move side a of the disk in slot 0 to drive 1
jbc 0 m 0a 1d
To move disk from drive 1 to its home slot (slot occupied by the disk before moving it into the drive)
jbc 0 m 1d
To move disk from drive 1 to slot 5 with side a
jbc 0 m 1d 5a
Displaying jukebox resources
To display selected jukebox resource names, such as drives, slots, and volumes:
(1) display drive names: jbc 0 n d
(2) display slot names: jbc 0 n s
(3) display volume names: jbc 0 n v
To display the status of selected jukebox resources, such as drives, slots, and volumes:
(1) display drive status: jbc 0 s d
(2) display slot status: jbc 0 s s
(3) display volume status: jbc 0 s v
Listing jbc commands
To list the set of jbc commands, use either h (help) or ?
(1) jbc 0 ?
(2) jbc 0 h
jbc Command Illustration with Output (Non-interactive)
The jukebox RW525 has been used for the jbc command illustrations shown below.
The use of the "view" option in the jbc command and its output is shown below. Notice that all 16 slots (numbered 0 to 15) are empty (indicated by -----) and that the one drive (numbered 1) is also empty.
# jbc 0 v
jb0 map: --- empty >>> enter <<< eject +++ full ->drive<- busy ??? fault
drive ..1..
port ----- -----
slot ..0.. ..1.. ..2.. ..3.. ..4.. ..5.. ..6.. ..7.. ..8.. ..9..
..0: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
.10: ----- ----- ----- ----- ----- -----
The command to insert a disk into slot 0 using the "insert" option and the output of "view" after the insertion are shown below. Notice that slot 0 is full (indicated by +++++). Also note that the disk was put in the mailslot before the jbc command was issued.
# jbc 0 i 0
# jbc 0 v
jb0 map: --- empty >>> enter <<< eject +++ full ->drive<- busy ??? fault
drive ..1..
port ----- -----
slot ..0.. ..1.. ..2.. ..3.. ..4.. ..5.. ..6.. ..7.. ..8.. ..9..
..0: +++++ ----- ----- ----- ----- ----- ----- ----- ----- -----
.10: ----- ----- ----- ----- ----- -----
The command to move side 'a' of the disk in slot 0 to drive 1 and the output of "view" after the move are shown below. Notice that drive 1 shows ++0A+, which means side 'a' of the disk from slot 0 is in drive 1.
#jbc 0 m 0a 1d
#jbc 0 v
jb0 map: --- empty >>> enter <<< eject +++ full ->drive<- busy ??? fault
drive ..1..
port ----- ++0A+
slot ..0.. ..1.. ..2.. ..3.. ..4.. ..5.. ..6.. ..7.. ..8.. ..9..
..0: ->1<- ----- ----- ----- ----- ----- ----- ----- ----- -----
.10: ----- ----- ----- ----- ----- -----
The use of the "name" option to display drive, slot and volume information and their output are shown below:
# jbc 0 n d
1
The number 1 above indicates one drive, numbered 1.
# jbc 0 n s
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
The above output shows 16 slots in the jukebox, numbered from 0 to 15.
# jbc 0 n v
0
The 0 in the above output shows volume 0.
The use of the "show" status option (indicated by s) for drives (d) and its output are shown below. The output shows that drive 1 contains the volume from slot 0, side 'a', and that the device node for drive 1 is /dev/od0.
# jbc 0 s d
status:
drive 1 slot 0 side A node /dev/od0
The use of the "show" status option (indicated by s) for slots (s) and its output are shown below. The output shows the status of all the slots. Slots 1 through 15 are empty. The volume from slot 0 is in drive 1.
# jbc 0 s s
status:
slot 0 drive 1 slot 1 empty slot 2 empty slot 3 empty
slot 4 empty slot 5 empty slot 6 empty slot 7 empty
slot 8 empty slot 9 empty slot 10 empty slot 11 empty
slot 12 empty slot 13 empty slot 14 empty slot 15 empty
The use of the "show" status option (indicated by s) for volumes (v) and its output are shown below. The output shows that the volume from slot 0, side 'a', is in drive 1 and that the device node for drive 1 is /dev/od0.
# jbc 0 s v
status:
drive 1 slot 0 side A node /dev/od0
The command to move the volume from drive 1 to slot 0 with side 'a' and the output of view after moving are shown below.
# jbc 0 m 1d 0a
# jbc 0 v
jb0 map: --- empty >>> enter <<< eject +++ full ->drive<- busy ??? fault
drive ..1..
port ----- -----
slot ..0.. ..1.. ..2.. ..3.. ..4.. ..5.. ..6.. ..7.. ..8.. ..9..
..0: +++++ ----- ----- ----- ----- ----- ----- ----- ----- -----
.10: ----- ----- ----- ----- ----- -----
The command to eject the disk from slot 0 and the output of "view" after the eject are shown below. Notice that when the eject operation completes, the disk appears at the mailslot. Also notice that the view output shows that the jukebox slots and drive are empty.
# jbc 0 e 0
#jbc 0 v
jb0 map: --- empty >>> enter <<< eject +++ full ->drive<- busy ??? fault
drive ..1..
port <<<<< -----
slot ..0.. ..1.. ..2.. ..3.. ..4.. ..5.. ..6.. ..7.. ..8.. ..9..
..0: ----- ----- ----- ----- ----- ----- ----- ----- ----- -----
.10: ----- ----- ----- ----- ----- -----
Synopsis
jbd unit [port]
jbd accepts jukebox operations from the OFS driver or the jbc utility to be performed on an optical disk jukebox controlled by means of a serial I/O channel. unit selects the jukebox by name or number. If the command used to invoke jbd has a numeric component, that component serves as a unit number to select the jukebox in lieu of unit. The default jukebox name is /dev/jb. port specifies the serial I/O port used to communicate with the jukebox. The default communications port is /dev/tty00.
Description
The jukebox daemon (jbd) is furnished only for jukeboxes operated through a serial control port and performs operations requested by the OFS driver or the jukebox control utility on the jukebox connected to the specified port. It is normally initiated from the rc2 command script at system startup time.
Jukebox Mount Utility (jmount)
Synopsis
jmount [unit] [-a] [options] prefix|master
jmount is a command script to mount all optical volumes in a jukebox at standardized points in a master directory. unit selects the jukebox by name or number. If master has a numeric component, that component serves as a unit number to select the jukebox in lieu of unit. Otherwise, jmount appends the number of each jukebox device node found in the /dev directory to prefix to form a master directory name for each jukebox.
jmount interrogates the jukebox control utility (jbc) to determine the slot numbers of all volumes installed, creates a mount-point directory for each volume, named by slot number and side (a or b), in the master directory for the jukebox, and mounts each volume on its corresponding directory using the optical disk file system mount utility (omount). Any options not recognized by jmount are passed on as parameters to omount.
Description
The jukebox mount utility (jmount) is a command script supplied with OSMS to mount all volumes in the jukebox under a common directory. It invokes jbc to determine the names of all volumes present in the jukebox, creates mount-point directories as required, and invokes omount to mount the volumes. Mount-point directories are named by slot number and side (a or b) of the volumes. Any additional parameters supplied are passed to omount as options.
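The mount loop jmount performs can be sketched as follows. This is illustrative Python only: the mount-point naming follows the description above, but the exact omount argument order and the helper name are assumptions, and the slot/side list stands in for what the real script obtains by interrogating jbc:

```python
# Sketch of jmount's mount loop (illustration only, not the shipped script).

def mount_commands(master, volumes, options=""):
    """volumes: list of (slot, side) pairs, e.g. [(0, "a"), (0, "b")]."""
    cmds = []
    for slot, side in volumes:
        # Mount-point directory is named by slot number and side.
        point = "%s/%d%s" % (master, slot, side)
        cmds.append("mkdir -p " + point)
        # Unrecognized options pass through to omount unchanged.
        parts = ["omount"] + ([options] if options else []) + [point]
        cmds.append(" ".join(parts))
    return cmds
```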
Options
The jmount utility accepts the following control option switches:
-a (append):
Write omount parameter strings on the standard output rather than passing them to omount. This option is useful for constructing the /etc/ofstab file.
See Also
jbc(8), jvm(8), omount(8), mount(8)
Synopsis
jbtalk [port]
Description
The jukebox talk utility (jbtalk) is useful only for jukeboxes operated through a serial control port and communicates directly with the jukebox through the specified port. Effective use of the utility requires knowledge of the native command structure accepted by the jukebox. The utility is typically used by maintenance personnel for jukebox alignment and general maintenance.
Optical File System Check Utility (ofsck)
Synopsis
ofsck [-is] [-b size] [-f size] [-l length] [-q quota] [-r [date]] [-volume] od [... od]
Description
The OFS check utility (ofsck) is supplied with OSMS to verify the integrity of optical disk file systems. It is analogous to the standard UNIX file system check utility (fsck) but performs its functions only on devices or files containing instances of the OFS.
The file system to be checked is selected by specifying the appropriate optical driver node, such as: /dev/od0. Optical file systems should not be mounted while being checked.
Options
The ofsck utility accepts the following control option switches:
-b <size> (block size):
Set the file system block size in bytes to the following numeric value. This number must be a power of 2 not less than 256. The true block size is sensed automatically whenever an optical volume is mounted, so this option is used only for testing. The default block size is 512 bytes.
-c (confirm):
Infer an affirmative response to every inquiry. Attempt to correct every fault discovered, if possible. Without this option, ofsck inquires interactively whether to perform each repair on the file system.
-f <size> (fake):
Emulate an optical file system in an ordinary file. The following numeric value is the file size in megabytes. od selects the file to be used, which must already exist. The default file size is 1 megabyte.
-i (invert):
Reverse the byte order in numeric values when writing file index and directory entries. This option may be specified to maintain consistency on file systems imported from machines employing a different byte order in numeric values. It only controls the byte order written; index records and directories recorded in either byte order will always be interpreted correctly.
-l <length> (buffer length):
Set the file buffer length in blocks to the following numeric value. To optimize access latency, physical data transfers from the optical storage media are accumulated and performed several blocks at a time. This parameter may be varied to take optimal advantage of the characteristics of various optical disk drives. The default buffer length is 96 kilobytes.
-n (no):
Infer a negative response to every inquiry. Provided for compatibility with fsck; same as the read-only switch.
-q <quota> (fault quota):
Set the maximum consecutive error count to the following numeric value. This is the maximum number of contiguous errors accepted without producing a fatal I/O fault. If this value is zero, no error quota is imposed. The default error quota is 24.
-r (read-only):
Check the file system read only. Report but do not attempt to correct any faults. Physically write-protected volumes are always checked read-only whether this option is specified or not.
The following value, if supplied, represents a time in the past to which the file system should regress. All files and directories on the volume will appear exactly as they were at that instant in time. More recent files and changes will disappear, and deleted or altered files will be restored to their previous state. The regression time is specified as a single number in any of the forms:
DD
MMDD
MMDDYY
MMDDhhmm
MMDDYYhhmm
where YY represents the year, MM the month, DD the day, hh the hour, and mm the minute. An initial zero may be omitted. If the month or year is not specified, the most recent matching date is selected. If the hour or minute is not specified, zero is assumed.
-s (slow-check):
Access every index block while checking; do not refer to the index list. The normal fast sequence locates file index records using an index map. If any record cannot be read, the corresponding file cannot be accessed. Using the slow option allows an earlier version of such an index record to be identified.
-y (yes):
Infer a positive response to every inquiry. Provided for compatibility with fsck; same as the confirm switch.
-#:
Specify the volume reference number. This informs ofsck where the volume was previously mounted so it can find the volume index map and cache files. The volume reference number is the minor device number of the OFS device node used to mount the volume.
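Because an initial zero may be omitted, the regression-time forms listed under the -r switch can be disambiguated by their length alone. The following sketch shows one way to do this; it is illustrative Python, not ofsck's actual parser:

```python
# Sketch: disambiguate the -r regression-time forms by digit count.
# Forms: DD, MMDD, MMDDYY, MMDDhhmm, MMDDYYhhmm (leading zero may be omitted).

def parse_regress(s):
    """Return (month, day, year, hour, minute); 0 means 'not specified'."""
    n = len(s)
    if n <= 2:                        # DD
        return (0, int(s), 0, 0, 0)
    if n <= 4:                        # MMDD
        s = s.zfill(4)
        return (int(s[0:2]), int(s[2:4]), 0, 0, 0)
    if n <= 6:                        # MMDDYY
        s = s.zfill(6)
        return (int(s[0:2]), int(s[2:4]), int(s[4:6]), 0, 0)
    if n <= 8:                        # MMDDhhmm
        s = s.zfill(8)
        return (int(s[0:2]), int(s[2:4]), 0, int(s[4:6]), int(s[6:8]))
    s = s.zfill(10)                   # MMDDYYhhmm
    return (int(s[0:2]), int(s[2:4]), int(s[4:6]), int(s[6:8]), int(s[8:10]))
```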
Examples
To verify an optical drive: ofsck /dev/od0
To verify a fake file system: ofsck -f20 file
Files
/ofs/#<volume> optical disk volume index map.
/ofs/$<volume> optical disk volume index cache.
See Also
ofs(4), ofsd(8), omount(8)
Optical File System Daemon (ofsd)
Synopsis
ofsd [-cikrsuz] [-b size] [-f size] [-h hold] [-l length] [-p pad] [-q quota] [-t time] ofs od
Description
The Optical File System daemon (ofsd) performs Virtual File System operations supplied to it through the OFS driver on optical disk volumes accessed through the optical driver (OD). It is normally initiated from the rc2 command script at system startup time, and serves to correlate an optical disk drive with the appropriate OFS driver node. It must be active before omount is invoked to mount an Optical File System.
The ofsd uses a private cache file to buffer index blocks for active files on an optical volume. Each record in this file is the length of a block on the file system, and one record is needed for each file that has been altered but not yet flushed (that is, files currently open for writing and directories that have recently been modified). This file is only used to restore index blocks when remounting a volume after a system crash.
When an optical volume is mounted, the file index blocks are scanned and an index map is constructed showing the location of the index block for each active file. The index map is updated as new file index blocks are generated. When the volume is demounted, the index map is preserved in a map file for the volume. If this file is present when the volume is remounted, the index map is fetched from the map file to expedite the mount process.
Options
The ofsd accepts the following control option switches:
-b <size> (block size):
Set the file system block size in bytes to the following numeric value. This number must be a power of 2 not less than 128. The true block size is sensed automatically whenever an optical volume is mounted, so this option is used only for testing. The default block size is 512 bytes.
-c (consistency):
Ensure file consistency by flushing index records to disk when files are closed. If this option is not specified, index records are held in memory for a time after files are closed to facilitate attribute updates. If this option is specified, files written by archive programs such as tar (that alter file attributes after files are closed) will have redundant index records.
-f <size> (fake):
Emulate an optical file system in an ordinary file. The following numeric value is the file size in megabytes. od selects the file to be used, which must already exist. The default file size is 1 megabyte.
-h <time> (hold time):
Set the drive hold time limit in seconds to the following numeric value. To optimize access to resources, the user may limit the length of time each volume may remain active in a drive while other volumes are awaiting access. Once this period has elapsed, if no updates are pending, the volume may be removed from the drive and placed at the end of the queue of waiting volumes. If a value of zero is supplied, no hold time limit is imposed. The default hold time is 5 minutes.
-i (invert):
Reverse the byte order in numeric values when writing file index and directory records. This option may be specified to maintain consistency on file systems imported from machines employing a different byte order in numeric values. It only controls the byte order written; index records and directories recorded in either byte order will always be interpreted correctly.
-l <length> (buffer length):
Set the file buffer length in blocks to the following numeric value. To optimize access latency, physical data transfers to and from the optical storage media are accumulated and performed several blocks at a time. This parameter may be varied to take optimal advantage of the characteristics of various optical disk drives. The default buffer length is 96 kilobytes.
-p <pad> (padding):
Set the minimum free-space allowance in blocks per thousand to the following numeric value. If less than this amount of space remains on the volume, only the super-user may create files. The default padding allowance is 1 block per thousand.
-q <quota> (fault quota):
Set the maximum consecutive error count to the following numeric value (maximum number of contiguous errors accepted without producing a fatal I/O fault). If this value is zero, no error quota is imposed. The default error quota is 24.
-r (read-only):
Mount the file system read only. Physically write-protected volumes are always mounted read only.
-s (slow-mount):
Access every index block while mounting; do not refer to the index list. The normal fast-mount sequence locates file index records using an index map. If any record cannot be read, the corresponding file cannot be accessed. Using the slow-mount option allows a previous version of such a file index record to be located.
-t <time> (flush time):
Set the buffer flush time in seconds to the following numeric value. Due to the write-once nature of optical file storage media, ofsd attempts to maximize optical media utilization by keeping recent updates to a file in memory until the file is closed or flushed. However, to minimize memory usage and to ensure that files remain reasonably current, a memory residence time limit is observed. If a file on an optical file system is not accessed within this time, pending updates will be posted to the optical medium and the memory buffer space occupied will be released for other uses. The default buffer flush time is one minute. If a value of zero is supplied, no time limit is imposed, and file updates may remain pending indefinitely, or until the volume is demounted. If many files will be written within a short interval, this value should be minimized to conserve space in the index cache file.
-z (zero):
Recognize zero blocks as empty space and record zero regions in files as unmapped regions. Conserve space by not replacing regions with identical contents and by representing empty (zero) regions as gaps in the file index map, since such gaps appear as empty regions when retrieved. Checking for identical regions entails reading before writing, which imposes a performance penalty unless the file index map is erased (by opening with the TRUNCATE option) before writing.
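The zero-block detection implied by the -z option can be sketched as a scan that groups consecutive all-zero blocks into runs, which would then be left unmapped in the index. This is an illustrative simplification (the 512-byte block size and the run representation are assumptions, not ofsd's data structures):

```python
# Sketch: find runs of all-zero blocks in a buffer (illustration only).

BLOCK = 512  # assumed logical block size

def zero_regions(data):
    """Return (first_block, block_count) runs of wholly zero blocks in data."""
    runs, start, count = [], None, 0
    nblocks = (len(data) + BLOCK - 1) // BLOCK
    for i in range(nblocks):
        block = data[i * BLOCK:(i + 1) * BLOCK]
        if block.count(0) == len(block):   # wholly zero: candidate for a gap
            if start is None:
                start, count = i, 0
            count += 1
        elif start is not None:
            runs.append((start, count))
            start = None
    if start is not None:
        runs.append((start, count))
    return runs
```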
Note
Some option values specified here may be altered when a file system is mounted. When a file system is demounted, these option values are restored.
Examples
To attach an optical drive: ofsd /dev/of0 /dev/od0 &
To access a fake file system: ofsd /dev/of0 -f20 odfs &
Files
/ofs/@<volume> optical disk volume index cache.
/ofs/#<volume> optical disk volume index map.
See Also
ofs(4), omount(8), umount(8)
Optical File System File Access (ofile)
Synopsis
ofile [-is] [-b size] [-f size] [-l length] [-q quota] [-r [date]] [-volume] od [filename ...]
Description
ofile extracts the content of files in an optical file system. The file system od is a character-special device node referencing the od device driver or an ordinary file containing an optical file system. If any filenames are specified, the content of the selected files is sent to the standard output. If no filenames are specified, only the volume usage statistics are presented.
Options
The ofile utility accepts the following control option switches:
-b (block size):
The following value is the logical block size in bytes. This number must be a power of two not less than 256. The true block size is sensed automatically whenever an optical volume is accessed, so this option is used only for testing. The default block size is 512 bytes.
-f (file size):
Emulate an optical file system in an ordinary file. The following value specifies the size of the file in megabytes. od names the file to be used, which must already exist. The default file size is 1 megabyte.
-i (invert):
Invert the byte order of numeric fields when writing index records and directory entries. This option may be specified to maintain consistency on file systems imported from machines employing a different byte order in numeric values. It only controls the byte order written; index records and directories recorded in either byte order will always be interpreted correctly.
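Byte-order inversion of a numeric field amounts to reversing the bytes of each fixed-width integer. A minimal sketch of the idea behind -i (Python; invert_u32 is a hypothetical name, and the 32-bit field width is an assumption for illustration):

```python
import struct

def invert_u32(value: int) -> int:
    """Reverse the byte order of a 32-bit numeric field.

    Pack big-endian, unpack little-endian: the result is the same
    integer as seen by a machine of the opposite byte order.
    """
    return struct.unpack('<I', struct.pack('>I', value))[0]
```

Applying the inversion twice restores the original value, which is why records written in either byte order can always be interpreted correctly.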
-l (buffer length):
The following value is the file buffer length in blocks. To optimize access latency, physical data transfers from the optical storage media are accumulated and performed several blocks at a time. This parameter may be varied to take optimal advantage of the characteristics of various optical disk drives. The default buffer length is 96 kilobytes.
-q (fault quota):
The following value is the maximum number of contiguous errors accepted without producing a fatal I/O fault. If this value is zero, no error quota is imposed. The default error quota is 24.
-r (regress):
The following value represents a time in the past to which the file system should regress. All files and directories on the volume will appear exactly as they were at that instant in time. More recent files and changes will disappear, and deleted or altered files will be restored to their previous state. The regression time is specified as a single number in any of the forms DD, MMDD, MMDDYY, MMDDhhmm, or MMDDYYhhmm, where YY, MM, and DD represent the year, month, and day, respectively, and hh and mm represent the hour and minute. An initial zero may be omitted. If the month or year is not specified, the most recent matching date is selected. If the hour or minute is not specified, zero is assumed.
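The five regression time forms can be distinguished purely by length once an omitted initial zero is restored. A sketch (Python; parse_regress is a hypothetical helper, and returning a dictionary of only the supplied fields is an illustrative choice):

```python
def parse_regress(spec: str) -> dict:
    """Split a regression time (DD, MMDD, MMDDYY, MMDDhhmm, MMDDYYhhmm)
    into its named two-digit fields. Fields not present are simply
    absent; the utility then selects the most recent matching date and
    assumes zero for a missing hour or minute."""
    if len(spec) % 2:                  # an initial zero may be omitted
        spec = '0' + spec
    pairs = [spec[i:i + 2] for i in range(0, len(spec), 2)]
    forms = {
        1: ('DD',),
        2: ('MM', 'DD'),
        3: ('MM', 'DD', 'YY'),
        4: ('MM', 'DD', 'hh', 'mm'),
        5: ('MM', 'DD', 'YY', 'hh', 'mm'),
    }
    return dict(zip(forms[len(pairs)], (int(p) for p in pairs)))
```
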
-s (slow check):
Examine every file index record while accessing a file system. The normal fast sequence locates file index records using an index map. If any record cannot be read, the corresponding file cannot be accessed. Using the slow option allows an earlier version of such an index record to be identified.
-# (volume number):
Specify the volume reference number. This informs ofile where the volume was previously mounted so it can find the volume index map and cache files. The volume reference number is the minor device number of the OFS device node used to mount the volume (see omount(8)).
Examples
To extract a file from an optical volume: ofile /dev/od0 file
Files
/ofs/#<volume> optical disk volume index map.
/ofs/$<volume> optical disk volume index cache.
See Also
ofs(4), ofsd(8), ofsck(8), ofind(8), omap(8), omount(8)
Optical File System Find Utility (ofind)
Synopsis
ofind [-rs] [-b size] [-f size] [-l length] [-q quota] od [filename ...]
Description
The OFS find utility (ofind) is supplied with OSMS to locate missing or deleted files on optical disk volumes. The file system to be examined is selected by specifying the appropriate optical driver node, such as /dev/od0. Given one or more pathnames to locate, ofind recursively descends the directory hierarchy, examining all previous generations of any directories encountered, and displays all file index numbers found associated with each pathname.
With no pathname, ofind interactively requests the file index number (and optional generation level) and displays information about the corresponding file.
Options
The ofind utility accepts the following control option switches:
-b <size> (block size):
Set the file system block size in bytes to the following numeric value. This number must be a power of two not less than 256. The true block size is sensed automatically whenever an optical volume is accessed, so this option is used only for testing. The default block size is 512 bytes.
-f <size> (fake):
The file system is contained in a normal file. The following numeric value is the file size in megabytes. od selects the file to be used, which must already exist. The default file size is 1 megabyte.
-l <length> (buffer length):
Set the file buffer length in blocks to the following numeric value. To optimize access latency, physical data transfers from the optical storage media are accumulated and performed several blocks at a time. This parameter may be varied to take optimal advantage of the characteristics of various optical disk drives. The default buffer length is 96 kilobytes.
-q <quota> (fault quota):
Set the maximum consecutive error count to the following numeric value. This is the number of contiguous errors accepted without producing a fatal I/O fault. If this value is zero, no error quota is imposed. The default error quota is 24.
-r (raw):
Do not read the volume index map. If this option is specified, ofind will be unable to navigate the file structure on the volume, so its only practical use is in direct interactive inspection by physical block number.
-s (slow-scan):
Access every index block while scanning; do not refer to the index list. The normal fast sequence locates file index records using an index map. If any record cannot be read, the corresponding file cannot be accessed. Using the slow option allows an earlier version of such an index record to be identified.
Note
Press Ctrl/C to exit from this utility.
Examples
To inspect an optical file system: ofind /dev/od0
To locate a missing optical file: ofind /dev/od0 file
Files
/ofs/#<volume> optical disk volume index map.
/ofs/$<volume> optical disk volume index cache.
See Also
ofs(4), ofsd(8)
Optical File System Index Map Utility (omap)
Synopsis
omap [-is] [-b size] [-f size] [-l length] [-q quota] [-r [date]] [-volume] od [filename...]
Description
The OFS index map utility (omap) extracts or constructs index records in an optical file system. The file system od is a character-special device node referencing the od device driver or an ordinary file containing an optical file system. If any filenames are specified, an index map of the selected files is generated on the standard output. If no filenames are specified and od is not writable, an index map of all files is generated. Otherwise, index map text is accepted on the standard input.
The map text comprises one line for each file mapped. The initial portion of an index map line (up to the first space) represents the path name of the file relative to the root of the file system. Special characters in the remainder of the line identify specific fields in the file index record:
space | The following value is the user code in decimal. |
. | The following value is the group code in decimal. |
, | The following value is the file update time in date format %y/%m/%d_%H:%M:%S (see date(1)). The first time value sets the create and modify times; the second time value sets only the modify time. |
* | The following value is the file mode in octal. |
= | The following value is the file size in decimal. |
; | The following value is the number of blocks in the next logical range of data blocks in the file. |
@ | The following value is the initial block number in the next logical range of data blocks in the file. |
If no address is specified for a range of logical blocks in a file, that range of logical blocks remains unmapped. Unmapped blocks within a file appear as all-zero blocks.
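Assuming the special-character fields simply concatenate after the pathname (the per-character meanings above are documented; the overall line layout is an assumption), a map line can be tokenized as in this sketch (Python; parse_map_line and the field names are illustrative):

```python
import re

# Map each introducing character to a descriptive field name.
FIELD = {' ': 'uid', '.': 'gid', ',': 'time', '*': 'mode',
         '=': 'size', ';': 'count', '@': 'block'}

def parse_map_line(line: str):
    """Split an index map line into (pathname, [(field, value), ...])."""
    path, _, rest = line.partition(' ')
    tokens = re.findall(r'([ .,*=;@])([^ .,*=;@]+)', ' ' + rest)
    return path, [(FIELD[delim], value) for delim, value in tokens]
```
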
Options
The omap utility accepts the following control option switches:
-b <size> (block size):
The following value is the logical block size in bytes. This number must be a power of two not less than 256. The true block size is sensed automatically whenever an optical volume is accessed, so this option is used only for testing. The default block size is 512 bytes.
-f <size> (file size):
Emulate an optical file system in an ordinary file. The following value specifies the size of the file in megabytes. od names the file to be used, which must already exist. The default file size is 1 megabyte.
-i (invert):
Invert the byte order of numeric fields when writing index records and directory entries. This option may be specified to maintain consistency on file systems imported from machines employing a different byte order in numeric values. It only controls the byte order written; index records and directories recorded in either byte order will always be interpreted correctly.
-l <length> (buffer length):
The following value is the file buffer length in blocks. To optimize access latency, physical data transfers from the optical storage media are accumulated and performed several blocks at a time. This parameter may be varied to take optimal advantage of the characteristics of various optical disk drives. The default buffer length is 96 kilobytes.
-q <quota> (fault quota):
The following value is the maximum number of contiguous errors accepted without producing a fatal I/O fault. If this value is zero, no error quota is imposed. The default error quota is 24.
-r (read):
A map of index records is sent to the standard output. This option is assumed if any filenames are specified, or if od is not writable. Otherwise, a map of index records is read from the standard input.
The following value, if supplied, represents a time in the past to which the file system should regress. All files and directories on the volume will appear exactly as they were at that instant in time. More recent files and changes will disappear, and deleted or altered files will be restored to their previous state. The regression time is specified as a single number in any of the forms:
DD
MMDD
MMDDYY
MMDDhhmm
MMDDYYhhmm
where YY, MM, and DD represent the year, month, and day, respectively, and hh and mm represent the hour and minute. An initial zero may be omitted. If the month or year is not specified, the most recent matching date is selected. If the hour or minute is not specified, zero is assumed.
-s (slow):
Examine every file index record while accessing a file system. The normal fast sequence locates file index records using an index map. If any record cannot be read, the corresponding file cannot be accessed. Using the slow option allows an earlier version of such an index record to be identified.
-# (volume number):
Specify the volume reference number. This informs omap where the volume was previously mounted so it can find the volume index map and cache files. The volume reference number is the minor device number of the OFS device node used to mount the volume (see omount(8)).
Examples
To produce a map of an optical file system:
omap -r /dev/od0 > map
To obtain a map of a specific optical file:
omap /dev/od0 file > map
To incorporate a fresh or revised file map:
omap /dev/od0 < map
Files
/ofs/#<volume> optical disk volume index map.
/ofs/$<volume> optical disk volume index cache.
See Also
ofs(4), ofsd(8), ofsck(8), ofind(8), ofile(8), omount(8)
Optical File System Link Utility (olink)
Synopsis
olink ofs [filename ...]
Description
The OFS link utility (olink) is a command script supplied with OSMS to restore missing or deleted files on optical disk volumes. The file system to be examined is selected by specifying the mount point directory, and the files to be restored are specified by full pathnames relative to the mount point.
If ofind returns a unique index number for any pathname, a link is inserted at the appropriate point in the directory tree referencing that index number. If several different file index numbers were found associated with a single pathname, the attributes of all are displayed and the user is asked to select which, if any, to restore.
No action is taken on a pathname if it cannot be found in any previous generation of the directory tree, if it already identifies an existing file, or if no reference to it is found.
olink makes use of a special feature of the OFS daemon permitting files to be identified by index number rather than by name. Any component of an optical file pathname beginning with an ASCII delete character code (preceded by a backslash (\) when entered from the keyboard) is interpreted as a decimal number specifying the file index. This method of file designation may be used interchangeably with conventional pathnames on optical volumes. However, this use is not applicable to standard UNIX magnetic disk file systems.
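A sketch of how such a pathname component might be recognized (Python; component_index is a hypothetical name, and the all-digits check is an illustrative assumption):

```python
DEL = '\x7f'  # ASCII delete character introducing an index-number component

def component_index(component: str):
    """Return the decimal file index if this pathname component uses the
    index-number form (leading ASCII DEL), otherwise None."""
    if component.startswith(DEL) and component[1:].isdigit():
        return int(component[1:])
    return None
```
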
Examples
To restore a missing optical file: olink /ofs file
See Also
ofind(8), ofs(4), ofsd(8)
Optical File System Mounting Table (ofstab)
The administrator can make OSMS mount selected optical volumes automatically at system boot time by creating the file /etc/ofstab. Information about the volumes to be mounted is specified in this file. When the ofstab file exists, the startup script /sbin/init.d/ofs mounts only the volumes specified in it.
Synopsis
/etc/ofstab
Description
The file /etc/ofstab contains entries for optical file systems to mount using the omount(8) command, which is normally invoked by the /etc/rc.local script at system startup time.
Each entry consists of a line of the form: device, directory, label, options
Device
is the pathname of a character-special device node referencing the ofs pseudo-device driver (see ofs(4)).
Directory
is the pathname of the directory on which to mount the file system.
Label
is an optional identifier for the file system. label should be a single word (or a string enclosed in quotes) not over 20 characters long. omount uses label to verify the identity of the file system.
Options
represents a space-separated list of mounting options in the form accepted by omount. Several options may be combined into a single word, but each option word must be introduced by a hyphen.
Options
-a (automatic)
Do not mount this file system automatically (using omount -a).
-g (group)
Create files with BSD semantics for propagation of the group ID. With this option, files inherit the group ID of the directory in which they are created, regardless of the value of the directory's set-GID bit.
-l (buffer length)
The following value is the file buffer length in blocks. To optimize access latency, physical data transfers to and from the optical storage media are accumulated and performed several blocks at a time. This parameter may be varied to take optimal advantage of the characteristics of various optical disk drives. The default buffer length is 96 kilobytes.
-m (no mount)
Do not permit other file systems to be mounted on directory nodes within this file system.
-p (padding)
The following value is the free space padding allowance in blocks per thousand. If less than this amount of space remains on the volume, only the super-user may create files. The default padding allowance is one block per thousand.
-q (error quota)
The following value is the maximum number of contiguous errors accepted without producing a fatal I/O fault. If this value is zero, no error quota is imposed. The default error quota is 24.
-r (read only)
Mount this file system read-only. Physically write-protected volumes are always mounted read-only, whether or not this option is specified.
-s (slow mount)
Access every file index record when mounting this file system. The normal fast-mount sequence locates file index records using a vector list. If any record cannot be read, the corresponding file cannot be accessed. Using the slow-mount option allows a previous version of a file index record to be identified.
-t (flush time)
The following value is the buffer flush time in seconds. Due to the write-once nature of optical file storage media, the ofs daemon attempts to maximize media utilization by keeping recent updates to a file in memory until the file is closed or flushed. However, to minimize memory usage and to ensure that files remain reasonably current, a memory residence time limit is observed. If no access occurs to a file on the file system within this time, any pending updates will be posted to the optical medium and the memory buffer space occupied will be released for other uses. The default value of this parameter is one minute. If a value of zero is supplied, no time limit is imposed, and file updates may remain pending indefinitely, or until the file system is demounted.
-x (execution)
Do not permit set-UID execution of programs on this file system.
A pound-sign (#) as the first character of a word identifies the rest of that line as a comment to be ignored by omount.
/etc/ofstab is only read by omount, and not written; it is the duty of the system administrator to properly create and maintain this file.
The order of records in /etc/ofstab is important because omount processes the file sequentially; the entry for a file system must appear before the entries for any file systems to be mounted within it.
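Assuming whitespace-separated fields (the prose lists device, directory, label, and options with commas, but the separator itself is not shown), an ofstab reader that preserves the required ordering might look like this sketch (Python; parse_ofstab and the label-versus-option heuristic are illustrative):

```python
def parse_ofstab(text: str):
    """Parse /etc/ofstab entries in order.

    A word beginning with '#' starts a comment that runs to the end of
    the line. The optional label is taken to be the first non-option
    word after the directory; words introduced by a hyphen are options.
    """
    entries = []
    for line in text.splitlines():
        words = []
        for word in line.split():
            if word.startswith('#'):
                break               # rest of line is a comment
            words.append(word)
        if not words:
            continue
        device, directory = words[0], words[1]
        label = None
        if len(words) > 2 and not words[2].startswith('-'):
            label = words[2]
        options = [w for w in words[2:] if w.startswith('-')]
        entries.append((device, directory, label, options))
    return entries
```

Because entries are returned in file order, an outer file system's entry is processed before any file system nested within it, as omount requires.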
Files
/etc/ofstab
See Also
ofs(4), omount(8)
Optical File System Mount Utility (omount)
Synopsis
omount -a[cfkuvz] [-h hold] [-l length] [-p pad] [-q quota] [-r [date]] [-t time]
omount [-cfiksuz] [-h hold] [-l length] [-p pad] [-q quota] [-r [date]] [-t time] ofs dir [tag]
omount [-cfiksuz] [-h hold] [-l length] [-p pad] [-q quota] [-r [date]] [-t time] ofs | dir | tag
Description
The OFS mount utility (omount) is supplied with OSMS to associate an optical disk file system with a directory node in another mounted file system, called the mount point. The optical disk file system is selected by specifying an OFS driver node, such as /dev/of1, and the mount point is designated by a full path name starting from the root directory, such as /usr/local/mnt.
Options
The omount utility accepts the following control option switches:
-a (mount all):
Attempt to mount all file systems described in the /etc/ofstab file. In this case, ofs and dir are taken from /etc/ofstab. File systems are not necessarily mounted in the order they appear in /etc/ofstab.
-c (consistency):
Ensure file consistency by flushing file index records to disk when files are closed. If this option is not specified, index records are held in memory for a time after files are closed to facilitate attribute updates. If this option is specified, files written by archive programs such as tar, which alter file attributes after files are closed, will have redundant index records.
-f (file):
Mount the volume as a single large file rather than as a file system. This permits direct read-only access to the entire volume as provided by the raw device driver. Attempting to access blocks on the volume that have not been recorded will produce an end-of-medium indication.
-h <time> (hold time):
Set the drive hold time limit in seconds to the following numeric value. To optimize access to resources, the user may limit the length of time each volume may remain active in a drive while other volumes are awaiting access. Once this period has elapsed, if no updates are pending, the volume may be removed from the drive and placed at the end of the queue of waiting volumes. If a value of zero is supplied, no hold time limit is imposed. The default hold time is five minutes.
-i (invert):
Reverse the byte order in numeric values when writing file index and directory records. This option may be specified to maintain consistency on file systems imported from machines employing a different byte order in numeric values. It only controls the byte order written; index records and directories recorded in either byte order will always be interpreted correctly.
-k (keep):
Keep optical file attributes in a magnetic disk file for rapid access while the volume is not immediately available. Volumes included in a model should use this option. This option is useful for access by index number only (as appears in an OFS link), since name lookup always entails direct volume access.
-l <length> (buffer length):
Set the file buffer length in blocks to the following numeric value. To optimize access latency, physical data transfers to and from the optical storage media are accumulated and performed several blocks at a time. This parameter may be varied to take optimal advantage of the characteristics of various optical disk drives. The default buffer length is 96 kilobytes.
-n (no entry):
Mount the file system without making an entry in the /etc/mtab file.
-o (options):
The following text comprises a comma-separated string of file system options from this list:
buffer=# Set buffer length in blocks (see -l).
check Ensure file system consistency (see -c).
date=# Set file system regression time (see -r).
error=# Set I/O error retry quota (see -q).
flush=# Set flush time in seconds (see -t).
hold=# Set hold time in seconds (see -h).
invert Invert byte order on write (see -i).
keep Keep attributes in cache file (see -k).
length=# Set buffer length in blocks (see -l).
min=# Set free-space padding factor (see -p).
nodev Suppress device node access (see mount(8)).
noexec Suppress program execution (see mount(8)).
nosuid Suppress set-uid execution (see mount(8)).
pad=# Set free-space padding factor (see -p).
quota=# Set I/O error retry quota (see -q).
rw/ro Set read-only or read/write (see mount(8)).
scan/slow Scan entire volume index (see -s).
time=# Set flush time in seconds (see -t).
unload Unload volume when idle (see -u).
zero Suppress zero/duplicate write (see -z).
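Splitting such a comma-separated option string into flags and key=value settings is straightforward. A sketch (Python; parse_mount_options is a hypothetical name):

```python
def parse_mount_options(spec: str):
    """Split an omount -o string into (flags, settings).

    Words containing '=' become numeric settings where possible;
    the remaining words are boolean flags.
    """
    flags, settings = [], {}
    for opt in spec.split(','):
        if '=' in opt:
            key, _, val = opt.partition('=')
            settings[key] = int(val) if val.isdigit() else val
        elif opt:
            flags.append(opt)
    return flags, settings
```
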
-p <pad> (padding):
Set the minimum free-space padding allowance in blocks per thousand to the numeric value specified in <pad>. If less than this amount of space remains on the volume, only the super-user can create files. The default padding allowance is one block per thousand.
-q <quota> (fault quota):
The following value is the maximum number of contiguous errors accepted without producing a fatal I/O fault. If this value is zero, no error quota is imposed. The default error quota is 24.
-r <date> (regress):
Mount the file system read only. Physically write-protected volumes are always mounted read only. If the optional <date> is specified, the file system is mounted as it existed at that previous time. The numeric value for <date> specifies the time in one of the following forms:
DD
MMDD
MMDDYY
MMDDhhmm
MMDDYYhhmm
Where YY, MM, and DD designate the year, month, and day, respectively, and hh and mm designate the hour and minute in local time. An initial zero may be omitted. If the year or month is not specified, the most recent matching date is assumed. If the hour or minute is not specified, zero is assumed. All files and directories on the volume will appear exactly as they were at that instant in time. More recent files and changes will disappear, and deleted or altered files will be restored to their previous state.
-s (slow-mount):
Access every index block while mounting; do not refer to the index list. The normal fast-mount sequence locates file index records using an index map. If any record cannot be read, the corresponding file cannot be accessed. Using the slow-mount option allows a previous version of such a file index record to be located.
-t <time> (flush time):
Set the buffer flush time in seconds to the following numeric value. Due to the write-once nature of optical file storage media, the OFS daemon attempts to maximize media utilization by keeping recent updates to a file in memory until the file is closed or flushed. However, to minimize memory usage and to ensure that files remain reasonably current, a memory residence time limit is observed. If no access occurs to a file on the file system within this time, any pending updates will be posted to the optical medium and the memory buffer space occupied will be released for other uses. The default value of this parameter is one minute. If a value of zero is supplied, no time limit is imposed, and file updates may remain pending indefinitely, or until the file system is demounted.
-u (unload):
Return this volume to its storage slot whenever it becomes idle. This facilitates access to other volumes.
-v (verbose):
Display a message as each file system is mounted.
-x (execute):
Do not permit set-UID execution of any program on this file system.
-z (zero):
Conserve space by not replacing regions with identical contents, and by representing empty (zero) regions as gaps in the file index map, since such gaps appear as empty regions when retrieved. Checking for identical regions entails reading before writing, which imposes a performance penalty unless the file index map is erased (by opening with the TRUNCATE option) before writing.
Note
The default values stated are initial defaults, which may be altered by option parameters passed to the OFS daemon on initiation. When a file system mounted with any of these options is demounted, the default values set by the OFS daemon are restored.
Examples
In the following examples, /od is the mount point.
To mount a volume read-write: omount /dev/of0 /od
To mount a volume read-only: omount -r /dev/of0 /od
To specify the flush time: omount -t60 /dev/of0 /od
To select the buffer size: omount -l40 /dev/of0 /od
To set a regression date: omount -r930 /dev/of0 /od
To select dir from ofstab: omount /dev/of0
To select ofs from ofstab: omount /od
To mount all ofs volumes: omount -a
Files
/etc/ofstab
Table of optical file systems.
See Also
ofs(4), ofstab(5), ofsd(8), mount(8), umount(8)
Bugs
Filesystem label validation is not yet implemented.
If the directory on which a file system is to be mounted is specified by a symbolic link, the file system is mounted on the directory to which the symbolic link refers, rather than being mounted on top of the symbolic link itself.
Optical File System Over Utility (over)
Synopsis
over [-abfnvz] [-l length] [-p pattern] [-q quota] [-s source] od [... od]
Description
The Over utility is employed to clear a rewritable optical disk volume for use with OSMS. OSMS employs a similar file structure on WORM and rewritable volumes. However, it requires a special structure on rewritable volumes to identify available space. Since optical disk drives do not support the blank check feature on rewritable volumes, OSMS identifies any block containing an all-zero data pattern as available space.
New rewritable media should be erased before mounting to ensure this structure is present and its contents are accurate.
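The free-space convention can be stated directly: a block is available exactly when its data is all zero. A sketch (Python; free_blocks is a hypothetical name; 512 bytes is the documented default block size):

```python
BLOCK_SIZE = 512  # default OFS logical block size

def free_blocks(volume: bytes, block_size: int = BLOCK_SIZE) -> int:
    """Count blocks OSMS would treat as available space on a
    rewritable volume: those whose data is entirely zero."""
    n = len(volume) // block_size
    return sum(
        not any(volume[i * block_size:(i + 1) * block_size])
        for i in range(n)
    )
```
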
Options
The Over utility accepts the following control option switches:
-a (clear):
Clear all blocks on the volume. If this option is not specified, the file system on the volume is inspected and only those blocks actually in use are cleared. Selecting this option also sets the -f and -z options.
-b (block):
The following value is the initial block number to clear. The default initial block number is zero.
-f (force):
Clear the volume even though it may contain active nodes. If this option is not specified, the file system is checked to ensure it has no active nodes other than its root directory; if any other active nodes are discovered, the volume is not cleared. This option is assumed if the -a option is specified.
-l (buffer length):
The following value is the buffer length in blocks. To optimize access latency, physical data transfers should approximate a multiple of the data transfer limit. The default buffer length is 248 kilobytes.
-n (block count):
The following value is the number of blocks to clear. The default block count is calculated from the existing space utilization on the volume (unless the -a option is specified), the initial block number, and the total number of blocks on the volume.
-p (pattern):
The following value is the data pattern to record. The pattern is concatenated with itself if necessary to produce a longword value. The default data pattern is zero. Selecting this option also sets the -a option.
-q (fault quota):
The following value is the maximum number of contiguous errors accepted without producing a fatal I/O fault. If this value is zero, no error quota is imposed. The default error quota is 24.
-s (source):
The following name specifies a device or file containing an optical file system. Subsequent devices specified will be initialized with a duplicate of this file system, taking account of differences in the number of blocks on each volume. The physical block size on the source volume must be the same as on each of the targets. over will only write to a WORM volume when copying from another WORM volume.
-v (verbose):
Display status and progress messages.
-z (zero):
Clear all active blocks on the volume to zero. If this option is specified, an all-zero pattern is written in active blocks on the volume, allowing the volume to mount in the write-once mode. Otherwise, only active index blocks on the volume are cleared and an index block is created for the free space pool, allowing the volume to mount in the rewritable mode. This option is assumed if the -a option is specified.
Examples
To clear an empty volume: over /dev/od0
To clear an active volume: over -f /dev/od0
To clear a volume to zero: over -z /dev/od0
See Also
ofs(4), ofsd(8), ofsck(8), omount(8)
Optical File System Update Daemon (update)
Synopsis
update [interval]
Description
The update daemon is an enhanced version of the standard UNIX update utility, which periodically forces a file system sync. The version supplied with OSMS is the same as the standard version, with the added ability to specify the file system sync interval in seconds.
The default value of the sync interval is 30 seconds, the same interval used by the standard version. Thus, with no parameters, the enhanced version operates exactly as the standard version. However, the optional parameter permits the file system sync interval to be varied to improve the resolution of the file residence time parameter in the OFS daemon.
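The daemon's core loop is simple. This sketch (Python; update_daemon is a hypothetical name, and the injectable sync callable and bounded iteration count are testing conveniences, not features of the real daemon) illustrates the role of the interval parameter:

```python
import time

def update_daemon(sync, interval=30.0, iterations=None):
    """Invoke sync() periodically, like the OSMS update daemon.

    interval   -- seconds between flushes (default 30, as standard).
    iterations -- stop after this many flushes (None means run forever,
                  as the real daemon does).
    """
    count = 0
    while iterations is None or count < iterations:
        sync()                      # flush file system buffers
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval)
    return count
```

On a real system the sync callable would be os.sync; a shorter interval tightens the resolution of the OFS daemon's file residence time limit.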
See Also
sync(1), sync(2), init(8)