13.  Managing the NFS Server

Introduction

This chapter describes how to manage the NFS-OpenVMS Server. It includes the following topics:

•   Server security

•   Mounting directories on a client

•   Network file locking

•   Managing Server parameters

•   Maintaining databases

•   PCNFSD services and remote printing

•   Filename mapping

•   Server implementation of NFS protocols

See the NETCU Command Reference, Table 1-4, for a description of the commands used to manage NFS servers and clients.

Server Security

The Server provides several features that maintain the integrity of the OpenVMS filesystem.

First, the Server requires that the local system register any user trying to access OpenVMS files. You do this through the PROXY database when you configure the Server and through later modifications as needed.

Second, you must export an OpenVMS directory before an NFS user can access it. You do this through the EXPORT database when you configure the Server and through later modifications as needed.

You can take the following additional system security measures:

•   Assign an NFS rights identifier to further restrict file access (see the NFS_ACCESS_IDENTIFIER logical under Server Parameters).

•   Require all Remote Procedure Call (RPC) requests to originate from privileged ports.

•   Restrict all remote mounts to the NFS superuser only.

•   Restrict mounts only to explicit directories and not their subdirectories.

•   Require the PROXY database to define the mount requester's identification (see the next section).

PROXY Database

The PROXY database maps OpenVMS user identification to NFS user identification. NFS user identification is different from that of OpenVMS in that it follows the UNIX model.

OpenVMS identifies users by a username and user identification code (UIC). The UIC consists of a group and a member number. An OpenVMS user can belong to only one group, which can have many members.

NFS follows the UNIX model in identifying users by user ID (UID) and group ID (GID) numbers. An NFS user can belong to many groups, and thus have several GIDs. Each NFS request includes the NFS user's effective UID, GID, or list of GIDs. You can find users' UIDs and GIDs in the
/etc/passwd file on the UNIX client.

The Server uses the PROXY database:

When an NFS user requests access to the OpenVMS filesystem

TCPware maps an NFS user's UID and GID to an OpenVMS username and UIC. The Server uses the UIC to check file access permission.

When the NFS client requests file attributes from the server

The Server maps the file owner's UIC to a UID/GID pair.

When a PC requests authentication using PCNFSD

The Server uses the username and password to validate the user, and the UIC to find and return a UID/GID pair.
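The three lookups above can be sketched with a small in-memory table. This is an illustration only: the entry values (UICs, hostnames) are hypothetical, and the real PROXY database is the NETCU-maintained file described in the next section.

```python
# Illustrative in-memory stand-in for the PROXY database (values hypothetical).
# Each entry: (uid, gid, host-restriction-or-None, OpenVMS username, UIC).
PROXY = [
    (210, 15, "tulip", "SMITH",  (100, 5)),
    (0,   1,  None,    "DECNET", (376, 374)),
]

def to_vms(uid, gid, host=None):
    """Map an NFS request's UID/GID (and origin host) to a username and UIC."""
    for p_uid, p_gid, p_host, username, uic in PROXY:
        if p_uid == uid and p_gid == gid and p_host in (None, host):
            return username, uic
    return None  # no proxy entry: the request is not honored

def to_nfs(uic):
    """Map a file owner's UIC back to a UID/GID pair for file attributes."""
    for p_uid, p_gid, _host, _username, p_uic in PROXY:
        if p_uic == uic:
            return p_uid, p_gid
    return (-2, -2)  # default "nobody" values
```

The same UIC-to-UID/GID direction also serves the PCNFSD case: after validating the username and password, the Server looks up the account's UIC to find the UID/GID to return.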

 

Maintaining PROXY

The Server creates an empty PROXY database during installation. You maintain the PROXY database with the ADD PROXY, CREATE PROXY, REMOVE PROXY, and SHOW PROXY commands in TCPware Network Control Utility (NETCU).

A PROXY database entry specifies an OpenVMS username and the corresponding NFS user's UID and GID. The /HOST qualifier of the ADD PROXY command also lets you specify the name of the host or hosts where the user account is valid.

The following example shows how to use the ADD PROXY command to assign the SMITH OpenVMS account to an NFS user with a UID=210 and GID=15 on host tulip:

$ NETCU
NETCU>ADD PROXY SMITH /UID=210 /GID=15 /HOST=TULIP

 

The PROXY database must contain an entry for each NFS user, including the superuser (see the next subsection).

When you add entries to the PROXY database:

•   The OpenVMS username, not the NFS user's UID and GID, determines file access rights. The NFS user account has the same access rights as those assigned to the OpenVMS account.

•   Assign each NFS user the same UID/GID on each NFS client. (See your NFS client documentation for details on global user ID space.)

•   Avoid using wildcard UIDs or GIDs. A one-to-one mapping between OpenVMS users and NFS users is easier to maintain.

•   Use the /HOST qualifier to allow access only to users from a particular host.

•   For PCNFSD users, assign an arbitrary UID and GID for each PC user. Choose a unique UID for each user. Give the same GID to users that need to have group access to each other's files.

Adding Superusers

The superuser (or root) is a UNIX system user with UID=0 who can perform any operation on a file or process on the client system. However, the superuser cannot automatically access the OpenVMS filesystem on the server; you must register the superuser in the PROXY database.

The NFS convention is to replace the superuser's UID/GID pair (UID=0, and any GID) with the default values of UID=-2 and GID=-2. By UNIX conventions, this translates to user nobody, which gives the superuser limited access rights. To register a superuser, you must use UID=0 and GID=1, as follows:

NETCU>ADD PROXY DECNET /UID=0 /GID=1


The OpenVMS account to which you assign the superuser determines what access rights the superuser has on the OpenVMS system. Superusers require enough access rights to mount directories. (In fact, some server configurations restrict mounting to the superuser.) Also, when a user runs a setuid-to-root program, the UID/GID in any resulting NFS request is the root UID and, therefore, requires superuser access.

You can create a PROXY entry for a superuser that provides limited access to an OpenVMS filesystem but still allows a superuser to mount exported directories. One example is the DECNET account. Alternately, you can use the OpenVMS AUTHORIZE command to add an account for the superuser on the OpenVMS host.

If you have trusted superusers at particular hosts and wish to give them full privileges on the OpenVMS system, add a separate superuser entry. Assign the superuser to a privileged account (such as SYSTEM) and use the /HOST qualifier to restrict access to a specified host. In the following example, only the superuser on lilac has SYSTEM account privileges:

NETCU>ADD PROXY SYSTEM /UID=0 /GID=1 /HOST=LILAC

Reloading PROXY

The PROXY database is normally static. This means that you must reload the database every time you use ADD PROXY or REMOVE PROXY to change it. However, you can opt to update the PROXY database dynamically (make it dynamic) in either of two ways:

1   Define the TCPWARE_NFS_DYNAMIC_PROXY logical to enable dynamic PROXY database reloading, as follows:

$ DEFINE/SYSTEM/EXEC TCPWARE_NFS_DYNAMIC_PROXY keyword[,keyword]

The keywords are CLIENT, SERVER, NOCLIENT, and NOSERVER, used in any reasonable combination.

Use CLIENT to enable Client reloading and SERVER to enable Server reloading. However, the /NOCLIENT and /NOSERVER qualifiers used with the ADD PROXY or REMOVE PROXY commands override the logical setting.

2   Use the /CLIENT or /SERVER qualifiers with the ADD PROXY or REMOVE PROXY commands. You can also mix and match by using /CLIENT with /NOSERVER, /NOCLIENT with /SERVER, and so on. Here is an example of its use:

$ NETCU ADD PROXY SMITH /UID=210 /GID=5 /NOCLIENT /NOSERVER

If you disable PROXY database reloading on either the Client or Server, both of these methods require the RELOAD PROXY command. RELOAD PROXY works best if you also specify a username parameter, so that it reloads entries for that username only; otherwise, it reloads the entire database into memory each time. Therefore, use RELOAD PROXY at initial configuration and only sparingly thereafter.

EXPORT Database

The EXPORT database contains entries that specify an OpenVMS directory and the host or group of hosts allowed to mount that directory. More than one host can access a directory. The EXPORT database differs from the PROXY database in that the Server grants access to a host rather than to a user. If an OpenVMS directory is not in the EXPORT database, an NFS client cannot mount that directory.

An EXPORT database entry specifies a pathname for the OpenVMS directory. Because the OpenVMS device and directory specifications differ from those NFS clients use, the Server lets you reference the OpenVMS directory by a UNIX-style pathname. You can assign any pathname to the OpenVMS directory.

CAUTION!     An authorized user at a remote host can access all subdirectories and files below the export point you specify. Unless you work in a trusted environment, do not export a top level directory, even though it may seem easier to do so. Export only the level of directories that the remote users need, and none higher.

Maintaining EXPORT

The Server creates an empty EXPORT database during installation. You maintain the EXPORT database using the ADD EXPORT, CREATE EXPORT, REMOVE EXPORT, RELOAD EXPORT, and SHOW EXPORT commands in NETCU. For example, the following command places an entry in the EXPORT database:

NETCU>ADD EXPORT "/work/notes" $DISK2:[WORK.NOTES] -
_NETCU>/HOST=(ORCHID, ROSE)

 

This command exports the OpenVMS directory $DISK2:[WORK.NOTES] as path "/work/notes" to hosts ORCHID and ROSE. The pathname is an arbitrary one selected to reference the OpenVMS directory. The ADD EXPORT command requires that you enclose the pathname in quotes.

When a client mounts a subdirectory of an exported directory, each element in the path beyond the exported path must match the corresponding OpenVMS subdirectory name. Separate each element with a slash (/). For example, suppose the NFS client mounts:

$DISK2:[WORK.NOTES.LETTERS.STUFF]

 

To match /work/notes, the NFS client uses this path:

/work/notes/letters/stuff

The NFS filename mapping rules apply to the path elements below the export point.
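As a sketch, the element-by-element matching described above amounts to the following (a hypothetical helper for illustration; the real Server also applies the full filename mapping rules of Appendix A to each element):

```python
def nfs_path(export_path, export_vms_dirs, vms_dirs):
    """Build the client path for a VMS directory below an export point.

    export_path:     UNIX-style pathname from the EXPORT database ("/work/notes")
    export_vms_dirs: the exported VMS directory's elements (["WORK", "NOTES"])
    vms_dirs:        elements of the VMS directory the client wants to mount
    """
    if vms_dirs[:len(export_vms_dirs)] != export_vms_dirs:
        raise ValueError("directory is not below the export point")
    # Each element beyond the export point becomes a slash-separated path
    # element; lowercasing stands in for the default SRI filename mapping.
    extra = [e.lower() for e in vms_dirs[len(export_vms_dirs):]]
    return "/".join([export_path] + extra)
```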

Reloading EXPORT 

Updating the EXPORT database (using ADD EXPORT or REMOVE EXPORT) usually updates only the server on the host executing the command. You must either use the RELOAD EXPORT command or restart all the other servers on the cluster to implement the EXPORT database changes on them.

However, you can automatically reload updates to the shared database on the cluster by setting the TCPWARE_NFS_DYNAMIC_EXPORT logical to CLUSTER, as follows:

$ DEFINE/SYSTEM/EXEC TCPWARE_NFS_DYNAMIC_EXPORT CLUSTER

 

This causes the Server to use locks to communicate changes to all the servers on the cluster. The default for TCPWARE_NFS_DYNAMIC_EXPORT is LOCAL (not to use locks).

EXPORT Options

The options you can specify while adding entries to the EXPORT database are as follows, using the indicated ADD EXPORT command qualifiers:

•   Whether only a specified host or hosts can access the exported OpenVMS directory:
/HOST=(host[,host,...]).

•   Whether or not to enable on-the-fly file conversion: /[NO]CONVERT.

•   Whether or not clients can mount subdirectories of a mount point:
/[NO]EXPLICIT_MOUNT.

•   What kind of filename mapping you want to use: /FILENAME=option.

The NFS-OpenVMS Server includes the UPPERCASE keyword for this qualifier (this does not apply to ODS-5 exports).  UPPERCASE changes the default case for exported filenames from lowercase to uppercase, for SRI filename mappings only. The full syntax of the command is


$ NETCU ADD EXPORT /FILENAME=(SRI, UPPERCASE)

Examples of filename conversions are as follows:

VMS Name       Lowercase      Uppercase

foobar.txt     foobar.txt     FOOBAR.TXT
$foobar.txt    FOOBAR.TXT     foobar.txt
foo$bar.txt    fooBAR.TXT     FOObar.txt

 

•   Whether or not you want only the highest version files to appear in a directory request:
/[NO]HIGHEST_VERSION.

•   Whether or not you want incoming requests to originate from a privileged port:
/[NO]PRIVILEGED_PORT.

•   Whether or not you want mount requests to originate from a user mapped in the PROXY database: /[NO]PROXY_CHECK.

•   What kind of record format you want to use for newly created files: /RFM=options.

•   Whether or not you want the server (and not just the client) to perform file access checking:
/[NO]SERVER_ACCESS.

•   Whether or not you want only the superuser to mount filesystems:
/[NO]SUPERUSER_MOUNT.

•   Whether or not the filesystem should be read-only: /[NO]WRITE.

PCNFSD Services

This section describes PCNFSD authentication services and how to configure for remote printing services. The PCNFSD server supports both the PCNFSD Version 1 and Version 2 protocols. (Version 2 offers enhanced printing features.)

PCs and other NFS clients use PCNFSD if they do not have multiuser accounts or do not provide user authentication. PCNFSD lets the client user obtain the UID/GID that the NFS protocol requires. It also provides remote print spooling services.

The Do you want PCNFSD enabled? prompt in the Server configuration procedure allows a YES, NO, or PRINTING-ONLY response. PRINTING-ONLY enables print spooling of files on the server without enabling PCNFSD authentication.

If you configure PRINTING-ONLY, PCNFSD simply discards authentication requests. Use this support primarily when you do not want the Server to respond to PCNFS authentication requests sent to a broadcast address.

PCNFSD Authentication

To use the Server, the PC user must obtain a UID and GID through PCNFSD authentication. The PC user provides a valid OpenVMS username and password, and PCNFSD provides the UID and GID.

See your PC documentation for the command to specify the username and password.

PCNFSD checks the OpenVMS User Authorization File (UAF) to validate the username and password. If these are valid, PCNFSD uses the UIC to return the corresponding UID/GID from the PROXY database.

When you create an entry for a PC user in the PROXY database, assign any UID. Assign a unique UID to each user and give the same GID to users that need to have group access to files. The UID and GID cannot be wildcards.

If PCNFSD cannot validate a user for any reason, it writes an error message to the Server log file. This message includes the username and internet address of the remote host that issued the request.

Remote PC Printing

PC users can use PCNFSD for remote printing if you:

•   Create a spool directory to hold the files you want printed, as well as a subdirectory for each PC client.

•   Make sure that this spool directory (or a subdirectory) is in the EXPORT database. Note that the EXPORT database should not list the subdirectory for each client host.

For printing large application files from the PC, we recommend adding the EXPORT entry with the following options, using this NETCU command:

ADD EXPORT "/spool" device:[directory] /NOCONVERT /RFM=UNDEFINED

This overcomes an OpenVMS file conversion buffer error that may occur, especially with files larger than 32,768 bytes (a limit reflected by the SYSGEN parameter PQL_MBYTLM).

•   Define the NFS_PCNFSD_SPOOL parameter, either during configuration at the Enter the spool directory: prompt or by defining the TCPWARE_PCNFSD_SPOOL logical.

The parameter value must match the NFS pathname to the newly created spool directory. This pathname must be mapped to the OpenVMS spool directory in the EXPORT database. Make sure you enable the NFS_PCNFSD_ENABLE parameter before defining NFS_PCNFSD_SPOOL.

See your PC documentation for printing information for your particular PC.

Mounting Client Directories

NFS clients access OpenVMS files on the NFS server by mounting directories on the client. The MOUNT protocol services the mount request.

Mounting procedures vary by client and may require superuser privileges or, in the case of PC clients, a username and password. Some clients mount a remote directory automatically when they reboot the system (as with fstab). Others mount a remote directory dynamically when they first reference a remote file (as with automount).

Mount procedures require the following information:

•   The pathname of the exported directory that matches the pathname in the EXPORT database

•   The name of the host running the server that contains the files you want mounted

•   A pathname on the client designated as the mount point

Example 14-1 shows a mount command provided by TCPware NFS-OpenVMS Client:

Example 14-1     NFS-OpenVMS Client Mount Command

NFSMOUNT IRIS "/WORK/RECORDS" NFS0:[USERS.MNT]

 

In the example, IRIS is the name of the OpenVMS server host. /WORK/RECORDS is the pathname of the exported directory. NFS0:[USERS.MNT] is the mount point on the OpenVMS client host.

Check your NFS client documentation before mounting directories. Mount commands and procedures vary by operating system. Chapter 13, NFS-OpenVMS Client Management, Client Commands describes the client mount commands.

Network File Locking

The Server supports file locking through its implementation of the Network Lock Manager (NLM) and Network Status Monitor (NSM) protocols. Many NFS client systems support file locking, even at the record and byte level, as long as the byte ranges do not overlap. File locking on the Server is multi-threaded, so the Server can satisfy more than one lock request at a time.

NFS file locking is only advisory. When a client requests a lock on a server file, the goal is for one of its processes to gain exclusive access to this file (or part of the file) and force other processes to wait until the original process releases the lock again. However, the only way NFS denies a client user access to a locked file is if the user also requests a lock on it.

There are two views on network file locking, one from the NFS client's viewpoint and one from the OpenVMS resident user's viewpoint. (See the following sections.)

NFS Client Users' View

When an NFS client user requests an advisory lock on a server filesystem, the client's lock daemon (lockd) sends a request to the NLM on the server, which also runs NFSD. The server checks its lock database to see if it can grant the lock. The server cannot grant the lock if:

•   Another client has the same file (or region or byte range of the file) already locked.

•   An OpenVMS user has the same file open for exclusive access.

•   The server is still waiting to reclaim locks during the grace period described below.

The Server also includes a Network Status Monitor (NSM). The NSM cooperates with other status monitors on the network to notify the NLM of any changes in system status (such as when a crash occurs).

For example, if the server crashes and comes back up, the server NSM notifies the client NSM that it should resend requests for locks in place before the crash, within a certain grace period (usually 45 seconds). You can request new locks only after this grace period. However, if a client with mounted server files crashes, nobody knows to resend lock requests until the client comes back up again.
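The grace-period behavior can be sketched as follows. The 45-second value comes from the text above; the function name and shape are illustrative only:

```python
GRACE_PERIOD = 45.0  # seconds; the text cites this as the usual value

def lock_allowed(request_is_reclaim, seconds_since_restart):
    """Decide whether an NLM lock request may proceed after a server restart.

    During the grace period, only reclaim requests (locks that were held
    before the crash and are being resent) are granted; new lock requests
    must wait until the grace period ends.
    """
    if seconds_since_restart < GRACE_PERIOD:
        return request_is_reclaim
    return True  # after the grace period, any request may be considered
```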

OpenVMS Users' View

To prevent OpenVMS users from accessing files that NFS clients have locked, the Server's NLM requests NFSD to open these files for exclusive access. This essentially prevents all access to these files by OpenVMS users. When the client releases the lock by closing the file, the NLM requests NFSD to close the file, at which point OpenVMS users can access it again.

If network file locking is to occur in a VMScluster environment, we advise exporting a filesystem from a single node in the cluster only. This way, only a single OpenVMS exclusive lock need occur. Client users can then apply locks on files (or parts of files if enabled) without conflicting with exclusive locks applied from other nodes.

Mapping Filenames

Once you mount a filesystem, the Server tries to make the client files recognizable in OpenVMS. Often the filename syntax for NFS files is very different from that of OpenVMS files. For example, NFS filenames do not include file version numbers.

The Server translates (maps) filenames from the client so that your OpenVMS host can recognize and use them. Three types of mapping schemes are available:

•   Stanford Research Institute (SRI) International mapping, the default scheme between NFS and OpenVMS systems

•   PATHWORKS case-insensitive mapping (PATHWORKS)

•   PATHWORKS case-sensitive mapping (PATHWORKS_CASE)

Set up the appropriate filename mapping scheme using the /FILENAME qualifier of the ADD EXPORT command in NETCU. If you do not specify the scheme using this qualifier, the Server uses the SRI International scheme by default.

Table 14-1 shows examples of how the Server maps NFS directory names and filenames using the SRI International mapping scheme. All the client files in the table are NFS files.

For the filename mapping rules, see Appendix A, NFS-to-OpenVMS Filename Mapping.

The filename mapping schemes for the Server and the NFS-OpenVMS Client are identical and totally compatible.

Table 14-1     Server Filename Mapping (does not apply to ODS-5 exports)

Filename on server...          Is mapped to filename on client...

SERVERFILE.;1                  serverfile
$C$ASE$S$HIFTED$F$ILE.;1       CaseShiftedFile
DOT.FILE$5NTEXT;1              dot.file.text
DOT$5NDIRECTORY$5NLIST.DIR;1   dot.directory.list (identified as a directory in the UNIX listing)
SPECIAL$5CCHAR$5FFILE.;1       special#char&file
DOLLAR$$$S$IGN$$5CFILE.;1      dollar$Sign$5cfile
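The mappings in Table 14-1 can be reproduced by a small decoder. The rules below are inferred from the table's examples only; the authoritative mapping rules are in Appendix A, NFS-to-OpenVMS Filename Mapping. In this reading, "$" toggles letter case, "$$" is a literal dollar sign, "$5N", "$5C", and "$5F" encode ".", "#", and "&", the version number and any trailing "." are dropped, and a ".DIR" type marks a directory.

```python
# Decoder for the SRI scheme as inferred from Table 14-1 (illustrative only).
ESCAPES = {"$5N": ".", "$5C": "#", "$5F": "&"}

def to_client_name(vms_name):
    """Map a VMS server filename to its NFS client form (default lowercase)."""
    name = vms_name.split(";")[0]             # drop the version number
    if name.upper().endswith(".DIR"):         # directories lose the .DIR type
        name = name[:-4]
    out, upper, i = [], False, 0
    while i < len(name):
        if name[i] == "$":
            if name[i:i + 2] == "$$":         # escaped literal dollar sign
                out.append("$")
                i += 2
            elif name[i:i + 3] in ESCAPES:    # special-character escape code
                out.append(ESCAPES[name[i:i + 3]])
                i += 3
            else:                             # case-shift toggle
                upper = not upper
                i += 1
        else:
            out.append(name[i].upper() if upper else name[i].lower())
            i += 1
    result = "".join(out)
    return result[:-1] if result.endswith(".") else result
```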

 

Protecting Files

The Server protects an OpenVMS file by comparing its protection information with the user's identification and access rights. It then grants or denies access based on the results of these comparisons.

When an NFS user requests access to an OpenVMS file, the Server uses the PROXY database to map the user's user and group identification (UID/GID) on the remote host to a username and UIC on the OpenVMS host. In most cases, this allows the NFS user to have the same access to files as the proxy OpenVMS user.

The NFS client can also do local access checking based on its user and file information and access checking rules before sending the request to the Server host. In some cases, this results in the NFS user not having the same access to files as the proxy OpenVMS user.

The following sections explain how the Server resolves differences between the two filesystems to provide the best possible mapping between client and server.

UIC Protection

The type of access an OpenVMS user has to a file depends on how the file and user UICs are related.

OpenVMS has four file ownership categories: SYSTEM, OWNER, GROUP, and WORLD. Each category can have up to four access types: read (R), write (W), execute (E), and delete (D). Each file has a protection mask that defines:

•   The categories assigned to the file

•   The types of access granted to each category

Here is an example of an OpenVMS protection mask:

SYSTEM=RWED, OWNER=RWED, GROUP=RE, WORLD=<NO ACCESS>

UID/GID Protection

NFS uses a protection scheme similar to that of OpenVMS.

Each NFS user has a UID and GID. The file protection categories are OWNER, GROUP, and OTHER, with file access types of read (r), write (w), and execute (x). The NFS user's access to a file depends on how the file and owner UIDs/GIDs are related.

The Server maps OpenVMS and NFS system protection masks and user identifications so that the relationship between a user and a file remains consistent. For example, if an OpenVMS user owns a particular file and an NFS user is mapped to the account through the PROXY database, the NFS client also considers the local user to be the owner of the file.

Note!     In OpenVMS, the owner of the file has absolute control over it. This also applies to files remote users create in the mounted filesystem.

OpenVMS-to-NFS File Attribute Mapping

When the NFS client requests the attributes of a file, the Server maps:

•   The protection mask to an NFS protection mask

•   The owner UIC to a UID/GID

Table 14-2 shows how the Server maps the protection mask from OpenVMS to NFS.

The Server does not map the OpenVMS SYSTEM category and delete (D) access type because they do not exist in the NFS system environment.

The Server maps OpenVMS execute (E) to NFS execute (x). However, the OpenVMS system uses the E access type more often than does NFS. Thus, some files might appear to be executable to an NFS host when they are not.

 

Table 14-2     OpenVMS-to-NFS Protection Mapping 

OpenVMS category...   In NFS is...   With OpenVMS type...   In NFS is...

SYSTEM                (not mapped)
OWNER                 user           R                      r
                                     W                      w
                                     E                      x
                                     D                      (not mapped)
GROUP                 group          R                      r
                                     W                      w
                                     E                      x
                                     D                      (not mapped)
WORLD                 other          R                      r
                                     W                      w
                                     E                      x
                                     D                      (not mapped)
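A minimal sketch of Table 14-2, expressing the result as numeric UNIX mode bits. This is an illustration only; the Server works with NFS attribute structures, not octal literals:

```python
# Sketch of Table 14-2 as numeric UNIX mode bits (illustration only).
TYPE_BITS = {"R": 4, "W": 2, "E": 1}   # E maps to NFS execute (x); D is dropped

def vms_to_nfs_mode(owner, group, world):
    """Each argument is a string of OpenVMS access types, e.g. "RWED".

    The SYSTEM category and delete (D) access have no NFS counterpart and
    are not mapped.
    """
    def bits(types):
        return sum(TYPE_BITS[t] for t in types if t in TYPE_BITS)
    return (bits(owner) << 6) | (bits(group) << 3) | bits(world)
```

For example, the mask SYSTEM=RWED, OWNER=RWED, GROUP=RE, WORLD=<NO ACCESS> shown earlier maps to mode 750.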

 

Table 14-3 shows the rules the Server follows to ensure that it correctly maps the UIC to the UID/GID.

If the Server cannot find the UIC in the PROXY database, or the UID or GID is a wildcard, the Server returns the default UID or GID.

Table 14-3    UIC-to-UID/GID Mapping Rules

If the file's OWNER UIC...                          Then the Server returns the...

Matches the requesting NFS user's UIC               UID/GID of the requester
Group matches the requesting NFS user's UIC group   GID of the requester and the UID from the PROXY database
Does not match the requesting NFS user's UIC        UID/GID from the PROXY database
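The three rules of Table 14-3 can be sketched as follows, treating a UIC as a (group, member) pair and using a hypothetical proxy_lookup helper that returns the registered, or default, UID/GID for a UIC:

```python
def owner_ids(file_uic, requester_uic, requester_uid, requester_gid, proxy_lookup):
    """Pick the UID/GID returned as a file's owner, per the rules of Table 14-3.

    A UIC is treated as a (group, member) pair. proxy_lookup(uic) is a
    hypothetical helper returning the registered (uid, gid) for a UIC, or
    the defaults when the UIC is not in the PROXY database.
    """
    if file_uic == requester_uic:            # owner is the requester
        return requester_uid, requester_gid
    if file_uic[0] == requester_uic[0]:      # owner is in the requester's group
        uid, _gid = proxy_lookup(file_uic)
        return uid, requester_gid
    return proxy_lookup(file_uic)            # unrelated owner
```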

NFS-to-OpenVMS File Attribute Mapping

When the NFS client sets or changes the attributes of a file, the Server maps the NFS file protection mask to an OpenVMS file protection mask.

Table 14-4 shows how the Server maps the protection mask from NFS to OpenVMS.

Table 14-4     NFS-to-OpenVMS Protection Mapping 

NFS category...   In OpenVMS is...   With NFS type...   In OpenVMS is...

user              OWNER/SYSTEM       r                  R
                                     w                  W
                                     x                  E
                                                        D (unless ADF denies) 1
group             GROUP              r                  R
                                     w                  W
                                     x                  E
                                                        D (unless ADF denies) 1
other             WORLD              r                  R
                                     w                  W
                                     x                  E
                                                        D (unless ADF denies) 1

1 The Server allows delete (D) access only if a special attributes data file (ADF) that the Server may create (and associate with the file) does not explicitly deny file deletion.
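A sketch of Table 14-4 in the reverse direction. The handling of delete access is an assumption here: D is added to each category that has some access, unless an ADF denies deletion (the table itself says only "D (unless ADF denies)"):

```python
def nfs_to_vms_mask(mode, adf_denies_delete=False):
    """Map NFS mode bits to an OpenVMS protection mask, per Table 14-4.

    The NFS user category feeds both OWNER and SYSTEM. Granting D only to
    categories that already have some access is an assumption made for
    this sketch.
    """
    def types(bits):
        acc = ""
        if bits & 4:
            acc += "R"
        if bits & 2:
            acc += "W"
        if bits & 1:
            acc += "E"
        if acc and not adf_denies_delete:
            acc += "D"
        return acc
    user = types((mode >> 6) & 7)
    return {"SYSTEM": user, "OWNER": user,
            "GROUP": types((mode >> 3) & 7), "WORLD": types(mode & 7)}
```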

Access Control Lists

Access Control List (ACL) file protection is an OpenVMS feature that grants or denies access to a file based on a rights identifier.

If a file has an ACL, the OpenVMS system first uses the ACL for protection checking. If the ACL grants or denies access, OpenVMS goes no further. If the ACL does not grant or deny access, OpenVMS checks the protection mask.

NFS clients using the OpenVMS filesystem may encounter files or directories protected by ACLs. But since the ACLs are unique to the OpenVMS system, the NFS client only checks the protection mask. If the protection mask denies access, the NFS client does not attempt access, even if the file's ACL overrides the protection.

Because the NFS client uses only the protection mask, it is recommended that OpenVMS files protected by ACLs have:

•   The ACL set to deny access

•   The protection mask set to allow file access

This allows the NFS client to attempt access on the basis of the protection mask, and lets the OpenVMS system control whether access is granted or denied.

When an NFS user creates a file on the OpenVMS host and the directory has an ACL that specifies +DEFAULT, the new file gets the ACL of the directory.

File Formats

The NFS protocol does not define standard file and record formats or a way of representing different types, such as text or data files. Each operating system can have a unique file structure and record format.

The Server provides access to all OpenVMS files. However, even though an NFS client can access a file, the client may not be able to correctly interpret the contents of a file because of the differences in record formats.

The UNIX operating system stores a file as a stream of bytes and uses a line feed (LF) character to mark the end of a text file line. PC systems also store a file as a stream of bytes, but use a carriage-return/line-feed (CRLF) character sequence to mark the end of a text file line. PC systems sometimes also use a Ctrl/Z character to mark the end of a file.

The OpenVMS operating system, with its Record Management Services (RMS), provides many file organizations and record formats. RMS supports sequential, relative, and indexed file organizations. It also supports FIXED, STREAM, STREAM_CR, STREAM_LF, UNDEFINED, VARIABLE, and variable with fixed size control area (VFC) files.

NFS clients most commonly need to share text files. STREAM is the RMS record format that most closely matches PC text files. STREAM_LF is the RMS record format that most closely matches UNIX text files.

In OpenVMS, you can store standard text files in VARIABLE, STREAM_LF, or VFC record format. Most OpenVMS utilities can process these text files regardless of the record format because the utilities access them through RMS.

The intent of the Server is to provide convenient access to the majority of OpenVMS files. Because many OpenVMS text files are VARIABLE or VFC format, the Server converts these files to STREAM or STREAM_LF format as it reads them.

Reading Files

The Server reads all files (except VARIABLE and VFC) block by block without interpreting or converting them. It reads VARIABLE and VFC files by converting them to STREAM or STREAM_LF, based on a selected option. The file on the NFS server remains unchanged.
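On disk, RMS VARIABLE records are commonly stored as a 16-bit little-endian byte count followed by the record data, padded to an even (word) boundary. Under that assumption, the on-the-fly conversion to STREAM_LF can be sketched as:

```python
import struct

def variable_to_stream_lf(data: bytes) -> bytes:
    """Convert RMS VARIABLE-format record data to a STREAM_LF byte stream.

    Assumes each record is a 16-bit little-endian length, the record bytes,
    and a pad byte when the length is odd (a common RMS on-disk layout).
    """
    out, i = bytearray(), 0
    while i + 2 <= len(data):
        (length,) = struct.unpack_from("<H", data, i)
        if length == 0xFFFF:                    # end-of-data marker
            break
        i += 2
        out += data[i:i + length] + b"\n"       # one LF-terminated line per record
        i += length + (length & 1)              # skip the pad byte if length is odd
    return bytes(out)
```

This also shows why the conversion is slow for large files: the record lengths must be walked from the start, so even reporting the converted file size requires reading every record.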

The Server's automatic file conversion process can make reading VARIABLE and VFC files slow. For example, to return the file size, the Server must read the entire file. Full directory listings can also be slow if the directory contains a number of VARIABLE or VFC record format files. If you need frequent access to these files, consider converting them with the OpenVMS CONVERT utilities described in Converting Files Manually.

See the NFS_DIRREAD_LIMIT parameter in Advanced Parameters.

Writing Files

By default, the Server creates STREAM_LF files, but can also create STREAM files on demand. It writes all files except VARIABLE and VFC block by block without interpreting or converting them. If an NFS client tries to write to or change the size of an existing file not having STREAM, STREAM_LF, STREAM_CR, FIXED, or UNDEFINED format, the Server returns an EINVAL error.

Converting Files Manually

You can improve server performance by manually converting files using the OpenVMS CONVERT utilities described in this section.

Variable to STREAM_LF

Use this conversion procedure to make a variable-length file available to a UNIX system client without using the Server's automatic conversion feature. To convert a variable-length record file to STREAM_LF, the command format is:

CONVERT/FDL=TCPWARE:STREAMLF source-file destination-file

 

The source-file specification is the variable-length record file. The destination-file specification is the name of the new file to contain the STREAM_LF records.

STREAM_LF to Variable

Use this conversion procedure to make a file created by a UNIX system client available to an OpenVMS application that does not understand the STREAM_LF record format. To convert a STREAM_LF file to variable-length, the command format is:

CONVERT/FDL=TCPWARE:VMSTEXT source-file destination-file

 

The source-file specification is the STREAM_LF file. The destination-file specification is the name of the new file to contain the variable-length records.

Variable to STREAM

Use this conversion procedure to make an OpenVMS variable-length file available to a PC client. Keep in mind that the Server's automatic conversion procedure uses LF characters, not CRLF character sequences, for record terminators.

To convert a variable-length record file to STREAM format (with CRLF line terminators), the command format is:

CONVERT/FDL=TCPWARE:STREAMCRLF source-file destination-file

 

The source-file specification is the variable-length record file. The destination-file specification is the name of the new file to contain the STREAM records.

Note!     The variable-to-stream conversion does not add a Ctrl/Z to the end of the file. If the PC application requires the Ctrl/Z, use the conversion program the NFS client software provides.

Server Parameters

TCPware provides several basic parameters you can adjust to better suit your needs. To change the value of any of these parameters, invoke the network configuration command procedure (CNFNET) by entering the following command:

$ @TCPWARE:CNFNET NFS

 

The Server also provides advanced parameters that you rarely need to change; they appear here for reference purposes only.

The default parameter values appear in parentheses following the parameter name. All parameters are logicals and are static. When you make a change to a parameter, you must stop and restart the Server for the change to take effect. TCPware uses logical names (the parameter names prefixed by TCPWARE_) to communicate the parameters to the NFS server. The STARTNET procedure defines these logicals.

Basic Parameters

The basic parameters described here are in the same order in which the Server prompts you to provide values for them during the NFS configuration procedure. The default setting for each parameter appears in parentheses.

NFS_ACCESS_IDENTIFIER   (null)

Specifies the name of a rights identifier you want assigned to all NFS users. You can then modify the access control lists (ACLs) of files to grant or deny access to holders of the rights identifier. The default is null (no rights identifier).

OpenVMS files protected by ACLs should have the UIC-based protection mask set to allow file access and the ACL set to deny access. This lets the NFS client grant access based on the protection mask, while the OpenVMS system controls whether to grant or deny access.

NFS_SECURITY   (0)

Enables various security features. This parameter is a bit mask value (in decimal) as defined in NFS_SECURITY Bit Mask Values.

These global security settings supersede the values set with the corresponding qualifiers of the ADD EXPORT command, where applicable, as indicated in Table 14-5.

CAUTION!     Do not use bits 0 and 1 for PC clients using PCNFS.

If you use PC-NFS printing with mask value 2, add an entry to the EXPORT database for each client subdirectory (not just a single entry for the spool directory). The pathname listed in the EXPORT database should be the NFS_PCNFSD_SPOOL parameter value concatenated with the name of the client subdirectory.

If you set bit 5, PC-NFS users can print to batch queues. This may present a security risk, since users could submit batch jobs under a privileged (or another) user by forcing the UID/GID values of their choice.

Disabling use of the intrusion database for PCNFSD, by setting bit 6, affects all exports.

A mask value of 128 disables PCNFSD deletion of printed files from the spool directory.
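To compose an NFS_SECURITY value, sum the mask values of the features you want from Table 14-5. A small illustrative sketch (the feature names here are invented for readability; only the numeric mask values come from the table):

```python
# NFS_SECURITY is a decimal bit mask; each feature is one bit from
# Table 14-5. Sketch of composing a value before running CNFNET.

SECURITY_BITS = {
    "superuser_mount": 1,     # bit 0
    "explicit_mount": 2,      # bit 1
    "mount_proxy_check": 4,   # bit 2
    "privileged_port": 8,     # bit 3
    "server_access": 16,      # bit 4
    "pcnfs_batch_print": 32,  # bit 5
    "no_intrusion_db": 64,    # bit 6
}

def security_mask(*features):
    """Combine named security features into one NFS_SECURITY value."""
    mask = 0
    for feature in features:
        mask |= SECURITY_BITS[feature]
    return mask

# Require PROXY entries for mounts and privileged source ports:
value = security_mask("mount_proxy_check", "privileged_port")  # 12
```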

NFS_LOG_CLASS (-1)

Enables the type of information written to the log file TCPWARE:NFSSERVER.LOG. This parameter is a bit mask value (in decimal), as defined in Table 14-6.

 

Table 14-5     NFS_SECURITY Bit Mask Values

Bit No.      Mask   Meaning When Set                                Supersedes Qualifier
                                                                    for ADD EXPORT

0            1      Superuser mount enabled. Restricts remote       /[NO]SUPERUSER
                    mounts of the OpenVMS filesystem to the
                    superuser UID (UID=0).

1            2      Explicit mount enabled. You can mount only      /[NO]EXPLICIT_MOUNT
                    directories listed in the EXPORT database
                    (and not their subdirectories).

2            4      Mount PROXY check. The UID and GID specified    /[NO]PROXY_CHECK
                    in all mount requests must exist in the
                    PROXY database.

3            8      Privileged port check. Requires that all        /[NO]PRIVILEGED_PORT
                    incoming NFS requests originate from
                    privileged ports (port numbers less than
                    1024) on the client.

4            16     Access checks for all files performed by the    /[NO]SERVER_ACCESS
                    Server only. The Server reports mode 777
                    (octal), allowing full use of OpenVMS ACLs
                    to grant or deny file access. One implication
                    is that the client reports an access mode of
                    rwxrwxrwx for all files.

5            32     Allow PCNFS batch queue printing.

6            64     Disable PCNFSD's use of the intrusion
                    database.

(remaining)         Reserved for future use.

 

You cannot disable fatal errors; the Server always writes them to OPCOM. The default (-1) enables all classes of information.

Table 14-6     NFS_LOG_CLASS Bit Mask Values

Bit...        Means when set...     Which are...

1             Warnings              Error recovery messages

2             MOUNT requests        MOUNT call messages

4             General               General operation messages

8             Security              Security violation messages

16            NFS errors            NFSERR_IO messages

(remaining)                         Reserved for future use

 
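Because NFS_LOG_CLASS is also a sum of mask values, you can decode a configured value back into its enabled log classes. An illustrative sketch (the class names are shorthand for the table rows, not identifiers TCPware uses):

```python
# NFS_LOG_CLASS is a decimal bit mask (Table 14-6); -1 enables
# every class. Sketch of decoding a configured value.

LOG_CLASSES = {1: "warnings", 2: "mount", 4: "general",
               8: "security", 16: "nfs_errors"}

def enabled_classes(mask):
    """Return the log classes enabled by an NFS_LOG_CLASS value."""
    if mask == -1:                     # default: everything on
        return sorted(LOG_CLASSES.values())
    return sorted(name for bit, name in LOG_CLASSES.items() if mask & bit)

enabled_classes(9)   # warnings plus security violations
```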

NFS_PCNFSD_ENABLE (1)

Enables or disables the PCNFSD services support. A value of 1 enables the PCNFSD services support. A value of 0 disables the support. A value of 3 enables print spooling of files on the server without enabling PCNFSD authentication. The logical name for NFS_PCNFSD_ENABLE is TCPWARE_PCNFSD_ENABLE.

NFS_PCNFSD_SPOOL

Specifies the name of the PCNFSD print spool directory as a UNIX style pathname. The directory must be an exported directory. That is, the directory must be an entry in the EXPORT database, or a subdirectory of an exported directory. The logical name for NFS_PCNFSD_SPOOL is TCPWARE_PCNFSD_SPOOL.

If the path specifies a subdirectory of an exported directory, each path element below the exported directory must match the corresponding OpenVMS subdirectory name. The filename translation rules, as described in Appendix A, NFS-to-OpenVMS Filename Mapping, apply to the path elements below the export point.

Note!     Because you can export different OpenVMS directories to different clients using the same path, the NFS_PCNFSD_SPOOL parameter can refer to different OpenVMS directories depending on which PCNFSD client requests the print spooling services.
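For illustration, the concatenation the Server performs for client spool directories can be sketched as follows (a hypothetical helper; the real path building happens inside PCNFSD):

```python
# For INITIALIZE PRINTER, the Server concatenates the NFS_PCNFSD_SPOOL
# path with the client name to form that client's spool subdirectory.
# Illustrative sketch only; "/spool" and "pc42" are example values.

def client_spool_path(spool_root, client_name):
    """Join the UNIX-style spool root and the client subdirectory."""
    return spool_root.rstrip("/") + "/" + client_name

client_spool_path("/spool", "pc42")   # "/spool/pc42"
```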

 

Advanced Parameters

You should not normally change the parameters described in this section. If you need to change a value for an advanced parameter, edit the TCPWARE_SPECIFIC:[TCPWARE]TCPWARE_CONFIGURE.COM file.

The advanced parameters that follow appear in alphabetical order. The default setting for each parameter is in parentheses.

Parameter

Description

NFS_DFLT_UID (-2), NFS_DFLT_GID (-2)

Specifies the default UID and GID. The Server uses these defaults in the following cases:

•      The Server receives a request from the superuser (UID=0, any GID) for which there is no PROXY mapping. The Server replaces the superuser UID and GID with the default UID and GID.

•      The server processes a get attributes request and cannot find a file's owner UIC in the PROXY database. The Server uses the default UID and GID instead.

NFS_DIRLIFE_TIMER (:3)

Sets when to delete internal directory cache data structures. The Server periodically scans these data structures and deletes any directory cache that has existed for longer than the NFS_DIRLIFE_TIMER value. This conserves memory. Specify the interval as an OpenVMS delta time. The default is 3 minutes.

If you are unfamiliar with delta time, see Chapter 13, NFS-OpenVMS Client Management, Client Commands.
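As an illustration of the delta-time shorthand used for these defaults, the fields read hours:minutes:seconds with leading fields allowed to be empty, so ":3" is 3 minutes and "::30" is 30 seconds. A hypothetical converter:

```python
# Illustrative sketch: convert the [hh]:[mm]:[ss] delta-time shorthand
# used for NFS_DIRLIFE_TIMER (":3"), NFS_DIRTIME_TIMER ("::30"), and
# NFS_OPENFILE_TIMER ("::6") into a total number of seconds.

def delta_to_seconds(delta):
    """Convert an hours:minutes:seconds shorthand to total seconds."""
    parts = delta.split(":")          # fields given left to right;
    total = 0                         # empty fields count as zero
    for part in parts:
        total = total * 60 + (int(part) if part else 0)
    return total * 60 ** (3 - len(parts))  # scale omitted trailing fields

delta_to_seconds(":3")    # 180 seconds (3 minutes)
delta_to_seconds("::30")  # 30 seconds
```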

NFS_DIRREAD_LIMIT (-1)

Sets the maximum size in bytes for each file read while processing a get attributes request. If the estimated file size exceeds this value, TCPware does not read the file to determine its exact size and returns an estimated size instead. The estimated file size is always larger than the exact size. The -1 default effectively turns off file size estimation.

This parameter applies only to filesystems exported with the /CONVERT option (the default). A value of 0 disables TCPware from determining exact file sizes on requests.

This parameter may provide the NFS Client with inexact file sizes. This is generally not a problem, but may affect some applications.

NFS_DIRTIME_TIMER (::30)

Sets a time interval that determines when the Server updates the directory access time between NFS operations. Specify the interval as an OpenVMS delta time. The default is 30 seconds.

NFS_FILE_CACHE_SIZE (1024)

Determines the maximum number of files allowed to have attributes in cache at any one time. The number must be larger than the SYSGEN parameter CHANNELCNT. The value must also be larger than the number of combined TCP and UDP threads (see the NFS_TCP_THREADS and NFS_UDP_THREADS parameters).
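The constraints above can be expressed as a simple check (an illustrative sketch; the CHANNELCNT value shown is an example, not a recommended setting):

```python
# NFS_FILE_CACHE_SIZE must exceed both the SYSGEN CHANNELCNT value and
# the combined TCP+UDP thread counts. Sketch of that sanity check.

def file_cache_size_ok(cache_size, channelcnt, tcp_threads, udp_threads):
    """Check the documented constraints on NFS_FILE_CACHE_SIZE."""
    return cache_size > channelcnt and cache_size > tcp_threads + udp_threads

# With the default cache size (1024), default threads (20 each), and an
# example CHANNELCNT of 512:
file_cache_size_ok(1024, 512, 20, 20)   # True
```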

NFS_NOCHECKSUM (0)

Enables or disables checksum generation for UDP datagrams. This parameter is a boolean value. When the value is 0 (false), the Server generates checksums for outgoing datagrams. When the value is 1 (true), the Server does not generate checksums. Enabling checksums maintains data integrity and is the default.

Note!     Disabling checksums may increase system performance but could have an adverse effect on certain NFS clients.

NFS_OPENFILE_TIMER (::6)

Sets the time interval (in delta time) that a file remains open after it was last accessed. Keeping files open can speed up request processing, since a file can remain open across successive read or write requests instead of being opened and closed for each one. The default is six seconds. Do not leave files open for an extended time, but an interval that is too short forces frequent opens and closes, which can decrease performance.

 

The following parameters are only meaningful if PCNFSD was enabled during TCPware installation:

Parameter

Description

NFS_PCNFSD_DFLTPRTOPT

Specifies the default print options when submitting a spooled print job for printing. The TCPware logical name for NFS_PCNFSD_DFLTPRTOPT is TCPWARE_PCNFSD_DFLTPRTOPT.

NFS_PCNFSD_JOB_LIMIT

Specifies the maximum packet size of the information displaying the queued print jobs. Some systems require this limitation. Note that if the actual queued job information exceeds the byte limit set by this parameter, TCPware truncates the information. The TCPware logical name for NFS_PCNFSD_JOB_LIMIT is TCPWARE_PCNFSD_JOB_LIMIT. If you do not define this logical, TCPware determines the size of the packet at run-time.

NFS_PCNFSD_PRINTER (SYS$PRINT)

Specifies the print queue used when the NFS client does not specify a printer (most clients do). This parameter is optional; the default is SYS$PRINT. The TCPware logical name for NFS_PCNFSD_PRINTER is TCPWARE_PCNFSD_PRINTER.

NFS_PCNFSD_PRINTER_LIMIT

Specifies the maximum packet size of the information displaying the printers known on the server. Some systems require this limitation. Note that if the actual printer information exceeds the byte limit set by this parameter, TCPware truncates the information. The TCPware logical name for NFS_PCNFSD_PRINTER_LIMIT is TCPWARE_PCNFSD_PRINTER_LIMIT. If you do not define this logical, TCPware determines the size of the packet at run-time.

NFS_PORT (2049)

Sets the TCP and UDP port through which the NFS, MOUNT, and PCNFSD protocols receive data.

NFS_TCP_THREADS (20)

Controls the number of simultaneously serviced requests received over TCP connections the Server can support. The Server requires a thread for each TCP request it receives. This thread is active for the amount of time it takes the server to receive the request, perform the operation, and send a reply to the client.

The more threads the Server supports, the better the performance, because the Server can process more requests simultaneously. Note that the number of threads has no impact on the number of TCP connections the Server supports.

NFS_UDP_THREADS (20)

This is similar to the NFS_TCP_THREADS parameter but relates to UDP threads.

NFS_XID_CACHE_SIZE (40)

Sets the maximum number of XID cache entries. The XID cache stores replies to requests for all NFS protocol operations. When the server receives a request for an operation, it checks the XID cache for a reply to the same request. If the server locates the reply, it retransmits it. If it cannot locate the reply, it processes the request normally.

The XID cache prevents the system from transmitting false error messages for operations such as delete, create, rename, and set attributes.

For example, the Server receives a delete file request from a remote host. After the Server deletes the file and sends a success reply, the network loses the reply. Because the remote host does not receive a reply, it sends the delete file request again. Without an XID cache, TCPware would try to process the request again and send a false error message that it could not find the file. The XID cache prevents the system from sending the false error because it stores and retransmits the original reply.

Set the NFS_XID_CACHE_SIZE parameter to at least twice the largest of the following:

•      NFS clients using the NFS Server

•      UDP threads (as set by the NFS_UDP_THREADS parameter)

•      TCP threads (as set by the NFS_TCP_THREADS parameter)

The parameter sets the size of both the UDP and TCP XID caches (each protocol has a separate XID cache).
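The sizing rule above can be sketched as follows (an illustrative helper, not part of TCPware):

```python
# NFS_XID_CACHE_SIZE should be at least twice the largest of the number
# of NFS clients, UDP threads, and TCP threads. The thread defaults
# below mirror NFS_UDP_THREADS (20) and NFS_TCP_THREADS (20).

def min_xid_cache_size(nfs_clients, udp_threads=20, tcp_threads=20):
    """Recommended minimum XID cache size per the sizing rule."""
    return 2 * max(nfs_clients, udp_threads, tcp_threads)

min_xid_cache_size(15)   # 40, matching the default with 20 threads
min_xid_cache_size(35)   # 70: raise NFS_XID_CACHE_SIZE above the default
```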

 

Implementation

This section describes the Server restrictions and implementation of the Network File System (NFS) protocol. The material presented here requires a thorough understanding of the protocols. It does not explain or describe the protocols.

Restrictions

The Server has the following OpenVMS-related restrictions:

•   The Server supports Files-11 ODS-2 structure level disks, ODS-5 formatted disks, and any CD-ROM format.

•   The Server does not implement volume protection. All exported devices should be public devices.

•   The Server does not generate security or audit alarms. However, the Server writes access violations to log file TCPWARE:NFSSERVER.LOG (as long as you enable security logging through the NFS_LOG_CLASS parameter).

•   When creating files and directories, the Server sets the owner UIC of the file or directory to the UIC derived from the UID/GID in the create request authentication information or to the UID/GID in the set attributes information (if available).

NFS Protocol Procedures

The Server implements the following NFS protocol (version 3) procedures (while continuing to support version 2):

Procedures

Description

ACCESS (v3 only) (access)

The server determines the access rights that a user, as identified by the credentials in the request, has with respect to a file system object.

COMMIT CACHED WRITE DATA (v3 only) (commit)

The server forces data to stable storage that was previously written with an asynchronous write call.

CREATE FILE (create)

The server creates files using the record format specified in the EXPORT database entry. The client may specify one of three methods to create the file:

UNCHECKED: File is created without checking for the existence of a duplicate file.

GUARDED: Checks for the presence of a duplicate and fails the request if a duplicate exists.

EXCLUSIVE: Follows exclusive creation semantics, using a verifier to ensure exclusive creation of the target.

GET ATTRIBUTES (getattr)

Gets a file's attributes. The Server handles certain file attributes in ways that are compatible with the OpenVMS system. These attributes are:

File protection—The Server maps the OpenVMS file protection mask to the UNIX file protection mask.

Number of links—Although OpenVMS supports hard links, it does not maintain a reference count. Therefore, the Server sets this value to 1 for regular files and 2 for directory files.

UID/GID—The Server maps a file owner's UIC to a UID/GID pair through the PROXY database.

Device number—The Server returns the device number as -1.

Bytes used—The total number of bytes used by the file.

Filesystem id—The Server returns the filesystem ID as 0.

Access, modify, status change times—The OpenVMS system does not maintain the same file times as NFS requires. The Server returns the OpenVMS revision (modify) time for all three NFS times.

For directory files, the Server returns the access, status change, and modify times as a reasonably recent time, based on the time of the last Server-initiated directory change, and the NFS_DIRTIME_TIMER parameter. This is a benefit to clients that cache directory entries based on the directory times.

OpenVMS bases its time on local time, while UNIX bases its time on Universal time (Greenwich mean time), so the two may not agree. The offset from Universal time that you specify when configuring TCPware resolves the difference between local and Universal time.

GET DYNAMIC FILESYSTEM INFO (v3 only) (fsstat)

The server provides volatile information about a filesystem, including:

–      total size and free space (in bytes)

–      total number of files and free slots

–      estimate of time between file system modifications

GET FILESYSTEM STATISTICS (v2 only) (statfs)

Returns filesystem statistics. The Server handles certain file attributes in ways that are compatible with the OpenVMS system. These attributes are:

Block size—The block size is 1024.

Total number of blocks—The total number of blocks is the SYS$GETDVI MAXBLOCK parameter divided by 2.

Blocks free—The number of blocks free is the SYS$GETDVI FREEBLOCK parameter divided by 2.

Blocks available—The number of blocks available to unprivileged users is the same as the number of blocks free.
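The divide-by-two arithmetic above reflects the mapping from 512-byte OpenVMS blocks to the 1024-byte blocks that statfs reports. An illustrative sketch (statfs_blocks is a hypothetical helper, and the SYS$GETDVI values are example numbers):

```python
# statfs reports 1024-byte blocks, while SYS$GETDVI returns 512-byte
# OpenVMS block counts (MAXBLOCK, FREEBLOCK), hence the divide-by-two.

def statfs_blocks(maxblock, freeblock):
    """Map SYS$GETDVI 512-byte block counts to 1024-byte statfs values."""
    return {
        "bsize": 1024,              # fixed block size reported
        "blocks": maxblock // 2,    # total blocks
        "bfree": freeblock // 2,    # blocks free
        "bavail": freeblock // 2,   # same as bfree for unprivileged users
    }

statfs_blocks(1000000, 400000)
```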

GET STATIC FILESYSTEM INFO (v3 only) (fsinfo)

The server provides nonvolatile information about a filesystem, including:

–      preferred and maximum read transfer sizes

–      preferred and maximum write transfer sizes

–      flags for support of hard links and symbolic links

–      preferred transfer size of readdir replies

–      server time granularity

–      whether or not times can be set in a settaddr request

LINK (link)

Creates a hard link to a file. The Server stores the link count in an application access control entry (ACE) on the file.

LOOKUP FILE (lookup)

Looks up a file name. If the file name does not have a file extension, the Server first searches for a directory with the specified name. If the Server fails to locate a directory, it searches for the file name without an extension.

MAKE DIRECTORY (mkdir)

Creates a directory. The OpenVMS system does not allow the remote host to create more than eight directory levels from the root of the OpenVMS filesystem. The Server ignores access and modify times in the request.

READ DIRECTORY (readdir)

Reads a directory. The Server returns file names using the filename mapping scheme as specified in the EXPORT database entry. The Server also drops the VMS version number from the file name for the highest version of the file.

READ DIRECTORY PLUS ATTRIBUTES (v3 only)

(readdirplus)

In addition to file names, the server returns file handles and attributes in an extended directory list.

READ FROM FILE (read)

Reads from a file. The Server converts VARIABLE and VFC files to STREAM or STREAM_LF format (depending on the option set) as it reads them. The server returns EOF when detected.

REMOVE DIRECTORY (rmdir)

Deletes a directory.

REMOVE FILE (remove)

Deletes a file.

RENAME FILE (rename)

Renames a file. If the destination filename is the same as an existing filename and the destination filename does not have a zero or negative version number, the Server overwrites the existing file.

READ LINK (readlink)

Reads the contents of a symbolic link.

SET ATTRIBUTES (setattr)

Sets file attributes. The Server handles certain file attributes in ways that are compatible with the OpenVMS system. These attributes are:

File protection—The Server maps the UNIX file protection mask to the OpenVMS file protection mask, as shown earlier in this chapter.

UID/GID—The client changes the file owner's UIC. The PROXY database maps the new UID/GID to an OpenVMS UIC. If the Server cannot locate the new UID/GID in the database, it returns an error and does not change the owner UIC.

Size—If the file size is larger than the allocated size, the Server extends the file. If the size is 0, the Server truncates the file and sets the record attributes to sequential STREAM_LF. You cannot change the size of variable length or VFC files (except to zero).

Access time—Changing the access time has no effect on the OpenVMS system.

Modify time—The modify time updates the OpenVMS revision time.

SYMBOLIC LINK (symlink)

Creates a symbolic link. The Server creates the file with an undefined record structure and uses an application ACE on the file to mask it as a symbolic link.

WRITE TO FILE (write)

Writes to a file. The Server does not allow a remote host to write to a directory file, or to VARIABLE and VFC files.

If the Server allowed a remote host to write to an existing OpenVMS file that was not a STREAM_LF or fixed-length record format file, the file could become corrupted. The Server does not allow a remote host to explicitly change the record format of an OpenVMS file.

The Server can return the non-standard NFS errors ETXTBSY (26) and EINVAL (22). The Server returns ETXTBSY when an OpenVMS user has a file open for exclusive access and an NFS user tries to use the file in a way that is inconsistent with the way the OpenVMS user opened the file. The Server returns EINVAL if an NFS user tries to write to or change the size of a VARIABLE or VFC record format file.

With Version 3, the server supports asynchronous writes (see COMMIT CACHED WRITE DATA).

 

PCNFSD Protocol Procedures

The NFS Server implements both the PCNFSD Version 1 and Version 2 protocol procedures, offers printer support, and offers additional break-in security.

PCNFSD Version 1

The PCNFSD Version 1 procedures include:

AUTHENTICATE

Performs user authentication. Maps a username and password into a UID/GID pair from the PROXY database.

INITIALIZE PRINTER

Prepares for remote printing. Returns the pathname of the client's spool directory. The Server concatenates the spool directory path (derived from NFS_PCNFSD_SPOOL parameter) with the client name.

NULL

The null procedure; standard for all RPC programs.

START PRINTING

Submits a spooled print job for printing. The print data is in a file created in the spool directory, which the Server identifies by the client name. If the user omits a printer, the Server uses the default printer set by the NFS_PCNFSD_PRINTER parameter. (See Print Options.)

 

PCNFSD Version 2

The supported PCNFSD Version 2 procedures include:

ALERT OPERATOR

Sends a message to the system operator. If the user does not specify a printer, the Server uses the default printer set by the NFS_PCNFSD_PRINTER parameter. You cannot use batch queues.

AUTHENTICATE

Performs user authentication. Maps a username and password into a UID/GID pair from the PROXY database.

CANCEL PRINT

Cancels a print job. If the user does not specify a printer, the Server uses the default printer set by the NFS_PCNFSD_PRINTER parameter. You cannot use batch queues.

HOLD PRINT

Places a hold on a previously submitted print job. The job remains in the queue but the Server does not print it. If the user does not specify a printer, the Server uses the default printer set by the NFS_PCNFSD_PRINTER parameter. You cannot use batch queues.

INFORMATION

Determines which services the current PCNFSD implementation supports.

INITIALIZE PRINTER

Prepares for remote printing. Returns the pathname of the client's spool directory. The Server concatenates the spool directory path (derived from NFS_PCNFSD_SPOOL parameter) with the client name.

LIST PRINTERS

Lists all printers known on the server. If the NFS_PCNFSD_PRINTER_LIMIT parameter sets the packet size smaller than the actual amount of printer information, TCPware truncates the list.

LIST QUEUE

Lists all or part of the queued jobs for a printer, depending on how you set the NFS_PCNFSD_JOB_LIMIT parameter.

NULL

The null procedure; standard for all RPC programs.

PRINTER STATUS

Determines the status of a printer. If the user does not specify a printer, the Server uses the default printer set by the NFS_PCNFSD_PRINTER parameter. You cannot use batch queues.

RELEASE PRINT

Releases the "hold" on a previously held print job. If the user does not specify a printer, the Server uses the default printer set by the NFS_PCNFSD_PRINTER parameter. You cannot use batch queues.

START PRINTING

Submits a spooled print job for printing. The print data is in a file created in the spool directory, which the Server identifies by the client name. If the user does not specify a printer, the Server uses the default printer set by the NFS_PCNFSD_PRINTER parameter. The Server submits the job using the print options described next.

 

Print Options

The NFS Server submits the job using the print options in Table 14-7.

Table 14-7     Print Options Settings

Print option...           Has values...

r (RFM type)              u (undefined), s (stream), l (streamlf),
                          c (streamcr), n (none)*

f (file flag)             + (flag), - (no flag)*

e (paginate)              + (feed), - (no feed)*

h (page header)           + (header), - (no header)*

s (double space)          + (double space), - (no space)*

b (file burst)            + (burst), - (no burst)*

t (file trailer)          + (trailer), - (no trailer)*

p (log spool)             + (log spool), - (no log spool)*

l (passall)**             + (passall), - (no passall)*

c (number of copies)      character string in the range 1 to 255, otherwise 1*

*  Defaults set by PCNFSD; when defined, the NFS_PCNFSD_DFLTPRTOPT parameter
   overrides any or all of the default values

** If you use the l+ option (passall), TCPware ignores all the other options

The following steps show the procedure and syntax for setting up an NFS_PCNFSD_DFLTPRTOPT parameter using several of the available print options:

1   Edit the TCPWARE_SPECIFIC:[TCPWARE]TCPWARE_CONFIGURE.COM file.

2   Find the line containing the definition for the NFS_PCNFSD_DFLTPRTOPT parameter in the code. It originally appears in the TCPWARE_CONFIGURE.COM file as follows:

$ NFS_PCNFSD_DFLTPRTOPT == ""

3   Modify the line to specify the desired print options in the quotation marks. The syntax is as in the following example:

$ NFS_PCNFSD_DFLTPRTOPT == "h+t+rl"

4   Shut down and restart NFS as follows:

$ @TCPWARE:SHUTNET NFS
$ @TCPWARE:STARTNET NFS

This example submits the print job using a page header, a file trailer, and STREAMLF record format type. This example also uses the remaining print option defaults.

The PC-NFS client can further override the default print option values. Print options specified by the PC-NFS client apply only to the file named in the client's request packet.
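An option string such as "h+t+rl" packs single-letter options from Table 14-7. A hypothetical parser sketch showing how such a string decomposes (the real parsing happens inside PCNFSD):

```python
# Sketch of decomposing a print option string from Table 14-7:
# "r" takes a format letter, "c" takes a digit string, and the
# remaining flag options each take + or -.

def parse_print_options(opts):
    """Return a dict of option letter -> value from an options string."""
    result, i = {}, 0
    while i < len(opts):
        letter = opts[i]
        if letter == "r":                 # RFM type takes a format letter
            result[letter], i = opts[i + 1], i + 2
        elif letter == "c":               # copies takes a digit string
            j = i + 1
            while j < len(opts) and opts[j].isdigit():
                j += 1
            result[letter], i = opts[i + 1:j], j
        else:                             # flag options take + or -
            result[letter], i = opts[i + 1], i + 2
    return result

parse_print_options("h+t+rl")   # {'h': '+', 't': '+', 'r': 'l'}
```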

Break-in Security

PCNFSD uses the OpenVMS Intrusion database to store intrusion records, unless you disabled this during NFS-OpenVMS Server configuration. When a PC sends an invalid user authentication request, the Server records it in the Intrusion (break-in) database.

If the number of invalid requests reaches the threshold set for detecting break-in attempts, the Server logs an attempted break-in. This locks out the PC until you remove the intrusion record or take other action described in HP's Guide to System Security.

You can show intrusions by using the SHOW INTRUSION/OLD command at the DCL level. You can then remove any offending entries by using the DELETE/INTRUSION_RECORD source command, where the source parameter is the remote device or system from which the user tried to log in. (Both commands require the SECURITY privilege.)

See the SHOW INTRUSION/OLD and DELETE/INTRUSION_RECORD commands in HP's VMS DCL Dictionary for details.

Troubleshooting

If you are experiencing network communication-related problems on the NFS-OpenVMS Server, please check the following items:

1   Make sure TCPware is running on the OpenVMS system.

2   Make sure the Server is running. If not, start it by entering the following command at the DCL prompt:

$ @TCPWARE:STARTNET NFS

If the Server was started but is no longer running, examine the TCPWARE:NFSSERVER.LOG file. This file contains information to help you isolate problems with the Server. After correcting any problems reported in the log file, restart the Server.

3   To verify general connectivity between the two systems, try using FTP or TELNET, if purchased and installed on your system. For example, try to open a TELNET connection with the remote host in question. If another TCPware product is not available on your system, try using the TCPware PING utility.

4   Verify the internet addresses the local host and the remote hosts are using. If your local network includes a gateway, also verify the gateway address.

If you are experiencing problems performing NFS operations from an NFS client, check the Server's TCPWARE:NFSSERVER.LOG file. It may contain messages that help isolate the problem. You can create a new NFSSERVER.LOG file by entering NETCU SET LOG/NEW/NFS.

Certain messages can also appear with the NETCU SHOW EXPORT, SHOW MOUNT, and UNMOUNT commands.

To access help on error messages, enter HELP TCPWARE MESSAGES.