ARC - Authenticated Remote Control

R.Többicke, CERN/CN
preliminary, incomplete Version 0.2, revised April 24, 1996


ARC

Quite often, operations which require special privileges must be performed by users who are normally not granted those privileges. The mechanism granting the necessary privileges may simply be too coarse: in the Unix environment almost every command of a vaguely administrative nature requires root privilege, a privilege which, besides enabling a user to perform comparatively harmless operations such as defining a new printer or adding a new user, also gives him complete control over the rest of the machine. Or the operation may be delicate enough that it should not be executed by a non-specialist without other operations being performed before or after it.

The Unix root privilege has another drawback: it is controlled by a single password, making it difficult to manage and to track who is privileged at any given time and who has performed what operation.

The Unix root privilege also turns out to be largely insufficient in a distributed computing environment: it gives control over the individual Unix system, but in a complex networked environment services usually span systems and implement privilege schemes which do not relate to 'local' root authority.

arc was inspired by the sysctl facility written at the IBM T. J. Watson Research Center in Yorktown Heights, NY.

It basically works as follows:

  1. the user formulates a privileged operation concerning a service which is potentially not limited to his local machine and ships the request to a server;
  2. the service in question trusts the server that is going to perform the operation. This trust could be implemented, for example, by recording an administrator's password on the server's local disk. The point is that the user's own Unix system is not trusted; only the server is;
  3. it is assumed that the user has been authenticated by an authentication mechanism independent of his own workstation - a mechanism which the server can rely upon. arc uses Kerberos for this purpose;
  4. based on the user's (authenticated) identity the server decides whether to refuse the user's request or to accept it and perform the operation on his behalf. The point here is that the server must be configurable or, better, programmable. Under sysctl the server was programmed in a special TCL dialect; arc allows the server to be programmed using any language interpreter, although the library of predefined operations and the examples all use PERL. If the predefined operations are not needed, simply using /bin/sh would work as well.

Implementation

arc is implemented as two modules, the requestor module arc (client) and the server module arcd.

The requestor is executed as a normal Unix command, specifying the server's hostname, processing flags, and the operation to be executed on the server. The command's standard input is connected to the server process's standard input, making it possible to pass arbitrary data, just as with an 'rsh' (remote shell) command.
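For example, a request to run a purely hypothetical subcommand 'printer_add' on a server, feeding it a printer definition on standard input, might look like this (the '-h' flag selecting the server host is the one used in the reboot example later in this document; the subcommand, host and file names are illustrative only):

	arc -h arcserver.cern.ch printer_add lw20 < lw20.def

The contents of lw20.def are passed to the operation running on the server, and any output the operation produces is returned on the client's standard output.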

The arcd server is started by the Unix inetd super-daemon for every new request. The user's requested operation is passed as parameters on the command line, and the standard input and output are connected to the client 'arc' command's input and output. Variables in the server's environment specify the Kerberos identity of the requesting user, the hostname from which the command was issued, and whether the connection transfers stdin/stdout encrypted or in clear text. The specified script is executed under Unix uid 0.
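As an illustration, the inetd configuration on the server might look roughly as follows; the service name, port number and installation path are site-dependent and purely illustrative:

	# /etc/services
	arcd	4241/tcp

	# /etc/inetd.conf
	arcd	stream	tcp	nowait	root	/usr/local/sbin/arcd	arcd

inetd runs the daemon as root, which is what allows the requested script to be executed under uid 0 as described above.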

If AFS support has been compiled into ARC, a setpag() call is executed before the interpreter is started. This way any server code can safely acquire privileged AFS tokens, which will not be inherited by subsequent invocations.

In the standard configuration the server starts a PERL interpreter with a standard initialization script. The functionality described hereafter applies only when this script is used.

Operations and Access Control

Operations (arc-subcommands) are represented by files of the form <subcommand>.arc in standard directories. The list of directories searched is coded into the initialization script as the contents of the PERL array ARCETC.

The file is loaded into the PERL interpreter using require. The command file should do two things:

  1. Specify who is authorised to perform the operation by setting the variable $ACL{<subcommand>} to ANY (anybody), AUTH (anybody who is authenticated in the local cell/realm), ACL (only users with the ACL privilege), or a filename which is interpreted as relative to any of the directories in ARCETC. The file contains a list of Kerberos principals allowed to execute the operation. An extension '.acl' is automatically appended (see the example following this list).
  2. Define a PERL procedure <subcommand>. If the access control tests pass, the procedure is executed with all the remaining command line arguments passed as a single string.
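
A minimal sketch of what such a command file might look like is shown below. The subcommand name, the operation performed and the principals listed in the ACL file are purely illustrative, and the exact interface expected by the initialization script may differ in detail:

	# printer_add.arc -- illustrative sketch only
	# authorisation: only principals listed in printer_add.acl
	# (searched in the ARCETC directories) may run this subcommand
	$ACL{'printer_add'} = 'printer_add';

	sub printer_add {
	    local($args) = @_;                  # remaining command line as one string
	    local($def) = join('', <STDIN>);    # data piped in by the client
	    open(PRINTCAP, ">>/etc/printcap") || die "printer_add: $!\n";
	    print PRINTCAP $def;
	    close(PRINTCAP);
	    print "printer $args defined\n";    # returned to the client
	}
	1;      # 'require' expects the file to return a true value

The corresponding ACL file would simply contain one Kerberos principal per line, e.g.:

	# printer_add.acl (principals are illustrative)
	printeradmin@CERN.CH
	operator@CERN.CH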

If no <subcommand>.arc file is found and the issuer has the ACL privilege, the complete command is executed as a PERL command (using eval) under root privilege. A simple way to reboot a remote machine is therefore:

	arc -h <remote-node> system reboot

Environment variables, standard subcommands and auxiliary routines

Upon invocation of the command procedure the following environment variables are defined:

A few subcommands have been predefined:

The following subroutines are available for calling from command processing subroutines.

Disk space management

Privileged AFS commands, mainly called by afs_admin: acl, fs, mkdir, pv, pv-list, vos

operations on normally read-only directories

Batch Job Token Extension

Overview

The batch job token extension basically works as follows:

  1. upon job submission, the user's current AFS token is attached to the batch job (e.g. as a comment line in the job script). The token, the current Unix uid and a timestamp are encrypted using a public-key algorithm which allows encryption by anyone but decryption only by a privileged process that knows the private key;
  2. when the job starts, a setuid root program extracts the encrypted triple and passes it to a token extension server using arc, together with the requestor's Unix uid;
  3. the token extension server decrypts the (token, uid, timestamp) triple. If the caller's Unix uid matches and the timestamp is within an allowable range, the AFS token is decrypted using the AFS encryption key, the start and end times are adjusted to allow for the maximum lifetime, and the token is re-encrypted in the AFS encryption key;
  4. the newly created AFS token is passed back to the waiting job over the current arc session (which must be in encrypted mode), where it is placed into the kernel token cache.

Security concerns:

Prerequisites

Implementation for LoadLeveler

llsubmit

The llsubmit command has to be modified in order to include the current PGP-encrypted AFS token in the job. The token is identified by a '##BATCHKAUTH=' line near the beginning of the job script. This implies that the job must be a script; submitting an executable module (e.g. /bin/ls) will not work.
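
Schematically, a submitted job script then looks like the following; the LoadLeveler keyword lines and the application are illustrative, and the encrypted data is shown as a placeholder:

	#!/bin/ksh
	##BATCHKAUTH=<PGP-encrypted (token, uid, timestamp) triple>
	# @ job_type = serial
	# @ queue
	/usr/afsws/bin/tokens	# the long-lived token should be visible here
	my_application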

The modification can either be done by creating a front-end which copies the job into a temporary file and then passes it to 'llsubmit', or by using a submit filter exit. The filter extracts the current AFS token using GetToken and encrypts it using PGP.
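
A rough sketch of such a filter in PERL is shown below. It assumes the filter receives the job script on standard input and must write the modified script to standard output; this interface, as well as the GetToken/PGP pipeline and the recipient key name, are assumptions and would have to be adapted to the local setup:

	#!/usr/bin/perl
	# submit filter sketch: insert a ##BATCHKAUTH= line after the first
	# line of the job script (assumes: script on stdin, result on stdout)
	$enc = `GetToken | pgp -fea batchauth`;     # hypothetical encryption pipeline
	$enc =~ s/\n/ /g;                           # flatten the armoured output
	$first = <STDIN>;
	print $first;                               # keep the '#!' line first
	print "##BATCHKAUTH=$enc\n";
	print while <STDIN>;                        # copy the rest unchanged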

Job startup

There are at least two places in LoadLeveler where the hook to activate the token can be placed.

The most convenient place is the llcheckpriv module. This program is called before the job output files are opened, and the job script resides in the current directory at that point. We replace llcheckpriv by a shell script that calls the setuid program batchauth and then calls the original llcheckpriv program. batchauth reads the job file, extracts the PGP-encrypted token and passes it to the token extension server. If all goes well, a valid AFS token ends up in the kernel cache, after which the original llcheckpriv module carries on with its normal business.
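
The replacement can be a trivial wrapper along the following lines (the installation paths are illustrative, and the original module is assumed to have been renamed to llcheckpriv.orig):

	#!/bin/sh
	# wrapper replacing llcheckpriv (sketch): activate the long-lived AFS
	# token, then hand over to the original module
	/usr/local/bin/batchauth
	exec /usr/lpp/LoadL/full/bin/llcheckpriv.orig "$@"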

The official place for such an exit is the job prolog (JOB_PROLOG configuration variable). However, on some architectures (e.g. AIX) LoadLeveler has built-in AFS support without token extension. On AIX the AFS token is set up after the job prolog is run, which means that the freshly created long-lived token is promptly overwritten by the user's (usually expired) token from the time of submission.

LoadL_starter

The LoadL_starter has to be run in a PAG (e.g. using pagsh), otherwise the token ends up being Unix UID-based. In that case an 'unlog' issued by a user logged into the same machine as his job would discard the job's token as well.
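
How exactly the starter ends up inside a PAG depends on the local LoadLeveler setup; schematically it amounts to wrapping its invocation in pagsh, e.g. (path illustrative):

	pagsh -c /usr/lpp/LoadL/full/bin/LoadL_starter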

Cron jobs

arc has been used to implement the acrontab command, which allows scheduling of cron jobs with valid AFS tokens.

Cron entries are kept on a central server, whereas the commands are executed on a machine of the user's choice. The format of the crontab entries is the usual SysV (to be precise: AIX) one, except that the command MUST start with the IP node name of the node on which the command is to be executed. This looks almost like an 'rsh' with a hostname alias, e.g.:

	18 12 * * *  rsrtb /usr/afsws/bin/tokens; ls -l; env

This executes 'tokens', 'ls' and 'env' on rsrtb every day at 12:18 (not very useful, by the way). The output of the command is returned as a mail message to wherever the user's .forward file (in AFS) points; the default is userid@afsmail.

The command syntax is similar to the normal System V crontab command:
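Assuming it mirrors the System V model, usage would look roughly as follows (the options shown are those of the standard crontab command and may not all be supported by acrontab):

	acrontab <file>     replace the acron table by the contents of <file>
	acrontab -l         list the current acron table
	acrontab -r         remove the acron table
	acrontab -e         edit the acron table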

Prerequisites:

  1. in order to use 'acrontab', the user must hold a valid Kerberos ticket, either through 'login' or 'klog.krb'.
  2. the node on which the command is to be executed must have 'arc' configured: the easiest way is to run (as root) '/afs/usr/sue/etc/sue.install arcd', which internally installs two other prerequisites, 'arc' and 'srvtab'.
  3. cron jobs are subject to normal cron restrictions as imposed by cron.allow and cron.deny files in the directory applicable to the target system (and carefully chosen to differ from system to system). The semantics of the files are the AIX ones, but there are no big surprises.

How it works

The user's cron table entries are passed to a central server over an authenticated arc connection. The server creates entries for the internal acron command, which is passed the user's Kerberos principal and the target node name; the user's command(s) and all parameters are passed on stdin.

acron is started by cron as root on the server. It creates an AFS token for the user based on the Kerberos principal it was passed, and opens an authenticated, encrypted arc connection to the target node, where it invokes the arc subcommand acron and passes the user's token, command and arguments.

The acron subcommand switches to the user's account and home directory, reads the AFS token over the (encrypted) communications channel and starts a shell to execute the command. Command results are sent back to the server, which collects them in a temporary file that is mailed to the user unless it is empty.
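
Schematically, the crontab entry from the example above would thus be turned into something like the following on the central server (the acron path, the argument order and the way the user's commands are stored and fed to stdin are purely illustrative):

	18 12 * * * /usr/local/bin/acron <principal> rsrtb < /var/acron/<entry-file>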


Last updated January 15th, 2004.


Rainer TOBBICKE
Wed Apr 24 18:35:49 METDST 1996