Tags: scp

copysshenv: Extract the ssh-agent environment variables from a running process

Dated 2008-04-20, copysshenv extracts the ssh-agent variables from the environment of any running process.  It probably only works on Linux, though it might also work on Solaris or any other OS where /proc/<process id>/environ holds the current environment of the process.

ssh-agent is used to cache your SSH keys in memory, so that you don't have to retype your SSH key passphrase each time you run ssh or scp.  It works by setting two environment variables: SSH_AGENT_PID and SSH_AUTH_SOCK.  Any process (owned by the same user who ran ssh-agent in the first place) can use the contents of these variables to communicate with the running ssh-agent process, specifically to ask it to handle a challenge from an SSH server.  There are a couple of ways to make sure these variables are in your environment (see the ssh-agent man page), but once in a while you find yourself in a situation where your current shell doesn't have the variables set.  This can happen, for instance, when you are logged in remotely to a system and want to use the ssh-agent already running on that system to log you in to the next system.  The shell you get when you log in won't have the variables set, even though ssh-agent is already running on that machine.
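For reference, the normal way to get the variables set is to start the agent yourself; this is straight out of the ssh-agent man page, not something copysshenv adds:

BEGIN EXAMPLE

# Start an agent and import its variables into the current shell,
# then load your key into it (prompts once for the passphrase).
eval `ssh-agent -s`
ssh-add

END EXAMPLE

The problem copysshenv solves is the opposite case: the agent is already running somewhere, but your current shell never got the variables.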

BEGIN EXAMPLE

jdimpson@artoo:~/bin$ copysshenv  21527
export SSH_AUTH_SOCK=/tmp/ssh-cxNeNz8550/agent.8550
export SSH_AGENT_PID=8625

END EXAMPLE

For it to work, you need to identify a process that you know already has the variables set.  That's a little beyond the scope of this post, but it's a safe bet that any other shell process you own (e.g. bash) might have them.  In the above example, process 21527 was an instance of bash running on my Gnome desktop.
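One way to find a candidate, assuming pgrep is available (any of the PIDs it prints is worth a try):

BEGIN EXAMPLE

# List the PIDs of bash processes owned by the current user.
pgrep -u `whoami` bash

END EXAMPLE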

BEGIN copysshenv

#!/bin/sh

# Use the PID given on the command line; fall back to this
# script's own PID if none was given.
PID=$1;
if [ -z "$PID" ];
then
        PID="$$";
fi

# /proc/<pid>/environ is a NUL-delimited list of KEY=value strings:
# turn the NULs into newlines, keep only the SSH-related lines, and
# prepend "export " so the output can be sourced by a shell.
sed -e 's/\x00/\n/g' < "/proc/$PID/environ" | awk '/SSH/ { print "export " $1}'

END copysshenv

Without any arguments, copysshenv checks its own environment for the variables, which is only useful if you are running it in a shell that already has the right variables.  (And if that were our only requirement, we could have done it like this: env | grep SSH .)  The business with setting PID to $$ makes that happen.

The last line is the main one.  It uses sed to read the contents of /proc/<proc id>/environ, which is a null-delimited group of strings, each one a key=value pair, one for each environment variable.  The 's/\x00/\n/g' replaces all the nulls with newlines.  (Effectively, this re-implements the env command, but for any process, not just the current one; there's an aside at the end of this section on an even shorter way to do that part with tr.)  awk then filters for only the variables relevant to SSH, and the print command outputs each one with "export " prepended, making the output appropriate for "source"ing, like this:

BEGIN EXAMPLE
jdimpson@artoo:~/bin$ `copysshenv  21527`
END EXAMPLE
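(If the bare backticks look too magical, eval `copysshenv 21527` is equivalent and a little more explicit about what's going on: it runs copysshenv, then evaluates its output in the current shell.)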

Now if you check the current environment with env, you'll see the SSH variables have been set.
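For example (the values here just echo the earlier run):

BEGIN EXAMPLE
jdimpson@artoo:~/bin$ env | grep SSH
SSH_AUTH_SOCK=/tmp/ssh-cxNeNz8550/agent.8550
SSH_AGENT_PID=8625
END EXAMPLE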

Note that since /proc/<process id>/environ is owned by the user running the process, you can only get access to the environment of processes that you own.  Similarly, ssh-agent remains secure because the SSH_AUTH_SOCK file is also owned by the user running the ssh-agent process.  So you can't get access to someone else's SSH keys via this method, unless you are root, or the user has foolishly opened up permissions on both those files.
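One last aside: the sed stage above can be written even more directly with tr, which is happy to translate NUL bytes (this is a sketch of just the env-for-any-process trick, without the SSH filtering or the export prefix):

BEGIN EXAMPLE

# Print the full environment of process $PID, one variable per line.
tr '\0' '\n' < /proc/$PID/environ

END EXAMPLE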

dllargefile: Resumable rsync of large files

Before today the timestamp for dllargefile was 2006-01-13, but in preparing this article I've modified it to remove some hard coding.  This script is a wrapper around rsync to handle the download of large files.  It is specially designed to efficiently resume the download after being interrupted.  I used it to download some 1-2 GB movies from a friend's computer through RoadRunner cable internet service.

BEGIN dllargefile
#!/bin/bash

# Bandwidth cap handed to rsync, in KBytes per second.
BWLIMIT=20

function dllargefile {
        local FILE SEDFILE HOST QFILE;
        FILE=$1;
        # Split host:file on the FIRST colon, so that the remote
        # filename may itself contain colons.
        HOST=`echo "$FILE" | sed -e 's/:.*//'`;
        FILE=`echo "$FILE" | sed -e 's/^[^:]*://'`;
        SEDFILE=/tmp/dllargefile-sedfile
        if [ -z "$FILE" ]
        then
                echo "usage: $0 hostname:filename [hostname:filename ...]"
                exit 1
        fi

        # Backslash-escape spaces and apostrophes in the remote
        # filename; the remote shell strips the escapes back off.
        echo "s/\([ ']\)/\\\\\1/g" > "$SEDFILE";
        QFILE=`echo "$FILE" | sed -f "$SEDFILE"`
        rm "$SEDFILE";
        # rsync can leave an interrupted download read-only; make it
        # writable again so the transfer can resume in place.
        if [ -e "$FILE" ];
        then
                chmod u+rw "$FILE";
        fi

        rsync -v --progress --partial --bwlimit=$BWLIMIT --inplace "$HOST":"$QFILE" .
}

# Handle multiple host:file arguments, one at a time.
while [ $# -gt 0 ]; do
        F="$1";
        shift;
        dllargefile "$F";
done
exit 0;


END dllargefile 

EXAMPLE: dllargefile "impson.tzo.com:Star Wars: The Legacy Revealed.avi"

Let's see... It rate limits the download to 20 KB/s (rsync's --bwlimit is in kilobytes per second, not kilobits), on the assumption that if the file is so big that it will take so long to download, you're resigned to letting it run overnight (with auto-resume after interruption), and in the meantime you'd rather not use up all your bandwidth allocation.
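For scale: 20 KB/s works out to about 72 MB per hour, so a 1 GB file takes roughly 14 hours and a 2 GB file about twice that.  Overnight, give or take.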

You invoke it similarly to how you'd invoke rsync or scp, using the 'host:file' style to specify which file to get from which host.  Only you don't specify a destination; it assumes the current directory.

The yucky business with sed and the $QFILE variable is there to escape spaces and apostrophes, which show up a lot in my multimedia filenames and are difficult to escape correctly in rsync and scp (both hand the remote filename to a shell on the far side, so the escaping has to survive that trip).
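To see what that buys you, here's the same sed expression run by hand against the example filename from above (the doubled-up backslashes are eaten first by the shell, then by sed):

BEGIN EXAMPLE

jdimpson@artoo:~/bin$ echo "Star Wars: The Legacy Revealed.avi" | sed -e "s/\([ ']\)/\\\\\1/g"
Star\ Wars:\ The\ Legacy\ Revealed.avi

END EXAMPLE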

The downloading file is made writable if it already exists because rsync usually marks a file read-only until it's fully downloaded (after which it restores the original file permissions).  This matters should the download get interrupted and restarted.  The --partial flag tells rsync to leave the incomplete file in place if the download gets interrupted.  Finally, the --inplace flag tells rsync not to use a temporary file for the downloading file.  These three things (the chmod and the two rsync flags) let dllargefile resume after an interruption.

Something it doesn't do, but could, is detect interruptions and try again (maybe after a delay) until the download succeeds.
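If you wanted that, a minimal sketch is to loop on rsync's exit status (untested; inside the script the function's final rsync line would become something like this, and the usage check's exit would want to become a return):

BEGIN EXAMPLE

# Keep retrying until rsync exits successfully, pausing between attempts.
until rsync -v --progress --partial --bwlimit=$BWLIMIT --inplace "$HOST":"$QFILE" .
do
        echo "transfer interrupted; retrying in 60 seconds..." >&2
        sleep 60
done

END EXAMPLE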

The while loop at the bottom handles multiple host:file arguments on the command line.