Tags: scripts

sslrsh: Remote Shell over SSL using certificate authentication

sslrsh, which stands for SSL Remote Shell, allows you to log in to a remote system over an SSL connection, using X.509 certificates for encryption and for authentication. It's similar to SSH. sslrsh is a shell script. Most of the heavy lifting in the script is done by socat. The same script can run as both client and server.

This script signals a return to my favorite subject, tunneling. My last discussion on this subject got a bit out of hand. My last useful discussion on this subject was based on SSH, and was unique in that it worked without needing root privileges on the remote side of the tunnel. sslrsh is not actually a tunneling tool. It's a remote shell tool. But it's a good introduction for future posts that will use some of these same tools to set up VPN-style tunnels. Before I wrap up this trip through memory lane and get to the point, I want to remind you about mini_ca. We'll be needing some certificate action for this script, and for that we need a Certificate Authority. You can use mini_ca to generate the needed certificates, or you can be difficult and get them some other way.

Here's the usage & license statement:

BEGIN USAGE

Usage: sslrsh [-h]
sslrsh [-p port] [-P proxy:port] [-c /path/to/cert] [-a /path/to/cacert] remotehost
sslrsh -s [-p port] [-c /path/to/certificate] [-a /path/to/cacertificate] [ -e shell commands to execute ]

-h This helpful information.
-p Port to use (currently 1479).
-P CONNECT proxy server to use. http_proxy environment variable will be used if set, but will be overridden by this flag.
-c Path to the client or server certificate (must include key)
(currently "sslrsh-client.pem" for client, "sslrsh-server.pem" for server)
-a Path to the signing Certificate Authority's certificate
(currently "sslrsh-cacert.pem")
remotehost System to connect to (client mode)
-s Listen for connections (server mode)
-e Shell command or commands to execute as the server, defaults to "echo Welcome to sslrsh on argentina; /bin/bash --login -i"

sslrsh is copyright 2009 by Jeremy D. Impson <jdimpson@acm.org>.
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file except in compliance with the License. You may
obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
END USAGE

sslrsh needs three files: a Certificate Authority (CA) certificate, a server certificate, and a client certificate. The client and server certificates must also have their private keys embedded within them. The CA certificate must have issued both the server and client certificates. Actually, to be precise: the server instance of sslrsh needs the CA certificate that signed the client's certificate, and the client instance needs the CA certificate that signed the server's certificate. Got that? Good.

You can specify the CA certificate with "-a", and the server/client certificate with "-c". The default port is 1479, which can be changed with "-p". "-P" lets you specify a web CONNECT proxy for the client. Use "-s" to run as the server, or supply a remote host destination to run as a client. The server runs a bash shell by default, but you can change what it runs with the "-e" flag. NOTE NOTE NOTE: the value specified with "-e" is passed to a system() call, which can have severe security repercussions, especially if your own script that calls sslrsh passes unverified data as the value of "-e". (Hmm. A Taint Mode for the shell would be pretty cool.) Finally, "-h" prints the help/usage and license statement.

Here's an example. It runs the server on host "artoo", and the client on host "argentina", connecting to "artoo" using sslrsh:

BEGIN EXAMPLE
On the server

jdimpson@artoo:~$ sslrsh -s
Listening on "1479"
And on the client

jdimpson@argentina:~$ sslrsh artoo
Connecting to "artoo:1479"
welcome to sslrsh
jdimpson@artoo:~$
END EXAMPLE

Here's the code:

BEGIN CODE
[full script listing omitted here; use the download link below]
END CODE

You can download sslrsh here: http://impson.tzo.com/~jdimpson/bin/sslrsh.

Let's dive in. After setting default values, processing the command line, and providing a usage and license statement, sslrsh gets its certificate material in order. It makes sure the CA certificate is readable, and does the same for the client or server combined certificate-and-key file (depending on which mode it runs in). By default, it looks for files in the current directory, with names that just happen to match the file names you'd get if you followed the directions below to create them with mini_ca. (What a happy coincidence.) Otherwise, you can use the "-a" flag to point sslrsh at the CA cert you want. Similarly, the client or server cert can be set with "-c".

Then it figures out what kind of proxying, if any, should be done. If the http_proxy environment variable is set, it will be used; if the user also specified the "-P" flag, that value overrides it. Whichever source the proxy setting comes from, it gets scrubbed and parsed: first, any URL-related text (e.g. "http://") is removed; then, if it matches the form "server:port", the port is stripped off and assigned to a separate variable.
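
In shell, that scrubbing boils down to a few parameter expansions. Here's a minimal sketch; the variable names are my own, not necessarily the ones sslrsh uses:

BEGIN CODE
# A sketch of the proxy scrubbing logic; variable names are assumptions.
proxy="${PROXYFLAG:-$http_proxy}";   # "-P" value wins over http_proxy
proxy="${proxy#http://}";            # strip any URL scheme
proxy="${proxy%%/*}";                # strip any trailing slash or path
case "$proxy" in
    *:*) proxyport="${proxy##*:}"; proxy="${proxy%:*}" ;;
    *)   proxyport=8080 ;;           # assumed default CONNECT port
esac
END CODE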

With all that out of the way, the script determines whether it's meant to run as a client or a server. As a client, it first checks the proxy settings. If a proxy is configured, the script forks (via the ampersand) an instance of socat that listens on a local TCP port and forwards anything sent to that port on to the specified web CONNECT proxy, telling the proxy to redirect the connection to the final destination given on the command line. This is effectively a proxy to the proxy, because socat's SSL functionality doesn't know how to talk to a web proxy directly.
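
Sketched in socat terms, that fork looks something like this. The variable names are assumptions; the local port 8888 matches the "localhost:8888" you'll see in the proxied example output later, but is otherwise arbitrary:

BEGIN CODE
# Hypothetical proxy-to-the-proxy: listen locally, and for each connection
# ask the CONNECT proxy to open a tunnel to the real destination.
socat TCP-LISTEN:8888,reuseaddr,bind=localhost,fork \
     "PROXY:$proxy:$remotehost:$port,proxyport=$proxyport" &
END CODE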

Then the script executes socat, connecting standard input to an SSL socket. The "-,raw,echo=0" argument to socat says: use standard input, turn off all the processing that the TTY layer normally does, and don't echo the user's keystrokes back to the user. This is important because we want the server side to receive everything we type, and to present us with everything that appears on the screen. The argument that starts with "SSL:..." controls the SSL connection. If no proxy was configured, the SSL connection goes straight to the final destination given on the command line. If there is a proxy, it connects instead to the listening port of the socat instance described above. Either way, the SSL connection uses both the CA certificate (to authenticate the server) and the client certificate (to present to the server). The rest of the argument uses a number of options to control the SSL connection. The "-ls" and "-lp" arguments control how socat performs logging.
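
Put together, the no-proxy client invocation is along these lines (a sketch with assumed variable names, not the script's exact text):

BEGIN CODE
# Client: raw, non-echoing terminal on one side, SSL connection on the other.
socat -ls -lp sslrsh \
     -,raw,echo=0 \
     "SSL:$remotehost:$port,cert=$clientcert,cafile=$cacert"
END CODE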

It's unfortunate that we must run a second instance of socat just to perform this proxying; it would be preferable if the SSL capability within socat could use the proxy directly. Apparently socat version 2.0 will rectify this situation.

If running as a server, sslrsh again executes socat, using it to create a listening SSL socket, then fork and exec a shell command, which by default prints a welcome message and runs an instance of the bash shell. The argument to socat that begins "SSL-L:..." tells it to listen for incoming SSL connections (from the client). Every new connection causes the process to fork and run a shell command, as described in the argument that starts "SYSTEM:...". The rest of that argument has a bunch of options ("pty,setsid,setpgid,ctty") that are some Unix/POSIX voodoo necessary to give the client an interactive shell with its own TTY and job control, while "stderr" makes sure error output from the shell command gets sent to the client. The "-ls" and "-lp" arguments control how socat performs logging, and the two "-d" flags increase the verbosity of that logging.
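
In rough form, again with assumed variable names, the server invocation looks like this:

BEGIN CODE
# Server: listen for SSL connections, fork per client, run a shell on a PTY.
# $command is the "-e" value; note socat treats commas as option separators,
# so this sketch assumes the command contains none.
socat -d -d -ls -lp sslrsh \
     "SSL-L:$port,reuseaddr,fork,cert=$servercert,cafile=$cacert" \
     "SYSTEM:$command,pty,setsid,setpgid,ctty,stderr"
END CODE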

Although I think sslrsh is a neat script, and it has its uses as a lightweight and customizable remote access server, be aware of its limitations. It doesn't do tunneling/port forwarding like SSH does. It may violate the access policy of the system you run the server on (if you aren't its administrator). And it doesn't have robust error checking, especially in the setup of the proxy process, so it's not an easily supportable, enterprise-quality service.

Also, it doesn't matter which client certificate you use to connect to the server: the server will authenticate you as long as the CA issued the client certificate. There's no differentiation between clients. A "normal" SSL application would actually read the certificate's contents after validating it; we're not doing that here. It would be nice if the contents of the validated certificate were made available to our server. One way to do this would be an option to the SSL function that runs a script after a cert is validated, feeding it the certificate on input or in the environment; the script would return true or false to say whether the certificate should be accepted. Or, if the SSL function placed the validated certificate, or even just its "Subject" line, into an environment variable, our shell (as specified by "-e") could use it to make decisions.

But probably the biggest limitation is that, by default, when you log in to the sslrsh server, the shell you get runs as the user who started the server. Here's an alternative way to run the server which makes it prompt for username and password (in addition to the certificate-based authentication). For this to work, however, the server has to run as root. It works by replacing the call to the bash shell with a call to the login program: login prompts for username and password, checks them against the server system's password mechanism, then uses setuid() to become whatever user was provided.

BEGIN EXAMPLE
On the server

jdimpson@artoo:~$ sudo ./sslrsh -s -e "/bin/login"
Listening on "1479"
On the client, you can see the change

jdimpson@argentina:~$ sslrsh artoo
Connecting to "artoo:1479"
via "localhost:8888" proxy
artoo login: sysadmin
Password:
Last login: Wed Nov 26 09:34:29 EST 2008 from argentina.apt.net on pts/8
Linux artoo 2.6.24-22-generic #1 SMP Mon Nov 24 18:32:42 UTC 2008 i686

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To access official Ubuntu documentation, please visit:
http://help.ubuntu.com/
You have mail.
sysadmin@artoo:~$

END EXAMPLE

A note about socat: Its man page describes it like this:

socat - Multipurpose relay (SOcket CAT)

Socat is a command line based utility that establishes two bidirectional
byte streams and transfers data between them. Because the streams can be
constructed from a large set of different types of data sinks and sources
(see address types), and because lots of address options may be applied to
the streams, socat can be used for many different purposes. It might be
one of the tools that one 'has already needed'.

That's accurate, but doesn't really capture the scope of socat's capability. Just like its namesake cat, socat is all about connecting standard input to standard output. Unlike cat, which primarily operates on files and TTYs, socat can operate on (and create, if necessary) files, TTYs, TCP sockets, UDP sockets, raw sockets, Domain sockets, raw IP frames, child processes, named pipes, arbitrary file handles, and standard input, output, and error. Additionally, socat knows about a few application-level protocols like SSL, Web CONNECT Proxy, and SOCKS. It also knows about various network access methods, like IPv4, IPv6, IP Multicast, IP Broadcast, and datagram- and session-style transmit and receive. All of these communication mechanisms are called addresses, and socat has a rich set of options that apply to numerous address types, such as setting device-specific ioctl()s, IP headers, socket options, and security ciphers, and performing functions like fork(), chroot(), and setuid() in order to get various security and performance behaviours.
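
A few one-liners give the flavor; these are generic illustrations of socat addresses (host names and ports are made up), not anything sslrsh does:

BEGIN EXAMPLE
socat TCP-LISTEN:2323,reuseaddr,fork EXEC:/bin/cat   # trivial TCP echo server
socat - UDP:somehost:5555                            # stdin/stdout over UDP
socat PTY,link=/tmp/fakeserial,raw EXEC:/bin/sh      # a shell behind a fake serial device
END EXAMPLE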

Folks who know about netcat might wonder how it compares to socat. Given socat, you don't need netcat, although netcat is a lot simpler to use if all you need is basic TCP or UDP streaming. Personally, I plan on keeping netcat in my own personal toolbox, if only because I can bang out a netcat command line without any conscious thought.

A note on certificates: As mentioned above, the client and server need certificates & keys, and they both need access to a Certificate Authority certificate in order to validate each other's certificates. Here's how to use mini_ca to create the necessary certs. (Follow the link to find out where to download mini_ca.)

Create the Certificate Authority, and the CA certificate, like this:

BEGIN EXAMPLE
[mini_ca invocation omitted]
END EXAMPLE

You'll find the CA certificate at "sslrsh-ca/sslrsh-cacert.pem".

Use the CA to create a server certificate/key.

BEGIN EXAMPLE
[mini_ca invocation omitted]
END EXAMPLE

You'll find the server certificate/key at "sslrsh-ca/certs/sslrsh-server.pem".

Use the CA to create a client certificate/key.

BEGIN EXAMPLE
[mini_ca invocation omitted]
END EXAMPLE

You'll find the client certificate/key at "sslrsh-ca/certs/sslrsh-client.pem".
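
If you'd rather skip mini_ca, you can produce roughly equivalent files with openssl directly. The following is a sketch (subject names and lifetimes are arbitrary); the key point is that each end gets a combined cert-plus-key file, both signed by the same CA:

BEGIN EXAMPLE
# CA key and self-signed CA certificate
openssl req -new -x509 -days 365 -nodes -subj "/CN=sslrsh CA" \
    -keyout sslrsh-cakey.pem -out sslrsh-cacert.pem
# Server: key and CSR, sign with the CA, then bundle cert and key together
openssl req -new -nodes -subj "/CN=server" \
    -keyout server-key.pem -out server.csr
openssl x509 -req -in server.csr -CA sslrsh-cacert.pem -CAkey sslrsh-cakey.pem \
    -CAcreateserial -days 365 -out server-cert.pem
cat server-cert.pem server-key.pem > sslrsh-server.pem
# Client: same recipe
openssl req -new -nodes -subj "/CN=client" \
    -keyout client-key.pem -out client.csr
openssl x509 -req -in client.csr -CA sslrsh-cacert.pem -CAkey sslrsh-cakey.pem \
    -CAcreateserial -days 365 -out client-cert.pem
cat client-cert.pem client-key.pem > sslrsh-client.pem
END EXAMPLE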

Using flock to protect critical sections in shell scripts

This isn't about a shell script; it's about a really cool technique to apply in shell scripts. Have you ever worried that multiple instances of a shell script running at once might overwrite or corrupt the data or devices they work on? Here's a way to prevent that.

There's a Unix system call named flock(2). It's used to apply advisory locks to open files. Without exhausting the subject, it can be used to synchronize access to resources across multiple running processes. Note that I said access to resources, not just access to files. While flock(2) does solely act on files (actually, on file handles), the file itself need not be the resource to which access is being controlled. Instead, the file can be used as a semaphore to control access to a critical section, in which any resource can be accessed without concurrency concerns. (Howdja like that pair of sentences?) Note that flock(2) performs advisory locking, which is another way of saying that all parties accessing the resource in question have to agree to abide by the locking protocol in order for it to work. That's still useful to us. flock(2) is used in this manner to protect critical sections in lots of executable programs.

It turns out that, in addition to the flock(2) system call, there is also a flock(1) command line tool. It's a simple wrapper around the flock(2) system call, making it accessible to shell scripts. It's part of the util-linux-ng package.  You can certainly use it to restrict write access to files, a reasonable thing to do in some shell scripts, but there's a more general technique to be had: flock(1) can be used as a semaphore!
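
In its simplest form, flock(1) takes a lock file and a command, and runs the command under the lock. For instance, to serialize appends to a shared log (file names here are just illustrative):

BEGIN EXAMPLE
flock /tmp/mylog.lock -c 'date >> /tmp/mylog'
END EXAMPLE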

I'll start with the general form, discuss a couple alternative forms, and then give some practical examples. Here's the general form, which is good for serializing access to a resource. It creates a queue, such that each process waits its turn to utilize the resource:

BEGIN GENERAL FORM
#!/bin/sh

ME=`basename "$0"`;
LCK="/tmp/${ME}.LCK";
exec 8>$LCK;

flock -x 8;
echo "I'm in ($$)";
sleep 20; # XXX: Do something interesting here.
echo "I'm done ($$)";
END GENERAL FORM

Everything after the call to flock is the critical section, where you can operate on whatever resource that you need to control access to.

You may be wondering about the use of "exec". Normally "exec" in a shell script is used to turn over control of the script to some other program. But it can also be used to open a file and name a file handle for it. Normally, every script has standard input (file handle 0), standard output (file handle 1), and standard error (file handle 2) opened for it. The call "exec 8>$LCK" opens the file named in $LCK for writing, and assigns it file handle 8. I picked 8 arbitrarily. The call "flock -x 8" tells flock to exclusively lock the file referenced by file handle 8. The locked state lasts after the flock call, because the file handle is still valid (think of it as still in scope). It will last until the file handle is closed, typically when the script exits.
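
You can watch this file-handle behaviour at an interactive prompt; a quick demonstration (the file name is arbitrary):

BEGIN EXAMPLE
exec 8>/tmp/demo.LCK;    # open /tmp/demo.LCK for writing as file handle 8
echo "hello" >&8;        # write to the file through the handle
ls -l /proc/$$/fd/8;     # on Linux, see the handle pointing at the file
END EXAMPLE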

You can see the locking in action if you run this script twice, ensuring that the second one is started before the first one finishes its call to sleep. I do this in the following example by running the script (called flocktest0) once in the background (using "&"), then immediately running it again. Because the script sleeps for 20 seconds, the second call starts before the first one is done (and before it has given up the lock). The output is jumbled because the first call, though backgrounded, still prints to the terminal, interfering with the shell's own output.

BEGIN EXAMPLE
jdimpson@argentina:~$ flocktest0 &
[1] 13978
jdimpson@argentina:~$ I'm in (13978)
flocktest0
I'm done (13978)
I'm in (13982)
I'm done (13982)
[1]+ Done flocktest0
jdimpson@argentina:~$
END EXAMPLE

Notice that the second call to flocktest0 doesn't say "I'm in (...)" until after the first call to flocktest0 says "I'm done (...)", even though the second call was started before the first call was finished.

So you can imagine a real script doing something interesting in the critical section rather than "sleep 20", and you can be sure that only one call to that script is doing its thing at a time, even if it's invoked several times in a row, and each call will eventually get its turn.

Whereas the general form above serializes parallel access to some resource by creating a queue, this alternate form is for when you want only one process at a time to access a resource (or perform some function), but you don't want to create a queue of processes. Instead, subsequent invocations exit (or do something else) rather than queue up. If the first invocation exits, then the next invocation will be allowed to run. Here's the alternate form.

BEGIN ALTERNATE FORM
#!/bin/sh

ME=`basename "$0"`;
LCK="/tmp/${ME}.LCK";
exec 8>$LCK;

if flock -n -x 8; then
    echo "I'm in ($$)";
    sleep 20; # XXX: Do something interesting here.
    echo "I'm done ($$)";
else
    echo "I'm rejected ($$)";
fi
END ALTERNATE FORM

The primary difference is the "-n" flag to flock, which tells it not to block, but to fail immediately with a nonzero exit value. It's put in an if statement, which does the "interesting work" (the call to sleep, in this example) in the true clause, and reports that it can't do the work in the false clause.

And here's what happens when I invoke it four times in rapid succession, then a fifth time after waiting 20 seconds:

BEGIN EXAMPLE
jdimpson@argentina:~$ flocktest1 &
[1] 14644
jdimpson@argentina:~$ I'm in (14644)
flocktest1
I'm rejected (14648)
jdimpson@argentina:~$ flocktest1
I'm rejected (14651)
jdimpson@argentina:~$ flocktest1
I'm rejected (14654)
jdimpson@argentina:~$ I'm done (14644)
flocktest1
I'm in (14657)
I'm done (14657)
END EXAMPLE

Again, you can imagine a real script that does something more interesting than sleep, which benefits from the fact that only one invocation actually does anything, while every other invocation just exits. Or you could create a reliable modal function by having the false clause send a message (or call "kill") to the first invocation to make it shut down: calling the script once starts the function, and calling it again stops it, without losing track of state.
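
Here's a sketch of that modal idea; the PID file is my own addition, needed so the second invocation can find the first:

BEGIN CODE
#!/bin/sh
# Hypothetical modal on/off switch: first call starts, second call stops.
ME=`basename "$0"`;
LCK="/tmp/${ME}.LCK";
PIDF="/tmp/${ME}.PID";
exec 8>$LCK;

if flock -n -x 8; then
    echo $$ > "$PIDF";   # record our PID so the "off" path can find us
    echo "I'm in ($$)";
    sleep 20;            # XXX: Do something interesting here.
    echo "I'm done ($$)";
else
    kill `cat "$PIDF"`;  # "off": shut down the running instance
    echo "I stopped the running instance ($$)";
fi
END CODE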

OK, maybe you need help imagining these things. Here are two realistic examples, one for the general form, and one for the alternate. Both examples center around my use of MythTV, specifically, around the mythfrontend program. MythTV is an open source PVR/DVR application, and mythfrontend is the component of MythTV that plays the videos, and with which the user directly interacts.

MythTV also comes with a simple command line tool called mythtvosd, which can be used to send messages to mythfrontend, which will write the message to the screen by overlaying it on the video being played. I decided it would be cool to display the sender and subject of all email I receive. I already use procmail to process my incoming email, so it was easy to insert one procmail rule that strips out the sender and the subject and calls mythtvosd with that information, so I can see it on my TV. It's kind of like biff, for those of you who really know your historic Unix applications.

Trouble is, I tend to get two or three email messages at a time, because I use IMAP to download email from my mail server via a cron job. procmail and mythtvosd can process all three messages faster than mythfrontend can scroll the sender and subject strings across the screen. So if procmail calls mythtvosd three times in rapid succession, I only see the results for the last email on the screen (because each call to mythtvosd effectively cancels the previous one). So I used the general form to create a queue, ensuring that all three emails get scrolled across my screen.

The relevant part of .procmailrc to invoke the following code is

:0c
| $HOME/bin/mythemail

And mythemail looks like this:

BEGIN CODE
#!/bin/sh
ME=`basename "$0"`;
(
     exec 8>/tmp/$ME.LCK;
     flock -x 8;            # wait our turn in the queue
     mythtvosd --template=scroller --scroll_text="mail from $FROM, regarding $SUBJ"
     sleep 10;              # give the text time to scroll before the next message
) &
END CODE

The code that sets the values of $FROM and $SUBJ has been removed; it's a complicated hack that doesn't do anything to make my point about flock.

So that I don't create a long queue that backs up all my email just to display notifications on my TV, I use process control ("&" again) to background all the processes blocked on the lock file waiting their turn.

The "sleep 10" is needed because there's no way to know when the text has finished its scroll across the screen, but 10 seconds works well for me. It's actually not long enough for very long from/subject strings (and/or very wide screens), but it's enough to give a sense of how many emails have been received.

The other example has to do with my new Logitech LX710 keyboard, which has lots and lots of extra buttons for playing music and starting email clients and so forth (although half the buttons don't work under X/GNOME--showkey sees them, but xev does not). I mapped some of the buttons to control mythfrontend: one button activates mythfrontend, one pauses the video, and others fast-forward and rewind through it. This time the trouble is when I accidentally press the activate button more than once. Repeated presses would cause mythfrontend to start multiple times, which isn't what I want: once it's on, I don't want it to turn on again.

So I used the alternate form to make that happen. On my system, mythfrontend is already a shell script which eventually calls mythfrontend.real, so I only had to add the following code snippet to the top of the script:

BEGIN CODE
#!/bin/sh

ME=`basename "$0"`;
LOCK="/tmp/${ME}.LCK";
exec 8>$LOCK;

if flock -n -e 8; then :
else
    echo "Can't get file lock, mythfrontend already running";
    exit 1;
fi

# XXX: rest of mythfrontend...
END CODE

I left the true clause blank (just a colon), and put an exit in the false clause. This let me keep all the lock handling stuff at the top, and makes it very easy to insert into the beginning of a shell script, which is nice because I'm inserting this into a script maintained by someone else, so I'll have to re-insert it during every upgrade. I could have created my own script to contain the locking code, then have it call mythfrontend, but then anyone/anything that calls mythfrontend directly wouldn't go through the locking code, and my scheme wouldn't work.
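
For comparison, the wrapper approach I decided against would look something like this (the path to mythfrontend.real is an assumption):

BEGIN CODE
#!/bin/sh
# Hypothetical wrapper: take the lock, then hand control to the real program.
# The lock survives the exec because file handle 8 stays open in the new process.
LOCK="/tmp/mythfrontend.LCK";
exec 8>$LOCK;
flock -n -e 8 || { echo "Can't get file lock, mythfrontend already running"; exit 1; };
exec /usr/bin/mythfrontend.real "$@";
END CODE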

We could make this script even more useful by doing something more interesting in the false clause than just exiting. If I wanted to make my keyboard button work like a modal on/off switch, I would have the false clause shut down the running instance of mythfrontend. However, I don't want that, because my initial problem was that I was pressing the activate button by mistake, and I wouldn't want to interrupt the video accidentally.

Some notes and limitations:

In the general form, there's no guarantee of the order in which the processes queuing up to access the resource will be served. It's probably a function of the scheduling algorithm used on your system, but it will also be affected by how long each process holds the lock. Starvation is a possibility. I should probably mention that these locking techniques aren't intended to scale to high demand or to long-running processes. Use an enterprise-quality software framework for that sort of thing. These techniques, like all shell scripts (in my personal scripting philosophy), are for short-lived or infrequently demanded tasks.

The locked file remains locked until it is explicitly unlocked, or until the script holding the lock closes the file handle. "flock -u N" will explicitly unlock file handle N. Also, all shell scripts (indeed, all processes) close any file handles that remain open when they exit. Finally, a script can explicitly close file handle N with "exec N>&-". I tend to design my scripts so that there's no need to explicitly close file handles or perform the unlock call, for reasons similar to the scalability concerns in the previous paragraph.
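
If you do need to release a lock early, either of these works (continuing with file handle 8 from the examples above):

BEGIN EXAMPLE
flock -u 8;     # explicitly unlock file handle 8, leaving it open
exec 8>&-;      # or close handle 8 entirely, which also drops the lock
END EXAMPLE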

While I prefer using "exec" to create and name lock file handles, another alternative is to use subshells. Instead of

exec 8>lockfile;
flock -x 8;
# XXX: do interesting work here

you can do

(
    flock -x 8;
    # XXX: do interesting work here
) 8>lockfile;

I prefer the former, because it has less impact on the overall structure of your script and it keeps most of the lock file handling code in one place. Subshells also do funny things with variable scope, so I don't use them unless I need them. I'm not sure which one is easier to understand; they both use esoteric behaviours of the shell, specifically how you work with file handles.

For reasons I don't understand, at least with bash version 3.2.39 as compiled for Ubuntu 8.04.1, you're limited to single-digit file handle numbers. You should be able to get up to 255, but I get various errors when I use anything greater than 9. While there are a few command line tools that know about file handles beyond stdin, stdout, and stderr, they are rare, and knowledge of how to use file handles in the shell is rarer still, so running out of file handle numbers shouldn't be a problem.

The flock(2) system call has a known limitation: it isn't guaranteed to work over remote-mounted Network File Systems (NFS). The flock(1) command, being a wrapper around the system call, inherits this limitation if it's present on your system. There's another system call, fcntl(2), whose locks do work over NFS. Unfortunately, I don't know of any command line way to utilize fcntl(2).

fcntl(2) also has a mandatory locking capability. But be aware that if you were to use mandatory file locking as a semaphore to control access to another, arbitrary resource, the mandatory quality does not transfer to that resource, for the simple reason that the resource can still be accessed by a rogue process that chooses not to use the lock. It's a moot point at the moment, but something to keep in mind should someone write an fcntl(1) command line tool.

Finally, if a locked file gets deleted, subsequent lock attempts will succeed even if something is holding an old lock. So if you're protecting access to an arbitrary resource, be aware of the access control permissions of the lock file. You don't want anyone to delete the file from under you.

Philosophy: What is a script?

To me, scripting is a verb, and I define it as programming with the intention of automating a task that can be done manually, rather than with the intention of developing either a new unit of functionality or a new monolithic application.  These ideas come from the Unix Toolbox Approach, the subject of another article (hopefully).

So a script is the result of scripting as defined above, rather than the more common definition of some code written in a scripting language.  Languages like Perl, Python, and Visual Basic are often thought of as scripting languages, and are sometimes looked down upon because of this.  These languages certainly are very conducive to scripting, because they excel at gluing other applications together, are loosely typed, and in general have a low barrier to entry.  These qualities are essential to scripting, but one can (and many do) build first-class applications in these languages.  Whether you have a script or an application depends on the intent and discipline of the programmer; the language selection itself is a result of the programmer's intent and discipline.  That's why there's a correlation between scripting tasks and specific languages.

Often, the word Interpreted gets used as part of the scripting definition, with similar scorn.  I get really annoyed by this because, at some level, all computer instructions are interpreted by something.  Whether that something is hardware or software seems irrelevant and increasingly uncertain.  With modern CPUs doing so much emulation these days, even traditional binary opcodes are interpreted as a matter of course.  Interpreted vs Executed is no longer a significant distinction.  Java is a good example of something that is interpreted (by its Virtual Machine).  But at one time Sun tried to sell hardware that interpreted the Java bytecode as the native instruction set of the hardware.  (There still needs to be a VM to handle things like Garbage Collection.)  That ship seems to have sailed due to the realities of the current CPU market, but technically it worked.  Lisp machine vendors like Symbolics and Lisp Machines Inc. built businesses selling machines that executed Lisp directly on their processors.

Perl and Python, traditionally thought of as interpreted, are first compiled to a parse tree, not unlike Java compiled to bytecode, before being interpreted.  In the case of Perl, all the shenanigans going on with Perl 6 and Parrot capitalize on the VM approach, to the point where they seem to be bootstrapping an implementation of every historical version of Perl, plus Java, plus who knows what else, on several independent implementations of their Parrot VM.

My point, and I do have one, is that a script is a script because it is meant to automate something done by hand.  This isn't a sharp distinction, but it is a useful one.  Good scripts are usually generalized, which helps blur the distinction.  They are similar to Orchestrations, Choreography, and Workflow/BPEL languages (as far as I understand them, at least).