| arffviewer.sh |
usage: arffviewer.sh [-h]
starts the ArffViewer
-h this help |
| attributes.sh |
usage: attributes.sh -f <filename> | -d <dirname> [-o <filename>] [-t] [-h]
Retrieves the number of attributes from ARFF files
-h this help
-f <filename>
the file to check
-d <dirname>
checks all ARFF files in the given directory
default: ./scripts/../tmp/
-o <filename>
the name of the file to store the output in
-t print the counts for one dataset on a single line;
otherwise the count is printed for each file separately |
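As a rough illustration of what is being counted per file: in an ARFF file, the data lines follow the '@data' marker. A hand-rolled count over a made-up example file (attributes.sh presumably automates this across a directory) could look like:

```shell
# Count the non-empty lines after the '@data' marker of an ARFF file
# (the file and its contents here are examples only).
cat > /tmp/example.arff <<'EOF'
@relation demo
@attribute x numeric
@data
1.0
2.0
3.0
EOF
awk 'tolower($0) == "@data" {d=1; next} d && NF' /tmp/example.arff | wc -l
```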
| change_property.sh |
usage: change_property.sh -p <property> -v <newvalue> [-f <filename>] [-d <dir>]
[-b] [-h]
changes a specific property in all ANT files; if '-f <filename>' is
given, only the property in that file is changed
-h this help
-p <property>
the name of the property to change the value for
-v <newvalue>
the new value of the property
-f <filename>
if specified only this file is modified and not all
-d <dir>
the directory to look in for ANT files
default: ./scripts/../xml
-b performs a backup of the original file, i.e. creates a copy with
the extension '.original' |
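A minimal sketch of the kind of edit this performs, assuming the ANT files store properties as `<property name="..." value="..."/>` elements (the file name, property name, and new value below are examples only):

```shell
# Back up the file first, then rewrite the value of one named
# property in place; the '.original' extension mirrors -b above.
cat > /tmp/build.xml <<'EOF'
<project>
  <property name="db.host" value="localhost"/>
</project>
EOF
cp /tmp/build.xml /tmp/build.xml.original
sed -i 's/\(name="db.host" value="\)[^"]*/\1dbserver/' /tmp/build.xml
grep 'db.host' /tmp/build.xml
```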
| check_ant.sh |
usage: check_ant.sh [-h]
Performs some checks on the ANT XML files, e.g. whether the targets in
./scripts/../xml/database.xml also appear in the other files.
-h this help |
| client.sh |
usage: client.sh -H <hostname> -P <port> -p <port> -o <output> [-h]
-v <classnames> -J <Java-home> -A <ANT-home>
starts the JobClient - the default parameters are listed below.
-h this help
-p the port on which the client should listen (optional)
default: 31416
-H the hostname/IP of the server to connect to (optional)
default: localhost
-P the port on which the server listens (optional)
default: 31415
-o the directory where to store the logfiles
default: ./scripts/../tmp
-v classname(s) (comma separated list) for which to switch verbose
mode on (optional)
default:
-A the location of ANT (above 'bin')
default: /research/fracpete/programs/ant
-J the location of Java (above 'bin')
default: /research/fracpete/programs/jdk-1.5/ |
| copy_arff.sh |
usage: copy_arff.sh -s <src-dir> -d <dest-dir> -m <dir> -r <dir> -D -c [-h]
copies ARFF files to a destination directory (only the data files!)
and divides them into Multi-Instance and RELAGGS files.
-h this help
-s <src-dir>
the directory where the ARFF files are right now
default: ./scripts/../tmp
-d <dest-dir>
the directory where the ARFF files are to be stored
default: ./scripts/../tmp/arff
-m <dir>
the sub-directory for the Multi-Instance files
default: Multi-Instance
-r <dir>
the sub-directory for the RELAGGS files
default: RELAGGS
-D whether to delete the destination dir
default: no
-c compress the files with gzip |
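The copy-plus-compress step enabled by '-c' amounts to something like the following (the directories and file name are examples only):

```shell
# Copy a data file to the destination directory and gzip it there;
# 'gzip -f' replaces any existing .gz file without prompting.
mkdir -p /tmp/arff_src /tmp/arff_dest
echo '@relation demo' > /tmp/arff_src/a.arff
cp /tmp/arff_src/a.arff /tmp/arff_dest/
gzip -f /tmp/arff_dest/a.arff
ls /tmp/arff_dest
```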
| datasets.sh |
usage: datasets.sh -i <datasets> -v [-h]
traverses all README.DATA files and stores the data in the README file in
the 'datasets' directory (based on README.skel)
-h this help
-i <datasets>
the directory with the datasets
default: |
| documentation.sh |
usage: documentation.sh -o <output-dir> -t <type> -d [-h]
generates text or html documentation from all scripts and xml/help.xml
-h this help
-o <output-dir>
the directory where to store the documentation
(it is created automatically if it does not exist)
default: ./scripts/../doc
-t <type>
the type of output, either 'html' or 'text'
default: html
-d deletes the output dir first |
| dump.sh |
usage: dump.sh -o <output-dir> -H <hostname> -P <port> -u <user>
-p <password> [-d <database>] [-c] [-h]
Performs a database dump of a MySQL database
-h this help
-o <output-dir>
the directory where to store the dump files
default: ./scripts/../tmp
-H <hostname>
the host where the databases reside
default: localhost
-P <port>
the port the server listens on
default: 3306
-u <user>
the name of the user
default: peter
-p <password>
the password of the user
default: peter
-d <database>
a specific database to dump (empty means all!)
default:
-c create CSV files instead of SQL-statements |
| evaluate.sh |
usage: evaluate.sh -i <input-dir> -o <output-dir> -n <filename> [-h]
[-p] [-e] [-r] [-t]
creates CSV files and LaTeX tables (German/US-American) from the generated
ARFF stat files
-h this help
-i <input-dir>
the directory where the ARFF files are located
default: ./scripts/../tmp
-o <output-dir>
the directory where to store the CSV/LaTeX files
default: ./scripts/../tmp
-n <filename>
the filename of the output (CSV/LaTeX)
default: experiments
-p include parameters in file
-e check only for exceptions
-r check only for runtime
-t check only for treesize |
| filter.sh |
usage: filter.sh -f <filtertype> -l <logfile> -a <additional> [-h]
filters logfiles according to certain criteria
-h this help
-f <filtertype>
Current type(s):
sql filters Create/Join/Alter/Drop statements
host filters all entries from the given host
the hostname has to be provided with '-a'
grep does a simple grep for the string provided
with '-a'
-l <logfile>
the logfile to use
-a <additional>
additional parameters, depending on the filtertype |
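A guess at what the 'sql' filter type matches, implemented directly with grep (the actual pattern used by filter.sh may differ; the log lines are made up):

```shell
# Keep only lines that start a Create/Join/Alter/Drop statement,
# case-insensitively.
printf '%s\n' \
  'CREATE TABLE foo (id INT)' \
  'SELECT * FROM foo' \
  'DROP TABLE foo' \
  | grep -Ei '^(create|join|alter|drop)'
```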
| foreign_keys.sh |
usage: foreign_keys.sh -F|-N [-b] [-h]
Sets the discovery of relations to be based either on foreign keys or
on column names.
-h this help
-F use foreign keys for discovery
-N use column names for discovery
-b backup the files |
| info_client.sh |
usage: info_client.sh -o <filename> [-h]
Retrieves information about the client and stores it in a text file
-h this help
-o the file where to store the info
default: ./scripts/../tmp/info_client.txt |
| info_clients.sh |
usage: info_clients.sh -H <host-file> -o <filename> [-h]
Retrieves information from clients
For ssh-agent see: http://mah.everybody.org/docs/ssh
The clients file is a simple text file with a hostname in each line.
E.g.:
somehost
anotherone
-h this help
-H the name of the file containing the hostnames for the clients
default: ./scripts/../tmp/clients
-o <filename>
the file where to store the output
default: ./scripts/../tmp/info_clients.txt |
| install.sh |
usage: install.sh -d <dest-dir> -H <db-host> -P <db-port> -u <db-user>
-p <db-password> -a -t <type> [-h]
performs the installation of Proper
-h this help
-d <dest-dir>
where to install Proper to
default: /home/fracpete/proper
-H <db-host>
the database host
default: localhost
-P <db-port>
the port the DB-Server listens on
default: 3306
-u <db-user>
the user for accessing the DB
default: fracpete
-p <db-password>
the password of the user
default: fracpete
-a performs automatic install without any interaction
-t <type>
the database type, can be either 'mysql' or 'postgresql'
default: mysql |
| jobs.sh |
usage: jobs.sh -d <dataset> -o <output-file> -H <host> -P <port> [-h]
greps the jobs from ANT XML files and writes them to a file
-h to display this help
-d <dataset>
the dataset(s) to get the jobs from (comma separated list)
default: ./scripts/../xml
-o <output-file>
the file to store the jobs in
default: ./scripts/../tmp/jobs
-H <hostname>
the hostname of the jdbc url for the database server
default: localhost
-P <port>
the port of the jdbc url for the database server
default: 3306 |
| mysql.sh |
usage: mysql.sh -H <hostname> -P <port> -u <user> -p <password> -b [-h]
changes the hostname, port, user and password for MySQL in the ANT files
-h this help
-H the hostname where the MySQL server resides
default: localhost
-P the port on which the MySQL server listens
default: 3306
-u the user with which to connect to the DB
default: peter
-p the password of the user
default: peter
-b backup the files |
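The substitution presumably targets the JDBC URL embedded in the ANT files; a sketch of that kind of rewrite on a made-up URL (the real file layout may differ):

```shell
# Rewrite host and port inside a MySQL JDBC URL in place.
echo 'jdbc:mysql://localhost:3306/proper' > /tmp/jdbc_url.txt
sed -i 's|//[^:/]*:[0-9]*/|//dbhost:3307/|' /tmp/jdbc_url.txt
cat /tmp/jdbc_url.txt
```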
| postgresql.sh |
usage: postgresql.sh -H <hostname> -P <port> -u <user> -p <password> -b [-h]
changes the hostname, port, user and password for PostgreSQL in the ANT files
-h this help
-H the hostname where the PostgreSQL server resides
default: localhost
-P the port on which the PostgreSQL server listens
default: 5432
-u the user with which to connect to the DB
default: peter
-p the password of the user
default: peter
-b backup the files |
| proper.sh |
usage: proper.sh [-c <classname>] [-p <parameter>] [-m <memory>] [-h]
starts the main GUI if no specific class is specified
-h this help
-c <classname>
the class to run instead of the main GUI (additional parameters
can be specified with '-p')
-p <parameters>
additional parameters for other classes besides the main GUI
-m <memory>
amount of memory for the VM, e.g. 800m |
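The '-m' value is presumably handed to the JVM as a heap limit. Sketched below with the command echoed rather than run; the jar and class names are placeholders, not part of Proper's documented interface:

```shell
# Show how '-m 800m' would typically end up on the java command line
# as the maximum heap size (-Xmx).
memory=800m
echo java -Xmx"$memory" -cp proper.jar some.MainClass
```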
| run_jobs.sh |
usage: run_jobs.sh -j <job-file> [-h]
Runs the jobs generated for the distributed experiments on a single
machine.
-h this help
-j the name of the jobfile to execute
default: ./scripts/../tmp/jobs |
| scripts.sh |
usage: scripts.sh -i <interpreter> -b [-h]
changes the interpreter of the scripts
-h this help
-i the interpreter to use in the script
default: /bin/bash
you use: /bin/bash
-b backup the files |
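"Changing the interpreter of the scripts" boils down to rewriting the shebang line; a sketch on a throwaway script (file name and interpreters are examples):

```shell
# Replace the first line (the '#!' interpreter line) of a script.
printf '#!/bin/sh\necho hi\n' > /tmp/demo_script.sh
sed -i '1s|^#!.*|#!/bin/bash|' /tmp/demo_script.sh
head -n 1 /tmp/demo_script.sh
```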
| server.sh |
usage: server.sh -j <filename> -r <filename> -p <port> -o <dir> [-h]
-v <classnames> -J <Java-home> -A <ANT-home>
starts the JobServer
-h this help
-j the filename of the jobs (optional)
default: ./scripts/../tmp/jobs
-r the file where to store the results (optional)
default: ./scripts/../tmp/results.31415
-o the directory to store received files in
default: ./scripts/../tmp
-p the port on which the server should listen (optional)
default: 31415
-v classname(s) (comma separated list) for which to switch verbose
mode on (optional)
default:
-A the location of ANT (above 'bin')
default: /research/fracpete/programs/ant
-J the location of Java (above 'bin')
default: /research/fracpete/programs/jdk-1.5/ |
| start_clients.sh |
usage: start_clients.sh -H <host-file> -S <server> -P <server-port>
-p <client-port> -o <output-dir> [-h]
Starts the JobClients via 'client.sh' and ssh-agent
For ssh-agent see: http://mah.everybody.org/docs/ssh
The clients file is a simple text file with a hostname in each line.
E.g.:
somehost
anotherone
Each entry can have its own parameters, i.e. just list the options
from the command line after a semicolon ';' in the clients file. E.g.:
somehost;-A /some/dir/ant
Note: the options -h, -H are not available!
-h this help
-H the name of the file containing the hostnames for the clients
default: ./scripts/../tmp/clients
-S the hostname/IP of the server to connect to (optional)
default: localhost
-P the port on which the server listens (optional)
default: 31415
-p the port on which the client should listen (optional)
default: 31416
-o the directory where to store the logfiles
default: ./scripts/../tmp |
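The per-host options described above can be split off with ';' as the field separator; a sketch of reading such a clients file (host names and options are the examples from the text):

```shell
# Read 'hostname;options' lines; opts stays empty when no ';' is
# present in the line.
cat > /tmp/clients <<'EOF'
somehost
anotherone;-A /some/dir/ant
EOF
while IFS=';' read -r host opts; do
  echo "host=$host opts=$opts"
done < /tmp/clients
```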
| start_servers.sh |
usage: start_servers.sh -H <host-file> -j <filename> -r <filename>
-p <server-port> -o <output-dir> [-h]
starts the JobServers via 'server.sh' and ssh-agent
for ssh-agent see: http://mah.everybody.org/docs/ssh
The servers file is a simple text file with a hostname in each line.
E.g.:
somehost
anotherone
Each entry can have its own parameters, i.e. just list the options
from the command line after a semicolon ';' in the servers file. E.g.:
somehost;-A /some/dir/ant
Note: the options -h, -H are not available!
-h this help
-H the name of the file containing the hostnames for the clients
default: ./scripts/../tmp/servers
-j the filename containing the jobs
default: ./scripts/jobs
-r the file where to store the results to
default: ./scripts/../tmp/results.txt
-p the port on which the server should listen (optional)
default: 31415
-o the directory where to store the logfiles
default: ./scripts/../tmp |
| stop_clients.sh |
usage: stop_clients.sh -H <host-file> [-h]
stops the JobClients via 'kill' and ssh-agent
for ssh-agent see: http://mah.everybody.org/docs/ssh
-h this help
-H the name of the file containing the hostnames for the clients
default: ./scripts/../tmp/clients |
| stop_servers.sh |
usage: stop_servers.sh -H <host-file> [-h]
stops the JobServers via 'kill' and ssh-agent
for ssh-agent see: http://mah.everybody.org/docs/ssh
-h this help
-H the name of the file containing the hostnames for the servers
default: ./scripts/../tmp/servers |
Last updated: Mon Oct 24 19:34:54 NZDT 2005