After SOLAR has been installed and you have registered
to use it, you should be able to run it
by entering the command solar
at your Unix
shell prompt (assumed to be %
in the
example below). SOLAR returns with some introductory
information and a solar>
prompt to which
you can enter SOLAR commands (and also Tcl commands and most
Unix or linux commands). After processing each
command, SOLAR returns another solar>
prompt. (If you have not yet registered SOLAR, you will get
instructions on how to register. Each individual SOLAR user
must register. Registration was discussed in Section 2.3.)
In the following examples, you are expected to enter the
bolded commands
following the prompts.
Other commands are shown for illustration purposes only, and
the lighter text is a shortened version of the program output you
should see. Some of the commands in this chapter are also
links which take you to the more complete documentation for
that command.
% solar
SOLAR version x.x.x, compiled on Xxx xx xxxx at xx:xx:xx
Copyright (c) xxxx Southwest Foundation for Biomedical Research
Enter help for help, exit to exit, doc to browse documentation.
solar> help
[...help summary shown. Press space to scroll or q to end...]
solar>
The help
command
given by itself shows a list of all the SOLAR commands
(not including Tcl and Unix commands) along with
a short description of each. To quit help before
viewing the entire message, type
q
. You can also use help to get a
complete description of any particular command, or use the usage command to get a
short summary of the command options which stays in your
terminal window afterwards. Or, you may prefer to bring up an
html browser window to view the documentation using the doc command. All this
was discussed previously in Section 1.3.2.
Before starting with the example, you should create a new subdirectory and move to it. (Soon, the directory will be filled up with files created by SOLAR!)
solar> exit
% mkdir example
% cd example
% solar
SOLAR version x.x.x, compiled on Xxx xx xxxx at xx:xx:xx
Copyright (c) xxxx Southwest Foundation for Biomedical Research
Enter help for help, exit to exit, doc to browse documentation.
solar>
Next, you can copy the SOLAR example files to your current
directory using the example
command. Then use the Unix ls
command to view
the files in your directory:
solar> example
Copying example files to current directory
solar> ls
README gaw10.phen map9 tclIndex doanalysis.tcl makemibd.tcl mrk10 gaw10.ped map10 mrk9
The example files include the following:
gaw10.ped .......... pedigree file
gaw10.phen ......... phenotypes file
mrk9 ............... marker file for chromosome 9
mrk10 .............. marker file for chromosome 10
map9 ............... map file for chromosome 9
map10 .............. map file for chromosome 10
makemibd.tcl ....... script to make IBD and MIBD files
doanalysis.tcl ..... script to do quantitative genetic, twopoint, and multipoint analyses
The example data here are simulated data from Genetic Analysis Workshop 10 (GAW10). GAW10 was supported in part by grant GM31575 from the National Institutes of Health. Used by permission.
The two scripts provided (makemibd and doanalysis) run through a
complete analysis similar to what is described below. You can
examine the contents of either of these scripts using the Unix
cat
(concatenate and print) command. (If a
file is too large to display at one time, you may prefer to
use the
more
command instead, which uses
space
to page and q
to
exit.)
solar> cat makemibd.tcl
proc makemibd {} { ...
solar> cat doanalysis.tcl
proc doanalysis {} { ...
To run these scripts, you would simply enter either of their
procedure names (makemibd
or
doanalysis
) at a
solar>
prompt. Each script runs for about
15 minutes or more. Skip this for now, however, as they
simply duplicate what we are going to do step by step
in the following sections, though in a different order.
When starting up, SOLAR finds all the script files (with the
.tcl
extension) in your working directory
and adds the procedures found in them to the list of available
commands. If you add or change scripts after starting SOLAR,
use the newtcl
command to include them as well. A file created by SOLAR
named
tclIndex
indexes the script procedures
found in the local directory.
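As an illustration of this mechanism, here is a hypothetical script file. The file and procedure name myanalysis are invented for this sketch; the SOLAR commands inside follow the examples in this chapter:

```shell
# Hypothetical example: a script file that adds a new SOLAR command.
# Any proc defined in a .tcl file in the working directory becomes
# a command; "myanalysis" is an assumed name for illustration only.
cat > myanalysis.tcl <<'EOF'
proc myanalysis {} {
    trait q4
    covariate age sex
    polygenic -screen
}
EOF
```

After creating this file, typing newtcl at the solar> prompt (or restarting SOLAR in this directory) would make myanalysis available as a command.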
The first step before doing any kind of operation in SOLAR is
to load the pedigree file. The formats for all files are
described in the next chapter. You can also view the pedigree
file requirements by entering the SOLAR command
file-pedigree
.
solar> load pedigree gaw10.ped
After this has completed (it may take a few seconds), you may either begin building IBD and MIBD files for linkage analyses or you may proceed directly to performing a simple quantitative genetic analysis (which might only take a few minutes and for that reason is described next).
You must work with only one pedigree data set within a particular working directory. Once you have loaded a pedigree file in a particular directory, that pedigree will be already "loaded" next time you start SOLAR in that same directory, so you will not need to "load" it again. The pedigree loading command creates several files in the working directory with all the information needed by SOLAR about the pedigree in a format SOLAR can access quickly. If you load different pedigrees in the same working directory, you will probably get into trouble.
If you need to change the pedigree data, even by just removing one individual from it, you should probably delete the working directory, all the files in it, and all the subdirectories, and especially all the IBD and MIBD files, because they will all become obsolete. This is discussed further in Section 8.2.
After the pedigree file has been loaded, you may load the
phenotypes file. The phenotypes file is described in the next
chapter. You can also view the phenotypes file requirements
by entering the SOLAR command file-phenotypes
.
solar> load phenotypes gaw10.phen
Once you have loaded a phenotypes file, it remains loaded until another phenotypes file is loaded in the same working directory. You can exit and re-enter SOLAR in the same directory without needing to reload the phenotypes file. However, if you modify the phenotypes file, you should load it again.
Next, you can choose any of the phenotypes to be the trait
(dependent variable) using the trait
command. Let us try the phenotype q4 for starters.
Note that SOLAR is case-insensitive regarding variable and
parameter names so you can just type them in lower case
regardless of the way they are named in your data files.
solar> trait q4
(To specify two traits, which is possible starting with SOLAR version 2, you would specify them both separated by a space.)
After selecting the trait, you can select covariates using the
covariate
command. Covariates may include sex, any phenotypes (other
than the trait), and interactions of these. Interactions are
specified using the *
sign, for example
age*sex
means "age by sex."
solar> covariate sex age age*sex
At this point you would normally use the
polygenic
or polygenic
-screen
command to automatically run a standard
quantitative genetic analysis which maximizes several models
and compares them. (Maximization is the process of finding
the set of parameter values having the highest likelihood.)
Using the polygenic
command will be
described in the next section. But for this even simpler
analysis, we will just maximize one model to see how
SOLAR maximization works.
There are many ways SOLAR models can be parameterized. The
standard parameterization we use includes parameters
e2
,
h2r
, and h2q1
.
Each of these represents a proportion of the total variance
after the effect of all covariates has been removed.
e2
is the residual non-genetic variance.
In a polygenic model, h2r
is the
total additive genetic heritability. In a
linkage model (with one or more locus specific
elements)
h2q1
represents the heritability
associated with the first locus, and
h2r
represents the residual genetic
variance. In an oligogenic model, there
may also be
h2q2
, h2q3
, etc.,
representing the variance associated with the second locus,
the third locus, and so on.
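A sketch of how these proportions enter the model may help. Writing Phi2 for twice the kinship matrix, I for the identity matrix, and sigma_p^2 for the phenotypic variance after covariates (this matches the omega expression shown by the model command later in this chapter), the polygenic covariance structure is

```latex
\Omega = \sigma_p^2 \left( \Phi_2\, h2r + I\, e2 \right), \qquad e2 + h2r = 1
```

and a linkage model with one locus-specific element adds a matrix of estimated IBD sharing at that locus (written here as $\hat{\Pi}_1$, our notation for the IBD/MIBD matrix):

```latex
\Omega = \sigma_p^2 \left( \hat{\Pi}_1\, h2q1 + \Phi_2\, h2r + I\, e2 \right), \qquad e2 + h2r + h2q1 = 1
```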
If you want to parameterize your model differently, you are
free to do that in SOLAR, but you may have to set
up the parameters and other model features yourself.
The
maximize
command will work with almost any
conceivable parameterization as long as the
mu
and omega
commands
are set up as needed for that parameterization. At this time,
models with non-standard parameters will not work with the
polygenic
command, but will work with the
twopoint
and
multipoint
commands discussed below with
the -cparm
option.
Custom Parameterization and
the -cparm option are discussed in Section 9.5.
The standard parameterization for a sporadic model is set up
(but the model is not maximized) by the
spormod
command. A sporadic model has no
genetic component because
h2r
is constrained to zero. The standard
parameterization for a polygenic model is set up with the
polymod
command. The standard
parameterization for a linkage model is set up with the
linkmod
command, however
linkmod
only works if either a sporadic or
polygenic model has already been set up.
linkmod
can either add a new linkage element (on top of whatever
others may already be present), or replace an existing linkage
element with another one. Linkage elements are associated
with IBD and MIBD matrices, as described below.
Give the following command to set up the standard parameters for a polygenic model:
solar> polymod
Now, you can find the maximum likelihood for this polygenic
model using the
maximize
command.
solar> maximize
This will run for a while (probably less than a minute) and
end up with a display of the final parameter values including
that of H2r and the natural log of the likelihood
(loglikelihood) of the model. You should get a loglikelihood
of -420.252 (or something close to that) and an H2r of 0.55,
which in this polygenic model represents the total additive
genetic heritability. During this maximization you will see a
lot of detail about the maximization process, including
parameter estimates for each iteration. Normally when you
run the higher level commands such as
polygenic
or
multipoint
, this detail is suppressed, but
is often written to certain output files for later
examination.
The results of all solar commands which maximize models
(including maximize
,
polygenic
, twopoint
,
multipoint
, and
bayesavg
) are written to a specific output
subdirectory. By default, this maximization output
directory or outdir
is named by the
trait command. If you want to evaluate several sets of models
using the same trait (but different combinations of
covariates, household effects, or options) you can use the
outdir
command to set the maximization output directory to a
name of your own choosing.
There is no problem with running several SOLAR processes on the same machine, but it is recommended to run them from separate working directories. When several processes are running from the same working directory, it is hard to avoid having one process modify a file that another process is reading. Also you must never have two SOLAR processes writing to the same maximization output directory at the same time, because the results will inevitably be mixed up, or one process might crash.
Now use the Unix ls
command to show what
is in the q4 subdirectory.
solar> ls q4
last.mod solar.out
The maximization output directory will contain a file named
solar.out
, which recaps all the
maximization details which were just displayed on your
terminal. It will also contain a model named
last.mod
, which was the model PRIOR TO
maximization. (Note: if there were convergence
difficulties in maximization, and SOLAR performed several
retries to get past them, last.mod
is not guaranteed to have your last model. It might contain
some model in the middle of the retry sequence.)
The maximized model itself is not saved to a file by the
maximize command, but it remains loaded in
memory as the current model. You can
display the current model with the
model
command:
solar> model
trait q4
parameter mean = 11.43 ...
parameter sd = 0.97 ...
parameter e2 = 0.45 ...
parameter h2r = 0.55 ...
constraint e2 + h2r = 1
omega = pvar*(phi2*h2r + I*e2)
# mu = \{Mean+bsex*Female+bage*(age-44.645)+bage*sex*(age-44.645)*Female\}
Note that the model itself is a series of SOLAR commands. You
could use commands such as those in a model to change the
current model. Each parameter
command sets up a parameter whose maximum
likelihood estimate is computed during maximization. You
can also use a parameter
command to
display the parameter value, or set the starting point or
boundaries of the parameter. The constraint
command sets up constraints on parameters or sums of
parameters. The omega
and
mu
commands define how the parameters, data
values, and genetic matrices work together.
In most cases, however, you will simply let SOLAR set up and maximize models for you based on higher level commands. Sometimes you will save or load models, and sometimes it is useful to examine them.
Most of the SOLAR commands which maximize models for you will
save the maximum likelihood model for you with
descriptive names.
The
maximize
command itself does not do this.
You may save the current model into any directory, but in this
example we will save it to the output directory using the
save
model
command:
solar> save model q4/simple
solar> ls q4
last.mod simple.mod solar.out
Models are automatically saved with a .mod
extension which is assumed for SOLAR models. Later, if
you wanted to reload this model, you would use the load model
command:
solar> load model q4/simple
Normally, after setting up the trait and covariates using the
trait and covariate commands, you will run an automated
quantitative genetic analysis using the polygenic
command. To determine the significance of each covariate,
include the -screen
option. You will not
need to run the maximize command, as we did above for
tutorial purposes; the polygenic
command
takes care of that. To clear out the old model currently
in memory, give the model new
command first.
solar> model new
solar> trait q4
This time we will specify the age and sex covariates using an
abbreviation for the following five covariates:
age sex age*sex age^2 age^2*sex
We
commonly try all of these covariates
(and since we are going to be screening covariates, the
useless ones will get kicked out anyway).
The abbreviated covariate specification for all of these
covariates is
age^1,2#sex
. Abbreviations like this are
described in the documentation for the covariate command.
The comma (,
) and pound (#
) characters are special
abbreviation characters that imply more than one covariate.
You can display the covariates selected with the
covariate
command.
If you make a mistake, enter the command covariate
delete_all
and start over.
solar> covariate age^1,2#sex
solar> covariate
age sex age*sex age^2 age^2*sex
Now we are ready to run the polygenic
-screen
command which does our quantitative genetic
analysis with simple screening:
solar> polygenic -screen
This command will create, maximize, and compare several different models, beginning with a sporadic model, then a polygenic model, then a polygenic model with each covariate suspended (which is the same as being constrained to zero). The covariates not found to be significant will be removed (a very permissive threshold of p < 0.1 is used for this test so as not to exclude any covariates which might be useful). Then the sporadic and polygenic models will be maximized again, and a model with NO covariates will be maximized to determine the variance caused by all remaining covariates. Finally, a screen of information like the following will be displayed.
Pedigree: gaw10.ped
Phenotypes: gaw10.phen
Trait: q4 Individuals: 1000

H2r is 0.5501787 p = 2.7716681e-29 (Significant)
H2r Std. Error: 0.0574114

age p = 0.0085273 (Significant)
sex p = 0.2913649 (Not Significant)
age*sex p = 0.2032678 (Not Significant)
age^2 p = 0.2947688 (Not Significant)
age^2*sex p = 0.5929531 (Not Significant)

The following covariates were removed from final models: sex age*sex age^2 age^2*sex

Proportion of Variance Due to All Final Covariates Is 0.0060284

Output files and models are in directory q4/
Summary results are in q4/polygenic.out
Loglikelihoods and chi's are in q4/polygenic.logs.out
Best model is named poly and null0 (currently loaded)
Final models are named poly, spor, nocovar
Constrained covariate models are named no<covariate>

Residual Kurtosis is -0.0363, within normal range
We recommend that polygenic -screen
be run
before doing any linkage analysis. Covariates with very low
significance should probably be removed before performing a
linkage analysis; sometimes having a lot of ineffective
covariates will lead to convergence errors which
prevent some models from maximizing.
You can also run the polygenic command without the
-screen
argument and the covariate
screening will not be performed. You can also use the
-all
argument to keep all covariates in
the model regardless of their significance, or the
-fix
argument to force particular
covariates to be kept.
In any case, the
polygenic
command will produce a model
named null0
(null with zero linkage
elements) which will be required for all linkage analyses.
You can also save a maximized model of your own design as
null0
(in the maximization output directory)
if you want to use something different from what the
polygenic
command creates as the null model
for linkage.
Linkage analyses should be built on top of a good polygenic analysis. This will improve the ability to find marker effects.
But before performing ANY linkage analyses, you also will need to create IBD files (for twopoint linkage analyses) and MIBD files (for multipoint linkage analyses).
Twopoint linkage analysis requires Identity by Descent matrices, which in SOLAR are called IBDs. These must be computed and saved in compressed files for repeated usage. After some or all IBDs have been computed, we can analyze them in a twopoint scan.
The steps in the IBD computation process depend on the nature of the data available. The simplest case applies to the example data, so we'll start with that, and then look at what you would do in other cases. (A more detailed top-down discussion is given in Section 5.2.)
Regardless of what kind of data you have, it is a very good
idea to have a dedicated subdirectory for ibd files, which we
call the ibddir. If that directory doesn't already
exist, it can be created with the Unix
mkdir
command. Then, use the ibddir
command to tell SOLAR that this directory is to be used for
storing and reading IBD files:
solar> mkdir gaw10ibd
solar> ibddir gaw10ibd
In the GAW10 simulated data set included with SOLAR, all genotypes of all individuals in the pedigree are known. This is unusual in real data sets, but it makes sense to build a simulation this way. Since marker allele frequencies are only used to impute missing genotypes, frequency data is not required for completely-typed markers.
There is a discussion of the marker file format in the next
chapter, but you can also get a description in SOLAR using the
file-marker
command.
So, the series of commands which works best with the example data is just:
solar> load marker mrk9
solar> ibd
For the example data, this should take between 1 and 15 minutes (depending on computer speed). Then, you can proceed to create the IBD files for chromosome 10:
solar> load marker mrk10
solar> ibd
Since computing the IBD files for each marker takes some time,
it is a good idea to put the required commands into a script to
do every marker unattended. The script
makemibd.tcl
is a simple example of such a
script (and it also makes the mibd files). Note that
Unix commands such as
mkdir
must be preceded by the Tcl command
exec
inside scripts, even though this is
not required when you are typing commands to the
solar>
prompt. There are more useful
pointers about writing SOLAR scripts in Chapter 7.
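A minimal sketch of such a script might look like the following. The proc name makeibds is invented here; the marker files and the gaw10ibd directory follow the example data:

```shell
# Hypothetical sketch of an unattended IBD-building script.
# "makeibds" is an assumed proc name for illustration; see
# makemibd.tcl for the full version shipped with the example.
cat > makeibds.tcl <<'EOF'
proc makeibds {} {
    # Unix commands need "exec" inside scripts
    exec mkdir -p gaw10ibd
    ibddir gaw10ibd
    foreach mrk {mrk9 mrk10} {
        load marker $mrk
        ibd
    }
}
EOF
```

Typing makeibds at the solar> prompt would then compute IBDs for both markers without further attention.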
When the last ibd
command has completed you
will have IBD files and may either do a twopoint model
analysis or proceed to the preparation of multipoint IBD
(MIBD) files. But first, let's consider how you would create
IBD files with a more typical data set.
If marker data for some individuals is not available, but
marker frequency information (independent of the sample data)
is available, it should be written to a frequency data file
(the freq file) and loaded prior to loading the marker data
with the command load freq
.
The freq file is also described in Chapter 4, but you can also
view the requirements for the freq file by entering the SOLAR
command
file-freq.
So, in this situation you would use the following commands (do not try this with the example data!):
load freq freq.dat
load marker mrk9
ibd
The ibd
command is the one which actually
creates the IBD files. It will require some time to finish,
depending on the size of your data files and other factors
such as the number and distribution of individuals with
unknown genotypes.
If no marker frequency information is available, you should
use the freq
mle
command to compute maximum likelihood
estimates of frequency after loading the marker data but
before giving the ibd
command
(do not try this with the example data!):
load marker mrk9
freq mle
ibd
Computing maximum likelihood estimates of marker frequencies
takes a large number of calculations, so expect the freq
mle
command to take a long time. Once again, this
depends not only on the size of the pedigree and marker data,
but also on other factors such as the number and distribution of
individuals with unknown genotypes.
Before performing a twopoint scan, you should first
construct a suitable null model (called
null0.mod
). This would be done by the
polygenic -screen
command described above
in Section 3.6. The null model was written to a subdirectory
named q4
because the trait was named q4
.
If you have exited SOLAR and re-entered SOLAR, you should give
the trait
command again (or specify the
outdir
again, if you did that), so that
SOLAR can find the null0
model created by
your polygenic analysis. (As long as you stay within the same
working directory, however, you need not specify the
ibddir
again.)
solar> ibddir gaw10ibd
solar> trait q4
Now you can perform the twopoint scan by simply giving the
twopoint
command:
solar> twopoint
This will build and test models with every marker found in the
ibddir. The model with the highest LOD (which should be
marker d9g9 with a LOD of 2.3822) will be retained in memory.
The full results will be written to a file named
twopoint.out
in the q4
subdirectory.
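For reference, each LOD score reported is the base-10 logarithm of the ratio of the linkage model's likelihood to the null model's likelihood. Since SOLAR reports natural-log likelihoods, the conversion is

```latex
\mathrm{LOD} = \log_{10}\frac{L_{\mathrm{linkage}}}{L_{\mathrm{null}}}
             = \frac{\ln L_{\mathrm{linkage}} - \ln L_{\mathrm{null}}}{\ln 10}
```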
Multipoint linkage analysis requires Multipoint Identity by Descent matrices, which in SOLAR are called MIBDs. These are computed and saved in compressed files for repeated usage. Then we can analyze models using them in a Multipoint Scan.
In section 3.7.1, we created a set of IBD files. We now use
those as the foundation for building a set of MIBD files.
MIBD files are saved in a separate directory from the IBD
files, which is specified with the mibddir
command. (If you have changed working directories, you would
need to specify the ibddir
again,
otherwise you need not do so.)
solar> ibddir gaw10ibd
solar> mkdir gaw10mibd
solar> mibddir gaw10mibd
Next, we need to load the map file, which gives the positions
of each marker on a particular chromosome. The map file is
described in the next chapter. You can also view the map file
requirements by entering the SOLAR command file-map
.
solar> load map map9
We can now give the mibd
command
to actually create the MIBD files. It requires one argument,
the separation between MIBD locations in cM; 1 is typically
used.
solar> mibd 1 ;# "1" means "1 cM"
MIBD calculation will take some time, 5 to 30 minutes for the example.
To compute MIBDs for chromosome 10, repeat the "load map" and "mibd 1" commands.
solar> load map map10
solar> mibd 1
When the MIBDs have been computed, you will be ready to run a
multipoint scan. As with the twopoint scan, you
must have created a null0
model first as
described above in section 3.6. (You need not repeat that if
done already.) If you have exited and re-entered SOLAR since
creating that model, you would need to specify the
trait
(or
outdir
) again; otherwise, it is not
necessary. Then the following commands will start a
multipoint scan:
solar> trait q4
solar> chromosome 9-10
solar> interval 5
solar> finemap 0.5 ;# finemap around all LOD's > 0.5
solar> multipoint 2
The chromosome
command specifies the
chromosomes you want to include in your multipoint scan. If
you had created multipoint files for chromosomes 1-23, you
could give the command chromosome 1-23
.
With this example, however, we have only created MIBD files
for chromosomes 9 and 10. You can also select a single
chromosome, multiple chromosomes (chromosome 2 9
11
) or any number of ranges of chromosomes (
chromosome 1-10 15-20
). Suffixed
chromosomes must be specified individually
(chromosome 2p 2q 13-23
). SOLAR will
silently skip over any gaps in the chromosome set found
in your mibddir
, so you can specify a
larger set than you actually have data for.
The interval
command specifies how far
apart to build multipoint models. If you specified
interval 1
they would be 1 cM apart. In
many cases this wastes a lot of time analyzing models which
have LOD scores close to 0. In the
interval
command, you can also specify a
particular range of locations (interval
5 100-150
).
The finemap
command specifies a finemap
criterion. After the first pass at interval 5, a second
pass is performed including all points surrounding locations
having a LOD score higher than this value. This makes sure
you are not losing any interesting points. (finemap and
interval work together to help you save time yet not lose any
important results.) The finemap command is optional; the
default finemap criterion is 0.588.
If you have not specified mibddir, chromosome, and interval,
you will be prompted to do so by the
multipoint
command. (This cannot be done in scripts, so in scripts you
should always use the
mibddir
, chromosome
,
and interval
commands prior to the
multipoint
command, as shown above.)
The multipoint
command itself does not
require any arguments. If no arguments are given, it simply
performs one pass through the selected chromosomes, reporting
all the LOD scores, and saving the linkage model with the
highest LOD score. Often this is all you need or want to do.
When multipoint
is invoked with a
continuation criterion LOD score argument as we have
done above, it attempts to perform oligogenic scanning
by making multiple passes through the selected chromosomes.
If the highest LOD score in any pass exceeds the criterion,
another pass will be performed using all the highest scoring
loci from previous passes as fixed elements. Scanning will
stop when the highest scoring locus in the last pass does not
meet the criterion. The resulting best model will
include all the loci whose inclusion exceeded the criterion
conditioned on the previous loci. (The loci from the last
pass, which must have not met the criterion, will not be
included.) If successful, oligogenic scanning will result in a
sequence of interesting QTL identifications.
More than one criterion can also be specified; each additional criterion applies to the next pass in sequence (and all remaining passes if it is the last criterion). Typically people set a higher LOD criterion for the first QTL than for subsequent ones:
multipoint 3 2
The multipoint results will be displayed on the terminal and
also saved to several files in the maximization output
directory (q4
in this example). The file named
multipoint.out
will include summary
information including the ranges of loci tested and the best
ones. The file named
multipoint1.out
will show the results of
the first pass, multipoint2.out
will show
the results of the second pass, and so on. The models will
include null1.mod
which is the best model
containing one linkage element, null2.mod
,
which is the best model containing two linkage elements, and
so on. The maximization details for these models will be
saved in files null1.out
,
null2.out
, and so on. Each numbered null
model serves as the null for the next higher pass.
To plot the chromosome having the highest LOD score in the previous
multipoint scan, you can use the plot
command.
If you have exited and re-entered solar, you must re-enter the
trait
(or outdir
)
command so that SOLAR can find the multipoint output files.
solar> trait q4
solar> plot
If you have changed working directory, you should also give
the mibddir
command so that SOLAR
can find the processed map information to include marker
positions on the plot. Otherwise, you can select the
-nomark
option which prevents SOLAR
from looking for the map file. You can also provide a
custom map file for plotting using the
-map
option.
There are many plot options in SOLAR (see the documentation
for the plot
command). SOLAR uses xmgr for standard plotting, and
you can also use the xmgr graphical interface to change
the formatting. You can also copy the plot parameter file
multipoint.gr
to your working directory
and modify it to change the plot format for all multipoint
plots. The default file is in the lib
subdirectory of your SOLAR installation. If you would like to
use the same formatting in several working directories, you
can copy your modified multipoint.gr
file to a
lib
subdirectory of your home directory
and it will apply to all of your SOLAR working directories.
(You can also put your personal Tcl scripts in a
lib
subdirectory to be available when you
are in any working directory.) The
multipoint.gr
file contains documentation
to understand the options available. (Some other types of
plotting have other
.gr
plot parameter files. There is no
.gr
file for some other forms of plotting
which do not use xmgr.)
To specifically plot chromosome 10 in scan (pass) 2 and write a postscript file for it, you could use the command:
solar> plot 10 -pass 2 -write
The postscript file will be written to the current
maximization output directory (q4) and have the name
chr10.pass02.ps
.
To show all the chromosomes on a single page, we prefer a simplified plot format called stringplot (which looks better for many chromosomes than for the two in this example):
solar> plot -string
The stringplot is done using Tk graphics instead of
xmgr. There is no interactive interface (GUI), but there
are some command line options available. For stringplot, you
need to specify color by name (with the
-color
argument) instead of by number.
There are several other customization options available including
-lod -noconv -date -lodmark -lodscale -dash
-linestyle -titlefont
Another way to show all the chromosomes on a single page is with miniature plots. (Warning: This will not work unless you have the free programming language Python installed on your system, but most systems have Python now.)
plot -all
plot -all -nodisplay
Often people run SOLAR on a remote machine. To communicate with the running SOLAR program, they use a terminal window to the remote machine which appears on their local window system. Under these circumstances, you will need to take extra steps to do plotting, if you can even plot directly from SOLAR at all. Plotting is inherently graphical, so it won't work if all you have is a text-based terminal. You will not even be able to generate a Postscript (tm) file without an acceptable graphical user interface; this is a limitation of the plotting software SOLAR uses.
SOLAR is intended for use on Unix or linux systems which use the X window system (also simply called X), as nearly all of them do. In any case, having X somewhere on your network is a requirement for plotting. Packages which support X may also be available for "PC" or Mac systems, but we don't know anything about them, so don't ask us, ask your vendor or system administrator.
First you need to decide where to direct the graphical display. If you are lucky enough that your workstation has an X display (if it is a Unix or linux system with a graphical display it almost certainly does) you can simply direct the graphics back to your workstation. Otherwise, you may be able to do plotting by using the graphics display on the machine where SOLAR is actually running and control it from there. Or you may be able to direct the display to some other machine on your network which has an X display.
To tell the machine where SOLAR is actually running to send
graphics to another machine, you need to define or redefine
the DISPLAY
environment variable. You
must do this at the shell prompt before starting SOLAR. How this is
actually done depends on what shell you are using. If you are using
bash
(the default on most linux systems),
or ksh
, you would give a command like this
(assuming that the machine with the display is called
usedisplay; substitute the applicable name where
usedisplay appears):
export DISPLAY=usedisplay:0.0
For the csh
shell, you would use a command
like this:
setenv DISPLAY usedisplay:0.0
These commands can be put into a startup file in your home directory
on the remote machine. Such a startup file might be named
.bash_profile
, .kshrc
, or
.cshrc
, for bash, ksh, and csh respectively.
(To see these filenames beginning with ".", you will need to use
the command ls -a
.)
The second part of redirecting the graphics display from one machine to another is opening up the display permission on the machine having the X display. Open a terminal window on the X display machine, and enter the following command (assuming that the machine where SOLAR is running is called remote; substitute the applicable name):
xhost + remote
If you don't know what the name of a machine is, you can use the
command uname -n
on that machine to find out.
You have now seen an overview of things SOLAR can do with only a few commands. There are many more commands in SOLAR and many more things that can be done.
Additional discussion of the SOLAR analysis commands
may be found in
Chapter 6: Basic Modeling Commands.
This mostly covers the same material as this tutorial, such
as polygenic, twopoint, and multipoint analysis. It also
gives an overview of the use of the
bayesavg
command for Bayesian Model
Averaging. Finally, it discusses techniques for achieving
convergence in difficult cases.
Chapter 9: Advanced Modeling
Topics discusses Discrete Traits, Bivariate Analysis,
Household Groups Analysis, Dominance Analysis, and Custom
Parameterization. This chapter also gets far deeper into the
SOLAR mu
and omega
commands which are the mechanism underneath all SOLAR models.