SOLAR supports discrete traits with two possible values, which may be coded as any two consecutive integers such as 0/1 or 1/2. Missing data values must be blank (not some other number). The coding of blanks is described in section 4.1.1.
The presence of more than two consecutive integers will force SOLAR to handle a phenotypic variable as quantitative. SOLAR cannot handle discrete traits with more than two classes; if such a trait must be analyzed, the problem has to be reformulated in some way.
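The two-consecutive-integers rule can be sketched as follows (an illustration of the stated behavior, not SOLAR's actual source; the function name is made up):

```python
def is_discrete(values):
    """Return True if a phenotype column would be treated as discrete:
    exactly two distinct non-missing values that are consecutive integers."""
    distinct = sorted(set(v for v in values if v is not None))
    return (len(distinct) == 2
            and all(float(v).is_integer() for v in distinct)
            and distinct[1] - distinct[0] == 1)

print(is_discrete([0, 1, 1, 0, None]))  # True: coded 0/1
print(is_discrete([1, 2, 3]))           # False: treated as quantitative
```

Any other coding (three or more values, or non-consecutive integers such as 0/2) falls through to quantitative handling.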
Beyond the coding of the trait variable, SOLAR does not require you to do anything special or different for discrete trait models. You can use all the same commands as with quantitative trait models.
SOLAR uses a liability threshold model to handle discrete traits. This is implemented in separate maximization routines which are mostly identical to those used for quantitative trait maximization. In the maximization output files (such as poly.out, created by the polygenic command) there will be a message "Using SOLAR Discrete Trait Modeling". Also, for discrete traits only, the polygenic command will compute the Kullback-Leibler R Squared when there are covariates, instead of computing the variance due to all covariates.
Although support for discrete traits is provided, and we keep improving it, we recommend the use of quantitative traits whenever possible. Quantitative traits carry far more information than discrete traits, and quantitative trait models maximize more quickly and reliably. Although recent improvements have helped the convergence of discrete models, there are still some cases where obviously incorrect results, such as heritabilities of exactly 1, are returned. If you get a heritability of exactly 1, be very suspicious that it is incorrect. (Another case which can lead to erroneously high heritabilities is when there are many monozygotic twins in the sample. Monozygotic twins often lead to singularities in the covariance matrix.)
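The core idea of the liability threshold model is that the binary trait is a thresholded standard-normal liability. A minimal sketch of the threshold computation (the function name is illustrative, not a SOLAR command):

```python
from statistics import NormalDist

def liability_threshold(prevalence):
    """Threshold t on a standard-normal liability scale chosen so that
    P(liability > t) equals the observed trait prevalence."""
    return NormalDist().inv_cdf(1.0 - prevalence)

t = liability_threshold(0.25)   # individuals with liability above t are "affected"
```

The information loss relative to a quantitative trait is intuitive here: an entire continuum of liability values collapses into a single bit per individual.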
Sometimes people who are having difficulty getting convergence for their discrete trait models try analyzing the discrete trait as if it were quantitative. This can be done by giving the command:

option EnableDiscrete 0

prior to running polygenic the first time. (If you have previously analyzed a discrete trait model, be sure to use the model new command first to erase the parameters and constraints associated with the discrete trait model.) Sometimes this will provide better convergence than when the trait was analyzed as discrete. Beware that there is at least one fundamental error in analyzing a discrete trait as quantitative: there are not truly two degrees of freedom for the Mean and SD parameters. Therefore, and still owing to the lack of information provided by the discrete variable, convergence failure remains more common than with true quantitative trait models.
Another discussion regarding the use of discrete traits is printed by the command discrete-notes.
The detection of binary discrete values is the same for covariates as it is for traits. Any pair of sequential integers will work. Beware that if there are more than two sequential integers, SOLAR will simply handle them as quantitative. (And do not code missing values as zero! All missing phenotypes should be left blank.)
Advice as to what to do when you have 3 or more classes is given in the next section.
For covariates there is only a small difference in maximization processing between the discrete covariates and quantitative covariates. In both cases, the beta parameter is multiplied by an adjusted phenotypic value in the effective mu of the model. The difference concerns only the adjustment -- specifically what value is subtracted from each observation. For quantitative covariates, the sample mean is subtracted from each observation. (Thus, they are mean adjusted.) For discrete covariates, the adjustment is the lower of the two values (so that the two values which can be multiplied by the beta parameter are 0 and 1). In this case the phenotypic variable acts as a switch to turn the beta coefficient on or off in the mu. The beta parameter then becomes an estimate of the mean displacement caused when the variable has the higher value.
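The two adjustments can be sketched as follows (a Python illustration of the rule described above, not SOLAR code):

```python
def adjust_covariate(values):
    """Mean-adjust quantitative covariates; for binary discrete covariates,
    subtract the lower value so the covariate becomes a 0/1 switch."""
    distinct = sorted(set(values))
    if (len(distinct) == 2
            and all(float(v).is_integer() for v in distinct)
            and distinct[1] - distinct[0] == 1):
        base = distinct[0]                # discrete: subtract the minimum
    else:
        base = sum(values) / len(values)  # quantitative: subtract the mean
    return [v - base for v in values]

print(adjust_covariate([1, 2, 2, 1]))       # [0, 1, 1, 0]
print(adjust_covariate([1.0, 2.0, 3.0]))    # [-1.0, 0.0, 1.0]
```

With a 1/2-coded covariate, the adjusted values 0 and 1 switch the beta term off or on, so the beta estimates the mean displacement of the higher-valued group.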
Prior to maximization, SOLAR cannot tell which variables are discrete, so the adjustment term for all variables is shown in the effective mu as the variable mean value, signified by the prefix x_. (For example, if a discrete covariate is d, the adjustment value is shown in the effective mu as x_d.) During maximization, SOLAR determines which variables are discrete, and will actually use the variable minimum value as the adjustment base for discrete variables. After maximization, the actual adjustment used (shortened to a few significant digits for display purposes) will be shown in the effective mu, or dropped if the variable was coded as 0/1. For example, if the discrete covariate d is coded as 1/2 the term will be bd*(d-1), but if the covariate is coded as 0/1 the term will be bd*d.
When your discrete covariates can have more than 2 states, SOLAR cannot automatically handle them correctly. You will need to either decompose them into a series of separate binary covariates, or use a Household Group Analysis, which is discussed in a later section. (Note that the Household Group Analysis performed by SOLAR is very general and need not refer strictly to Households only.)
Decomposing categorical covariates into binary covariates is best and most easily done when there is a small number of classes. The correct way to do this is to let the overall mean apply to the most common class. Then, for N classes, use N-1 dummy variables corresponding to the less common classes. Each of those dummy variables could be coded as 0/1, with 1 signifying that the class named by the variable is applicable. (Actually, as for all discrete covariates, using any pair of consecutive integers will work; SOLAR will automatically use the smaller one as a subtrahend.)
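For example, a hypothetical 3-class covariate could be decomposed like this (a sketch; the is_ variable names are made up, and the most common class is absorbed by the overall mean):

```python
from collections import Counter

def dummy_code(categories):
    """Decompose an N-class covariate into N-1 binary 0/1 dummy variables,
    letting the overall mean absorb the most common class."""
    counts = Counter(categories)
    reference = counts.most_common(1)[0][0]       # most common class
    classes = [c for c in counts if c != reference]
    return {f"is_{c}": [1 if x == c else 0 for x in categories]
            for c in classes}

print(dummy_code(["A", "A", "B", "C", "A"]))
# {'is_B': [0, 0, 1, 0, 0], 'is_C': [0, 0, 0, 1, 0]}
```

Each dummy column would then be added as a separate covariate in SOLAR.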
Beginning with SOLAR version 4, you can maximize multivariate models having multiple traits. You simply specify multiple traits instead of one (using the trait command) and SOLAR takes care of the remaining details automatically. You can use the same commands that you would use for univariate models (sometimes with certain options disallowed). If you had previously analyzed a univariate model, it is necessary to clear that out with the model new command. For example:

solar> model new
solar> trait q1 q2 d3
solar> covariate age sex
solar> polygenic
Multivariate models have variance parameters and beta parameters for each trait. As with univariate models, the beta parameters are set up automatically by the covariate command, and the variance parameters are set up automatically by the polygenic command. A parenthesized suffix for each parameter indicates the trait to which it applies. For example, the parameter e2 for trait q1 will be indicated as e2(q1). It is also possible to use parenthesized suffixes to specify the use of covariates for one trait but not the other in the covariate command.

Multivariate models also have rhoe and rhog parameters, and bivariate linkage models also have a rhoq1 parameter. These parameters correspond to the environmental, genetic, and linkage correlations between the traits.
SOLAR will make maximum use of unbalanced data sets in which some individuals have only one of the traits. By default, such individuals will be included in the analysis sample, as will be shown in the output of the polygenic command. (Note: this does not apply to covariate variables. Individuals missing any covariate variable will not be included in the sample.) If you would prefer that only individuals having all traits be included in the sample, you can set the UnbalancedTraits option to zero before giving the polygenic command:

option UnbalancedTraits 0
The -screen option for the polygenic command is not available for multivariate models. You are expected to do your covariate screening in univariate models first.

The RhoE and RhoG values are computed and reported along with their estimated standard errors. In addition, you can specify the -testrhoe and -testrhog options. The -testrhoe option tests the significance of the rhoe difference from zero. The -testrhog option tests the significance of the rhog difference from zero and also from either 1 or -1 (depending on whether rhog is negative or positive, and not exactly 1 or -1 already). The latter test is a test for pleiotropy.
Covariates can be specified as unqualified (applicable to all traits) or as qualified (applicable only to a particular named trait). The qualifier is a parenthesized suffix. For example, if age is to be a covariate for trait q1 only, you could specify the covariate like this:
solar> covariate delete_all
solar> covariate age(q1)

You can also specify "null" covariates (by following them with a pair of empty parentheses) which do not apply to any trait, but require the variable in the sample just like real covariates:
solar> covariate sex age*sex(q2) ef()
In the above command, covariate sex will apply to all traits, age*sex will apply to trait q2, and variable ef is required in the sample but not otherwise used.
The terms in the default mu will reflect the traits and the currently applicable covariates. As shown below, the default mu is enclosed in backslashed curly braces \{ and \}. This portion of the mu is maintained automatically by SOLAR to correspond to your trait and covariate selections.
solar> model new
solar> trait q1 q2
solar> mu
mu = \{t1*(<Mean(q1)>) + t2*(<Mean(q2)>)\}
solar> covariate sex
solar> mu
mu = \{t1*(<Mean(q1)>+<bsex(q1)>*Female) + \
     t2*(<Mean(q2)>+<bsex(q2)>*Female)\}
Notice that a bivariate mu is divided into two large terms, one multiplied by t1 and the other multiplied by t2. The t1 variable is 1 if the first trait is being estimated and the t2 variable is 1 if the second trait is being estimated. You may also use the t1 and t2 variables if you extend or replace the default mu.
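The role of the t1/t2 indicators can be sketched numerically (the function and argument names here are illustrative, not SOLAR identifiers):

```python
def bivariate_mu(trait_index, mean_q1, mean_q2, bsex_q1, bsex_q2, female):
    """Evaluate a default bivariate mu of the form shown above: t1/t2
    select which trait's mean and covariate terms apply."""
    t1 = 1 if trait_index == 1 else 0
    t2 = 1 if trait_index == 2 else 0
    return (t1 * (mean_q1 + bsex_q1 * female)
            + t2 * (mean_q2 + bsex_q2 * female))

print(bivariate_mu(1, 10.0, 20.0, 0.5, -0.5, 1))  # 10.5 (trait q1, Female=1)
print(bivariate_mu(2, 10.0, 20.0, 0.5, -0.5, 0))  # 20.0 (trait q2, Female=0)
```

Exactly one of t1, t2 is nonzero for any observation, so only one trait's terms contribute.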
Parameter names including parentheses or other mathematical characters must be enclosed in angle brackets. This becomes important for bivariate Mu because all bivariate parameters have a parenthesized suffix specifying the trait to which they apply.
If you are adding terms in addition to those in the default mu, you need not type in the default portion, and you should not type it in outside the curly bracket delimiters. Either use the mu = mu + <new> syntax or include some portion of the default mu in backslashed curly braces. (It is not necessary to include the entire thing; in fact, it doesn't actually matter what you include inside the delimiters, but there has to be at least one term for parsing.)
solar> mu = mu + 0.01 * sqrt(age-x_age)
solar> mu
mu = \{t1*(<Mean(q1)>+<bsex(q1)>*Female) + \
     t2*(<Mean(q2)>+<bsex(q2)>*Female)\} + \
     0.01*sqrt(age-x_age)
solar> mu = \{junk\} + 0.02 * sqrt(age-x_age)*Female
solar> mu
mu = \{t1*(<Mean(q1)>+<bsex(q1)>*Female) + \
     t2*(<Mean(q2)>+<bsex(q2)>*Female)\} + \
     0.01*sqrt(age-x_age) + 0.02*sqrt(age-x_age)*Female
You can also replace the mu by typing in a new mu equation without a part enclosed in backslashed curly braces.
solar> mu = <Mean(q1)>*(t1*<bsex(q1)>+t2*<bsex(q2)>)
When the mu has no default portion, it will not be changed if you attempt to add covariates with the covariate command. So it becomes your responsibility to add covariate terms, if required, to the mu. Any phenotypic variables in the mu will be required in the analysis sample, just as if you had used the covariate command.
It should be remembered that the default mu is displayed for illustration purposes only. By default, an internal optimized version is evaluated which should be mathematically identical to what is shown, but might be evaluated in a different order. Re-entering a default mu as a replacement mu (with no curly brace delimited part) may result in small changes in parameter estimates due to numerical limitations.
The Omega for bivariate models is quite a bit more complicated than that for univariate models, but shows underlying similarity. It requires four new pseudo-variables: ti, tj, teq, and tne:
ti    select trait for individual i
tj    select trait for individual j
teq   1 if same trait, 0 if different
tne   0 if same trait, 1 if different
There are some other differences from the univariate omega. pvar is replaced by the product of the two standard deviation terms (for the trait of each individual), and other variance components are replaced by the product of the square roots of the variance components for the trait of each individual. Then all parameters with parenthesized suffixes must be enclosed in angle brackets so they are not interpreted as function calls. So for a bivariate polygenic model, we end up with:
omega = <sd(ti)>*<sd(tj)>* \
    ( I*sqrt(<e2(ti)>)*sqrt(<e2(tj)>)*(tne*rhoe+teq) + \
      sqrt(<h2r(ti)>)*sqrt(<h2r(tj)>)*phi2*(tne*rhog+teq) )
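The structure of this omega can be sketched numerically (an illustration of the formula above; dict lookups stand in for the (ti)/(tj) parameter suffixes, and the function name is made up):

```python
from math import sqrt

def omega_entry(i, j, ti, tj, sd, e2, h2r, rhoe, rhog, phi2):
    """One element of the bivariate polygenic omega for individuals i, j
    and traits ti, tj; phi2 is the kinship-derived coefficient for i, j."""
    teq = 1 if ti == tj else 0        # same trait
    tne = 1 - teq                     # different traits
    I = 1 if i == j else 0            # identity over individuals
    env = I * sqrt(e2[ti]) * sqrt(e2[tj]) * (tne * rhoe + teq)
    gen = sqrt(h2r[ti]) * sqrt(h2r[tj]) * phi2 * (tne * rhog + teq)
    return sd[ti] * sd[tj] * (env + gen)
```

For i == j and ti == tj this reduces to the univariate variance sd^2 * (e2 + h2r); the cross-trait entries pick up the rhoe and rhog correlations instead.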
A multivariate omega for more than 2 traits is similar, but extends this further with rho parameters that are specific to each pair of traits. To refer to these parameters, pseudo-variables such as rhog_ij are created which refer to the pair of traits being tested. For example, if traits 1 and 3 are being tested, rhog_ij refers to the actual rhog_13 parameter.
omega = <sd(ti)>*<sd(tj)>* \
    ( I*sqrt(<e2(ti)>)*sqrt(<e2(tj)>)*(tne*rhoe_ij+teq) + \
      phi2*sqrt(<h2r(ti)>)*sqrt(<h2r(tj)>)*(tne*rhog_ij+teq) )
Linkage analysis with multiple traits is done the same way as with one trait. The linkmod, twopoint, and multipoint commands are available. Household effects may be included (as described in the next section).
9.3 Household Group Analysis

Household Group Analysis in SOLAR is an option which may be applied to any sporadic, polygenic, or linkage analysis. An additional variance component parameter is added to account for any shared environmental effect. (It doesn't have to be a household effect per se; it could be related to the sharing of any non-genetic factor between individuals.)

In order to add household effects, your pedigree file must contain a household identifier for each individual when the pedigree is loaded. The expected fieldname for this is hhid. If you cannot conveniently change your pedigree to use hhid for the household identifier, you can use the field command to substitute your field name. This is also useful for using another required field as household identifier. For example, sometimes mother is used as a group identifier:

solar> field hhid mo
solar> load pedigree gaw10.ped
solar> load phenotypes gaw10.phen

When the load pedigree command is performed, a matrix file named house.gz is created based on the current hhid field. If two individuals have the same hhid, their house matrix value is one; otherwise it is zero. That matrix is in the same format as the other SOLAR matrix files such as those for IBD and MIBD files.

Just before giving the polygenic command to perform a polygenic analysis, you give the house command to activate the household effect. This creates a parameter named c2 which corresponds to the fraction of the variance associated with the effect of a common environment. The polygenic and multipoint commands will detect and use the c2 parameter appropriately. The house command will also set up your omega with the required house*c2 term. To remove the household effect from a model, use the house -delete command.

solar> model new
solar> trait q1
solar> house
solar> polygenic

One feature of a household analysis is that individuals not in the same pedigree may still be in the same environmental group. Because of this, the normal way SOLAR divides the work up into pedigrees is not applicable. Pedigrees and household groups have to be merged into larger groups that contain every individual in the same pedigree as well as every individual that has the same hhid as any individual in the pedigree. These groups are called Pedigree-Household Groups. This is all done automatically, and you don't have to think about it unless there is an error. It is controlled by the option MergeHousePeds, which defaults to 1. The merging can be disabled by setting this to zero, but then only the intra-pedigree household effect will be detected. Sometimes it is better simply to include all the individuals in the pedigree file in one big group. This can be done by setting the MergeAllPeds option to 1. (It defaults to zero.)

9.4 Dominance Analysis

We don't believe that dominance components are useful in quantitative trait mapping, particularly among humans, since only bilineal relatives contribute to this component. Nevertheless, SOLAR can handle dominance analysis in polygenic, twopoint, and multipoint linkage models following procedures explained in this section. For multipoint and some twopoint cases, however, you may need to import MIBD and/or IBD matrices from some other genetics package. Actually, we recommend importing IBD and MIBD matrices in all dominance analysis cases because the exact numbers are especially important in dominance analysis. See section 5.5 for a discussion of importing IBD and MIBD matrices from other packages into SOLAR.

A delta7 matrix is required for polygenic dominance models. For quantitative models, SOLAR normally computes delta7 on-the-fly along with phi2. These coefficients can also be loaded from a phi2.gz matrix file if you explicitly load that matrix (which can be done with the loadkin command) before running polygenic. delta7 is the second column in the phi2.gz matrix file.

Linkage analysis requires d7 (which is the IBD analogue of delta7). SOLAR computes d7 as the second column in IBD matrix files only when the Curtis and Sham method is used, and this method is used only when these conditions are met:

  There is NO inbreeding.
  There is no more than one marriage loop.
  There is at least one untyped individual.

Otherwise, the Monte Carlo method is used, and d7 is not computed by SOLAR. SOLAR never computes d7 for MIBD files. For historical reasons, the second column in MIBD files is simply a copy of phi2, but this is an obsolescent feature and may be changed at any time. When MIBD files are imported from some other package, the second column will be filled with d7 if that package computes it. There is more discussion of SOLAR matrix files in sections 8.3, 8.5, and 8.6.
You will not be able to use the usual commands such as polygenic with a dominance model, because they don't currently support the d2r variance parameter. But you can do the polygenic analysis (without dominance) first, then add the dominance terms, and then compare the models with dominance and without dominance. You will need to add a d2r parameter and a delta7*d2r term to the omega, and add d2r to the constraint of variance components. In order to start the new variance parameter at 0.01, it will be necessary to subtract (or carve) 0.01 from some other variance parameter.
solar> outdir q1dom
solar> model new
solar> trait q1
solar> covar sex
solar> polygenic
[...]
solar> parameter d2r = 0.01 lower 0 upper 1
solar> parameter e2 = [expr [parameter e2 =] - 0.01]
solar> omega
omega = pvar*(I*e2 + phi2*h2r)
solar> omega = pvar*(I*e2 + phi2*h2r + delta7*d2r)
solar> constraint e2 + h2r + d2r = 1
solar> maximize
solar> save model q1dom/null0
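The bookkeeping behind "carving" can be sketched generically (an illustrative helper, not a SOLAR command): the new component starts at a small value and the donor component gives it up, so the unity constraint still holds at the starting point.

```python
def carve(components, donor, new_name, amount=0.01):
    """Start a new variance component at `amount` by carving it out of an
    existing component, keeping the components summing to one."""
    assert components[donor] >= amount, "donor too small to carve from"
    components = dict(components)
    components[donor] -= amount
    components[new_name] = amount
    assert abs(sum(components.values()) - 1.0) < 1e-9
    return components

print(carve({"e2": 0.6, "h2r": 0.4}, "e2", "d2r"))
# {'e2': 0.59, 'h2r': 0.4, 'd2r': 0.01}
```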
To extend this to twopoint linkage models, you will need to fully construct a single prototype linkage model first, because twopoint can't construct the dominance parameterization. But once you have constructed a prototype model, you can invoke twopoint with the Custom Parameterization option -cparm {}. With this option, all twopoint needs to do is substitute one matrix with another. It doesn't need to be able to construct a linkage model from the polygenic model, so the prototype linkage model can have any parameters you want. (multipoint also allows a -cparm {} option, but multipoint is not applicable to dominance models, as discussed above.) Following the -cparm argument, there must be a list of parameters you would like to print out. It could be an empty list, but in this case we have chosen to print out h2r, d2r, h2q1, and d2q1. In the example below, we are also using the -saveall option to save all twopoint models for later examination.
As shown below, we set up parameters h2q1 and d2q1 for the marker-specific additive and dominant linkages. Those parameters are added to the constraint of all variance parameters that add up to one. In order to give these new parameters a starting value of 0.01, and still maintain the constraint, 0.02 needs to be subtracted from (or carved out of) other variance parameter(s). Also, new terms need to be added to the omega for the new parameters.
solar> parameter h2q1 = 0.01 lower 0 upper 1
solar> parameter d2q1 = 0.01 lower 0 upper 1
solar> parameter h2r = [expr [parameter h2r =] - 0.01]
solar> parameter d2r = [expr [parameter d2r =] - 0.01]
solar> matrix load gaw10ibd/ibd.d9g1.gz ibd d7
solar> omega = pvar*(I*e2 + phi2*h2r + delta7*d2r + ibd*h2q1 + d7*d2q1)
solar> constraint e2 + h2r + d2r + h2q1 + d2q1 = 1
solar> maximize
solar> ibddir gaw10ibd
solar> twopoint -ov -cparm {h2r d2r h2q1 d2q1} -saveall
The same procedure could be used for multipoint, which also supports the -cparm option. (For multipoint, you would also need to give the required mibddir, chromosome, and interval commands.) To follow our conventions exactly, you would also identify the MIBD matrix as mibd1 instead of ibd. (Though, when using the -cparm option, it doesn't matter whether you follow our conventions or not.)
It is now possible for SOLAR to maximize models with a custom parameterization that is fundamentally different from our standard parameterization. It is no longer necessary for models to use the standard e2, h2r, and h2q1 variance component parameters, nor is it necessary for models to have mean and sd parameters. It is only necessary that there be mu and omega equations, and that all the terms in those equations be either parameters, matrices, data variables, or pseudo-variables defined for those equations (see their documentation for details). If there is a parameter named Mean (or Mean(<trait>) for bivariate) it will be initialized and used appropriately. Likewise for a parameter named sd. But if these parameters are not used in the mu and omega, they do not even have to be present in the model. (Note: pvar in the standard univariate mu is simply the square of parameter sd, so if pvar is used, there must be an sd parameter.)
If you wish to use covariates in the usual way, you should include a Mean parameter, and the usual default mu will be maintained automatically. If you don't use a Mean parameter, you will need to replace the default mu with some other expression to calculate the expected trait value based on covariates. The mu could be as simple as a constant number:

mu = 1.56
or as complicated as any expression you can write, including many parameters, constants, phenotypic variables for the individual, pseudo-variables (such as x_var, min_var, and max_var), any math operators and mathematical functions defined in C, and the ^ operator as a shorthand for exponentiation. For details, see the documentation for the mu command. You can display the effective mu that SOLAR generates for any model simply by setting up that model with trait and covariate commands, and then entering the command word mu by itself. The effective mu is also written to all model files (as a comment only; the mu documentation explains why).
Unfortunately, while the maximize command now supports arbitrary parameterization, some of the standard scripts, such as polygenic, do not. But this should not be a limitation for a knowledgeable SOLAR user, or perhaps one who has thoroughly read the tutorial in Chapter 3. The polygenic command basically maximizes a sporadic model, then maximizes a polygenic model, and then compares the difference in loglikelihoods. You can perform those steps with a few commands, either interactively or in a script. The main reason the polygenic command has gotten so complicated is that it needs to deal with many different options and all their combinations. For dealing with only the one situation you are interested in, it is usually quite simple to create polygenic and/or sporadic models.
If all you are interested in is a null model for linkage testing, you can, in most cases, simply set up the required polygenic model for your parameterization and maximize it. You don't necessarily need to do what the polygenic command does: start with a sporadic model, maximize it, then change the model to polygenic, then maximize that. Such a four-step process might help if it is extremely difficult to maximize the polygenic model. But usually it is not a problem to jump right in to the polygenic model, which is the approach taken for simplicity in the next section.
Both the twopoint and multipoint commands do support custom parameterization to a limited degree through the -cparm option. Whenever you use the -cparm option, it is assumed that your model does not use the standard parameters. Therefore, the commands will not look at or adjust parameters; instead they will simply replace one IBD or MIBD matrix with the next. An alternative approach is to write your own linkage model constructor and use the -link option for multipoint. That approach is discussed in the next section.
Unfortunately, SOLAR will not be able to use its normal boundary crunching and moving heuristics with arbitrary parameters. It will be up to you to examine maximized models to be sure that no parameters have ended up on a boundary (suggesting the boundary needs to be moved). It will also be up to you to set all boundaries to reasonable values in the first place, and then crunch them around the best values when there are convergence problems.
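A sketch of the kind of post-maximization check this implies (a hypothetical helper, not a SOLAR command):

```python
def params_on_boundary(estimates, bounds, tol=1e-6):
    """Return names of parameters whose maximized estimate sits on a
    boundary, suggesting the boundary should be moved before the fit
    is trusted."""
    return [name for name, est in estimates.items()
            if abs(est - bounds[name][0]) < tol
            or abs(est - bounds[name][1]) < tol]

print(params_on_boundary({"esd": 0.5, "gsd": 1.0},
                         {"esd": (0.0, 10.0), "gsd": (0.0, 1.0)}))
# ['gsd']  -- gsd hit its upper bound
```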
One custom parameterization which is very popular now has some limited SOLAR automation. That is the parameterization which uses parameters esd (environmental standard deviation), gsd (genetic standard deviation), and qsd (QTL-specific standard deviation). Note that these automation features are still considered experimental. They don't work very well yet in many cases, probably because we need to make more adjustments to them.

Starting with the clean slate provided by a model new command, you can construct a polygenic model with esd and gsd parameters using the polygsd command. Note that this is simply a polygenic model constructor which constructs the required gsd and esd parameters and gives them reasonable starting values. It does not maximize the model. You should do that as the next step. Here is an example of how you might create a polygenic model using these parameters:
solar> model new
solar> trait q4
solar> covar age sex
solar> polygsd
solar> maximize
solar> gsd2h2r
The last command shown, gsd2h2r, returns the equivalent polygenic heritability (h2r) for a polygenic model after it has been maximized. (It is not valid with linkage models.)
Your next step might be to perform a twopoint or multipoint linkage scan. But for these models, the standard linkage model construction method used by twopoint and multipoint will not work, because it assumes the presence of our standard parameters. The best approach we know of now for these models is to use one of two custom linkage model constructors we have created for this parameterization. You can specify any linkage model constructor (including one you have written yourself as a Tcl script) using the -link option of multipoint. For the esd, gsd, qsd parameterization, we have provided linkqsd and linkqsd0.
Both of these do the same thing if invoked upon a null model: they create a qsd parameter, if it does not already exist, and initialize it to a small starting value. Where they differ is in what they do if invoked upon the linkage model for the previous locus. In this case, linkqsd will simply exchange the old matrix for the new one, without adjusting any parameter values, whereas linkqsd0 will start over from the null model.
With our standard parameterization, we have generally found that it works best to take the approach used in linkqsd, that is, to carry over the parameter values used in the preceding linkage model, since there ought to be some correlation between the parameter values at one locus and the next (except at the start of each chromosome, where we start over again from the null model). Only if there is a convergence failure do twopoint and multipoint try starting the parameter values at the null model values (as one of several possible retry attempts). But in our testing so far, the linkqsd0 constructor seems to work better for models with the esd, gsd, and qsd parameterization. Here is an example (following on from the previous example above):
solar> save model q4/null0
solar> chromosome 9
solar> interval 5
solar> mibddir gaw10mibd
solar> multipoint -ov -link linkqsd0 -cparm {esd gsd qsd}
(If your custom parameterization is different, you may wish to write your own linkage model constructor following the example of linkqsd and linkqsd0, which you can display with the showproc command.)
With some custom parameterizations, you might be able to get away with not having any linkage model constructor. If you specify the -cparm option for multipoint or twopoint without the -link option, the progression through loci is done simply by substituting one matrix for another. In this case, it is necessary to start the process rolling by setting up a linkage model (for which the actual locus in your starting model is unimportant) before running multipoint. This could be done as in the following example, which is intended to follow on after the above examples (though it could also be in a new session, if you interrupted the above):
solar> load model q4/null0
solar> linkqsd gaw10mibd/mibd.9.0.gz
solar> chromosome 9
solar> interval 5
solar> multipoint -ov -cparm {esd gsd qsd}
Technically this does what it is supposed to, but it produces nothing but convergence failures after the first point, which is, not surprisingly, the same as when the -link linkqsd option is used. Also there might be some problem even at the beginning of each chromosome, since it must simply start from the parameter values from the end of the last chromosome.
Note that in all the cases above, it would be invalid to specify a LOD criterion for multiple passes. Oligogenic scanning is not yet supported for any of these custom parameterization options. However, after doing one scan, you can manually create a prototype linkage model with one more element, and scan again with the -cparm option. The IBD or MIBD matrix having the highest index will be the one that is successively replaced for scanning.