Re: {MPML} Monte Carlo uncertainty estimation

Steve Chesley
 

Bill,

There are roughly five ways of mapping an orbital solution and its uncertainty to a given time, each with a fairly well-defined realm of utility.

1) Linear covariance mapping. Take the covariance that you get (or should have gotten!) from the least-squares orbit solution and map it to the time and reference frame of your choice via the state transition matrix, which is usually obtained by integrating the variational equations but can also be computed with finite differences. This is not for the faint of heart when it comes to impenetrable jargon, but it is really not as complicated as it might sound.

Linear methods are appropriate when the uncertainty is "small." Typically, when uncertainties grow to the order of a degree on the sky, or to a substantial fraction of an AU in space, this method falls apart. This failure can usually be traced to a weak orbit, a long propagation, a very deep close approach, or some combination of these. Multi-opposition and radar-astrometry orbits are almost always amenable to this approach.
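
For concreteness, here is a rough Python sketch of the finite-difference route. The propagate(x, t0, t1) routine is assumed to be your own integrator (it is not any particular package); the variational-equations approach is the more usual one in practice.

    import numpy as np

    def state_transition_matrix(propagate, x0, t0, t1, eps=1e-7):
        """Approximate the state transition matrix by finite differences.
        propagate(x, t0, t1) is assumed to integrate a 6-element state
        (or element set) from epoch t0 to time t1 and return the new state."""
        x0 = np.asarray(x0, dtype=float)
        n = len(x0)
        phi = np.zeros((n, n))
        ref = np.asarray(propagate(x0, t0, t1), dtype=float)
        for j in range(n):
            dx = np.zeros(n)
            dx[j] = eps * max(abs(x0[j]), 1.0)     # small perturbation in component j
            pert = np.asarray(propagate(x0 + dx, t0, t1), dtype=float)
            phi[:, j] = (pert - ref) / dx[j]
        return phi

    def map_covariance(P0, phi):
        """Linear mapping of the epoch covariance: P(t1) = Phi P0 Phi^T."""
        return phi @ P0 @ phi.T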

2) Orbital Monte Carlo. When the orbit itself is fairly good, meaning that the uncertainty region is small enough that it really looks like an ellipsoid (not a banananoid) in orbital element space, but the propagation is substantially nonlinear, then Monte Carlo (MC) sampling in element space is the way to go. This amounts to adding noise that is consistent with the covariance matrix to the nominal orbital elements. Do this many times and you get an ellipsoidal cloud of points in space around the time of the observations. Andrea Milani likes to call these "Virtual Asteroids" because the real asteroid could be represented by any of the Monte Carlo samples.

Now if you propagate all of the MC points to the time of interest, the cloud will eventually deform to look like a banana after many revolutions, or like a corkscrew if there is a close planetary encounter. Any nonlinearities stemming from the propagation (close encounters, Keplerian shear, etc.) will be handled properly by this method. This method is very simple to implement, but it requires a good deal of computer horsepower relative to the linear method.

For NEAs, orbital MC sampling is usually appropriate when the observed arc is a few weeks or more.
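
A minimal sketch of the sampling step, again assuming your own propagator plus a nominal element set x0 and its 6x6 covariance P0 (names chosen here just for illustration):

    import numpy as np

    def orbital_monte_carlo(x0, P0, propagate, t0, t1, n_samples=1000, seed=None):
        """Draw 'virtual asteroids' consistent with the epoch covariance and
        propagate each one to the time of interest."""
        rng = np.random.default_rng(seed)
        # Noise added to the nominal elements, consistent with the covariance matrix
        samples = rng.multivariate_normal(mean=np.asarray(x0, float), cov=P0,
                                          size=n_samples)
        # Each sample is propagated independently; nonlinearities are handled naturally
        return np.array([propagate(x, t0, t1) for x in samples])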

3) Observational Monte Carlo. When the observed arc is very short, the uncertainty in the original orbit determination is so large that you cannot assume that the ellipsoid represented by the covariance adequately represents the true uncertainty. Monte Carlo sampling can still be done, but this time noise must be added to each of the observations and a new orbit computed from the revised observations. This new orbit becomes a Virtual Asteroid, as above. The drawback is that with this approach you have to solve the least-squares problem for every single MC orbit, which can be time-consuming, and so the MC sampling process is far slower. This can be a big deal if you are running many millions of orbits, but in practice the time spent propagating each sample is long relative to the MC sampling time, and so the propagation time is what limits your ability to take a lot of samples.

This method is what Bill describes and is, I understand, already a part of John Rogers' CAA software. It is generally suitable for NEAs with at least a few days of observations, provided the least-squares problem is convergent. If there are distant alternate solutions, as often happens for short-arc objects discovered near the Sun, this method is _unlikely_ to reveal them.
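
In rough Python, with fit_orbit() standing in for your own least-squares solver and the RA/Dec/sigma arrays in radians (all of these names are assumptions, not any particular package), the loop is something like:

    import numpy as np

    def observational_monte_carlo(ra, dec, sigma, fit_orbit, n_samples=1000, seed=None):
        """Perturb every observation by Gaussian noise scaled to its assumed
        astrometric uncertainty, then re-solve the least-squares problem.
        fit_orbit(ra, dec) is a placeholder for your orbit-determination code."""
        rng = np.random.default_rng(seed)
        ra, dec, sigma = map(np.asarray, (ra, dec, sigma))
        virtual_asteroids = []
        for _ in range(n_samples):
            noisy_ra = ra + rng.normal(0.0, sigma) / np.cos(dec)   # on-sky noise mapped to RA
            noisy_dec = dec + rng.normal(0.0, sigma)
            orbit = fit_orbit(noisy_ra, noisy_dec)
            if orbit is not None:                                  # drop non-convergent fits
                virtual_asteroids.append(orbit)
        return virtual_asteroids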

4) Statistical Ranging. This is really the only reliable means of computing orbits with very short arcs, ranging from a few minutes to a few days. This approach is also Monte Carlo in style, but it randomly samples two observations from the available set and selects two random topocentric distances at the observation times. From two observations and two distances you get an orbit, and that's your Virtual Asteroid. There are a host of variations on this method: you can also add noise to your sampled observations if you like. Dave Tholen and Rob Whiteley, working independently of Virtanen et al., have implemented a method that fits an orbit to all the available observations with the topocentric distance constraints applied. Or something like that.

Statistical ranging _will_ reveal alternate solutions and will give robust uncertainty regions, which in some cases can be really wild looking.
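
Schematically, with every astrometric helper left as an assumption rather than a real routine, the ranging loop looks something like this (observations are assumed to carry a time attribute .t):

    import numpy as np

    def statistical_ranging(obs, radec_to_position, orbit_from_two_positions,
                            residual_rms, rho_min=0.01, rho_max=5.0,
                            n_trials=100_000, rms_accept=1.0, seed=None):
        """Schematic statistical-ranging loop.  The helpers are stand-ins:
          * radec_to_position(ob, rho): one observation plus a trial
            topocentric distance rho (AU) -> heliocentric position vector
          * orbit_from_two_positions(r1, t1, r2, t2): two positions and
            their times -> an orbit, or None if no solution
          * residual_rms(orbit, obs): RMS of the O-C residuals against
            *all* the observations
        Trial orbits with acceptable residuals are kept as virtual asteroids."""
        rng = np.random.default_rng(seed)
        accepted = []
        for _ in range(n_trials):
            i, j = rng.choice(len(obs), size=2, replace=False)     # two random observations
            rho_i, rho_j = rng.uniform(rho_min, rho_max, size=2)   # two random distances (AU)
            r_i = radec_to_position(obs[i], rho_i)
            r_j = radec_to_position(obs[j], rho_j)
            orbit = orbit_from_two_positions(r_i, obs[i].t, r_j, obs[j].t)
            if orbit is not None and residual_rms(orbit, obs) < rms_accept:
                accepted.append(orbit)
        return accepted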

5) Multiple Solutions. This is the method popularized by, if not invented by, Andrea Milani. It maps the spine of the elongated uncertainty region at epoch, and so it is a one-dimensional sampling, which substantially cuts the CPU requirements. But it is perhaps the most complicated of the methods, and I'm starting to realize this message is getting far too long, so I'll only say that this method is at the core of both of the automatic impact monitoring systems currently in operation (Sentry & NEODyS). Objectively, this approach has pretty limited utility due to its complexity.
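
As a caricature only: stepping along the largest eigenvector of the epoch covariance gives a straight-line version of the idea. Milani's real Line Of Variations follows a curved line constrained by the least-squares fit, so treat this purely as an illustration of one-dimensional sampling.

    import numpy as np

    def line_of_variations_samples(x0, P0, n_points=41, sigma_max=3.0):
        """Crude stand-in for the Line Of Variations: step along the
        direction of largest uncertainty, out to +/- sigma_max sigma."""
        x0 = np.asarray(x0, dtype=float)
        eigval, eigvec = np.linalg.eigh(P0)
        k = int(np.argmax(eigval))                  # weakest-constrained direction
        step = np.sqrt(eigval[k]) * eigvec[:, k]    # one-sigma step along that axis
        sigmas = np.linspace(-sigma_max, sigma_max, n_points)
        return np.array([x0 + s * step for s in sigmas])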

Each of the above methods has a fairly specific region where it is the most appropriate, but there is still a good amount of overlap. Methods 1-4 can be viewed as providing increasing power at the expense of simplicity and speed. Using statistical ranging to compute uncertainties on multi-opposition orbits would be crazy, not unlike using a sledgehammer to drive a brad. Elegance requires that you use the simplest method that is suitable, but, frankly, method 3) will work reasonably well for virtually all cases. And, yes, Bill, it's really that simple!

6) Did I say five? Well, there is also the semi-linear method. But you definitely don't want to go there. It's more complicated than multiple solutions! I'm just adding this to keep out of trouble with Andrea Milani. ;-)

-Steve Chesley
--
Navigation & Mission Design Section, MS 301-150
Jet Propulsion Laboratory
Pasadena, California 91109
(818) 354-9615, Fax: 393-6388


E. L. G. Bowell wrote:

Bill:
You are essentially describing a technique that has already been published under the name of statistical ranging. The primary reference is Virtanen et al. (Icarus 154, 412, 2001), and there is additional work in Muinonen et al. (Celest. Mech. and Dyn. Astron. 81, 93, 2001). There is an application to TNOs by Virtanen et al. (Icarus, in press), and a URL on same at http://asteroid.lowell.edu/cgi-bin/virtanen/tnoeph. There will also be a description of statistical ranging and other recent orbit methods in Bowell et al. (in Asteroids III, U. Arizona Press, 2003).
Cheers...Ted


Hi folks,

I'm pondering adding a Monte Carlo routine to my orbit determination
code, for uncertainty determination. If I understand it properly,
this is a very straightforward process:

(1) Add some Gaussian noise to your original observations, in both
RA and dec (and perhaps in time, as well... important for VFMOs.)
The amount of the noise should reflect the assumed uncertainties in
the observations.

(2) Solve for the resulting "fuzzified" orbit.

(3) Repeat until you've got a few zillion almost, but not quite,
identical orbits. This may range from a "go away for a cup of
coffee" to a "let the process run in the background for a few
days" kind of job.

(4) To illustrate the uncertainty area for a given date, just
show your zillion or so simulated objects. They'll appear as a
"cloud", and if you've got enough of them, their density will make
the likely placement of the target object apparent.

Is it really that easy, or am I missing something?

(If it _is_ this easy, I'm gonna be kicking myself for not having
done it a long time ago.)

-- Bill
