Re: {MPML} Monte Carlo uncertainty estimation
Steve Chesley
Bill,
There are roughly five means of mapping an orbital solution with uncertainty to a given time, each with a fairly well-defined realm of utility.

1) Linear covariance mapping. Take the covariance that you get (or should have gotten!) from the least squares orbit solution and map this to the time and reference frame of your choice via the state transition matrix, which is usually obtained by integrating the variational equations but can also be computed with finite differences. This is not for the faint of heart when it comes to impenetrable jargon, but is really not as complicated as it might sound. Linear methods are appropriate when the uncertainty is "small." Typically, when uncertainties become on the order of a degree on the sky, or a substantial fraction of an AU in space, this method falls apart. This failure can usually be traced to a weak orbit, a long propagation, a very deep close approach, or some combination of these. Multi-opposition and radar astrometry orbits are almost always amenable to this approach.

2) Orbital Monte Carlo. When the orbit itself is fairly good, meaning that the uncertainty region is small enough that it really looks like an ellipsoid (not a banananoid) in orbital element space, but the propagation is substantially nonlinear, then Monte Carlo (MC) sampling in element space is the way to go. This amounts to adding noise that is consistent with the covariance matrix to the nominal orbital elements. Do this many times and you get an ellipsoidal cloud of points in space around the time of the observations. Andrea Milani likes to call these "Virtual Asteroids" because the real asteroid could be represented by any of the Monte Carlo samples. Now if you propagate all of the MC points to the time of interest, the cloud will eventually deform to look like a banana after many revolutions, or like a corkscrew if there is a close planetary encounter. Any nonlinearities stemming from the propagation (close encounters, Keplerian shear, etc.) will be handled properly by this method. This method is very simple to implement, but requires a good deal of computer horsepower relative to the linear method. For NEAs, orbital MC sampling is usually appropriate when the observed arc is a few weeks or more.

3) Observational Monte Carlo. When the observed arc is very short, the uncertainty in the original orbit determination is so large that you cannot assume that the ellipsoid represented by the covariance adequately represents the true uncertainty. Monte Carlo sampling can still be done, but this time noise must be added to each of the observations and a new orbit computed from the revised observations. This new orbit becomes a Virtual Asteroid as above. The drawback is that you have to solve the least squares problem for every single MC orbit with this approach, which can be time consuming, and so the MC sampling process is far slower. This can be a big deal if you are running many millions of orbits, but in practice the time spent propagating each sample is long relative to the MC sampling time, and so the propagation time is what limits your ability to take a lot of samples. This method is what Bill describes and is, I understand, already a part of John Rogers' CAA software. It is generally suitable for NEAs with at least a few days of observations, provided the least squares problem is convergent. If there are distant alternate solutions, as often happens for short-arc objects discovered near the Sun, this method is _unlikely_ to reveal those alternate solutions.

4) Statistical Ranging. This is really the only reliable means of computing orbits with very short arcs, ranging from a few minutes to a few days. This approach is also Monte Carlo in style, but it randomly samples two observations from the available set and selects two random topocentric distances at the observation times. From two obs and two distances you get an orbit, and that's your Virtual Asteroid.
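To make methods 1) and 2) concrete, here is a minimal sketch in Python/numpy. Everything numerical here is an invented toy value: the 6-D state, its covariance, and the state transition matrix (which in real work would come from integrating the variational equations, not from random numbers). It only illustrates the two operations described above: mapping the covariance with the STM, and drawing covariance-consistent Virtual Asteroids.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nominal 6-D state (position + velocity) and its least-squares
# covariance at epoch. Real values would come from the orbit fit.
x0 = np.array([1.0, 0.2, 0.0, 0.0, 1.0, 0.1])
C0 = np.diag([1e-6, 1e-6, 1e-6, 1e-8, 1e-8, 1e-8])

# 1) Linear covariance mapping: C_t = Phi C0 Phi^T, where Phi is the
# state transition matrix. Here Phi is a made-up stand-in near identity.
Phi = np.eye(6) + 0.01 * rng.standard_normal((6, 6))
C_t = Phi @ C0 @ Phi.T

# 2) Orbital Monte Carlo: add noise consistent with the covariance to the
# nominal state. Each row is one "Virtual Asteroid" you would propagate.
samples = rng.multivariate_normal(x0, C0, size=1000)
```

In the linear method only the single matrix product is propagated; in the MC method each of the 1000 sample states would be numerically integrated, which is where the extra computer horsepower goes.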
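The resampling structure of method 3) can be sketched the same way. To keep it self-contained, the "orbit determination" below is stood in for by an ordinary linear least-squares fit to toy data; the point is only the loop: perturb the observations with noise at their assumed accuracy, refit, and treat each refit as a Virtual Asteroid. Real code would redo the full nonlinear orbit fit on each pass.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "observations": a linear signal with known per-point noise sigma.
t = np.linspace(0.0, 10.0, 20)
sigma = 0.1
y_obs = 3.0 + 0.5 * t + sigma * rng.standard_normal(t.size)

# Design matrix for the stand-in least-squares problem (intercept, slope).
A = np.column_stack([np.ones_like(t), t])

virtual_solutions = []
for _ in range(500):
    # Add observation-level noise and re-solve the fit each time.
    y_mc = y_obs + sigma * rng.standard_normal(t.size)
    params, *_ = np.linalg.lstsq(A, y_mc, rcond=None)
    virtual_solutions.append(params)
virtual_solutions = np.array(virtual_solutions)

# Empirical spread of the fitted parameters across the MC solutions.
spread = virtual_solutions.std(axis=0)
```

The refit inside the loop is what makes this method slower than sampling in element space, as noted above, though for real orbits the propagation of each sample usually dominates anyway.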
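The sampling step of statistical ranging can be sketched as follows. The two observations and the observer positions are invented toy values, and the final step of a real implementation — solving for the orbit that connects the two heliocentric positions at the two times (e.g., via Lambert's problem) — is deliberately omitted; the code only shows how a random topocentric distance turns an angles-only observation into a position in space.

```python
import numpy as np

rng = np.random.default_rng(1)

def unit_vector(ra, dec):
    """Topocentric line-of-sight unit vector from RA/Dec (radians)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

# Two toy observations: (time [days], RA [rad], Dec [rad]).
obs = [(0.0, 0.10, 0.02), (1.0, 0.12, 0.03)]
# Toy heliocentric observer positions at those times (AU).
observer = [np.array([1.0, 0.0, 0.0]), np.array([0.999, 0.017, 0.0])]

def sample_positions(rho_min=0.01, rho_max=5.0):
    """Draw a random topocentric range for each observation and return
    the two heliocentric positions of the candidate Virtual Asteroid."""
    positions = []
    for (t, ra, dec), r_obs in zip(obs, observer):
        rho = rng.uniform(rho_min, rho_max)          # sampled range (AU)
        positions.append(r_obs + rho * unit_vector(ra, dec))
    return positions

p1, p2 = sample_positions()
# A real implementation would now fit an orbit through p1 and p2 and keep
# it only if it is consistent with all the observations.
```

Because the ranges are drawn freely over a wide interval rather than from a covariance, this is what lets the method find alternate solutions that covariance-based sampling misses.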
There are a host of variations on this method: you can also add noise to your sampled observations if you like. Dave Tholen and Rob Whiteley, working independently from Virtanen et al., have implemented a method that fits an orbit to all the available observations with the topocentric distance constraints applied. Or something like that. Statistical ranging _will_ reveal alternate solutions and will give robust uncertainty regions, which in some cases can be really wild looking.

5) Multiple Solutions. This is the method popularized by, if not invented by, Andrea Milani. It maps the spine of the elongated uncertainty region at epoch, and so it is a one-dimensional sampling, which substantially cuts the CPU requirements. But it is perhaps the most complicated of the methods, and I'm starting to realize this message is getting far too long, so I'll only say that this method is at the core of both of the automatic impact monitoring systems currently in operation (Sentry & NEODyS). Objectively, this approach has fairly limited utility due to its complexity.

Each of the above methods has a fairly specific region where it is the most appropriate, but there is still a good amount of overlap. Methods 1-4 can be viewed as providing increasing power at the expense of simplicity and speed. Using statistical ranging to compute uncertainties on multi-opposition orbits would be crazy, not unlike using a sledgehammer to drive a brad. Elegance requires that you use the simplest method that is suitable, but, frankly, method 3) will work reasonably well for virtually all cases. And, yes, Bill, it's really that simple!

6) Did I say five? Well, there is also the semilinear method. But you definitely don't want to go there. It's more complicated than multiple solutions! I'm just adding this to keep out of trouble with Andrea Milani. ;)

Steve Chesley
Navigation & Mission Design Section, MS 301-150
Jet Propulsion Laboratory
Pasadena, California 91109
(818) 354-9615, Fax: 393-6388

E. L. G. Bowell wrote:
Bill:

