These instructions are aimed mostly at the Catalina Sky Survey folks. Some of this may help you out, especially if you're also running CentOS and/or doing things in a CSS-like manner. But if you aren't at CSS, you probably want the more 'general-purpose' instructions for building Find_Orb and related tools instead.
After talking things over with Alex shortly before I returned to Maine, I realized that the installation wasn't going to work in quite the manner I'd envisioned. I usually install the source code on each of my computers and build it there. Alex suggested that for CSS, it might make more sense to install the source code on one machine, build it there, and copy the executables and data files to the other machines. That may make it a little easier to maintain and update the code on a variety of machines. It could be difficult if you had a lot of different Linux distributions running, but it sounds as if you're just running CentOS, with slight differences in versions. I don't think binary compatibility will be a problem.
So the following will be split between "how do you download the source code and build it" and "OK, now that we've got it built on machine A, what bits do we need to copy to machines B, C, etc.?"
(If you're doing this on Chicxulub, or however you spell it, be advised that I did about 99% of this; skip to the "on Chicxulub" section. It might be a good idea to make that your "machine A".)
The tools are spread out among several Git repositories. The 'lunar' and 'jpl_eph' ones have the underlying code for 'integrat' and 'astcheck'. Other repositories are then needed to add 'sat_id' and both 'find_orb' and its non-interactive sibling 'fo'.
The procedure for building and installing any of these Git repositories is roughly similar, but there are some nuances.
First, be advised: to build the code, you'll need the 'ncurses-devel' package. (It's not needed to _run_ the code, just to build it. That is to say, only machine A needs to know about this.) If you end up using my tools for grabbing astrometry from MPC (more on those later), you'll also need the 'libcurl-devel' package (cURL being the library used for accessing files on-line). Actually, I'd install libcURL anyway; it's possible that future Find_Orbs will use it to update various files (the list of MPC stations, earth orientation parameters, etc.) Running
sudo yum install ncurses-devel
sudo yum install libcurl-devel
will get you what you'll need.
Second, I suggest keeping the code for these tools in a convenient place, by doing something such as
mkdir ~/bill
cd ~/bill
(which is where I put them on Chicxulub). Next, 'git clone' the code. For simplicity, I'd just grab all the code, including 'miscell' :
git clone https://github.com/Bill-Gray/lunar.git
git clone https://github.com/Bill-Gray/sat_code.git
git clone https://github.com/Bill-Gray/jpl_eph.git
git clone https://github.com/Bill-Gray/find_orb.git
git clone https://github.com/Bill-Gray/miscell.git
cd jpl_eph
make libjpl.a
make install
cd ../lunar
make
make integrat
make install
Note that you'll get some 'cannot create directory: File exists' and similar errors. These aren't actually a problem. I haven't figured out how to suppress the warnings yet.
The first 'make' for the lunar project builds a library of basic astronomical functions and some utilities based upon them, including 'astcheck'. 'make integrat' separately compiles and links the Integrat code.
For all projects, 'make install' puts relevant executables in ~/bin, some .h files into ~/include, and some libraries in ~/lib.
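Note that ~/bin isn't on the PATH on every system. A quick, generic shell check (my own snippet, not part of the build system) makes sure the freshly installed executables can be run from anywhere:

```shell
# Add ~/bin to PATH if it is not already there, so the installed
# executables (astcheck, fo, integrat, etc.) are found from any directory.
case ":$PATH:" in
  *":$HOME/bin:"*) ;;               # already on the PATH; nothing to do
  *) PATH="$HOME/bin:$PATH" ;;
esac
export PATH
```

Put the same lines in ~/.bashrc if you want the change to survive a new login.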
For 'sat_id':
cd ~/bill/sat_code
make sat_id
make install
For 'find_orb' and 'fo' :
cd ~/bill/find_orb
make
make install
cd ~/.find_orb
wget ftp://ssd.jpl.nasa.gov/pub/eph/planets/Linux/de430t/linux_p1550p2650.430t
(The last two lines put you into Find_Orb's configuration directory and download a JPL ephemeris file there. The file in question is 98 655 648 bytes, and covers years 1550 to 2650. All you have to do is to put the file there; Find_Orb will look for it when it starts up.)
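Since the download is large, it's worth verifying it completed. The 'check_size' helper below is my own illustration (not something Find_Orb provides); it just compares the byte count against the figure given above:

```shell
# Return success only if the file exists and has exactly the expected
# number of bytes. The DE-430t file should be 98 655 648 bytes.
check_size() {
  [ -f "$1" ] && [ "$(wc -c < "$1")" -eq "$2" ]
}

if check_size "$HOME/.find_orb/linux_p1550p2650.430t" 98655648 ; then
  echo "ephemeris file looks complete"
else
  echo "ephemeris file missing or truncated; re-run the wget above"
fi
```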
At least for the nonce, I'm going to assume that CSS won't have much need for/interest in the 'miscell' project. It does contain four programs that may prove useful at some point or another :
Alex tells me that the machines at CSS are usually run from a single user account. If you have more than one user, you _can_ run 'sudo make install GLOBAL=Y' instead of 'make install'. "Normally", include files, libraries, and (most crucially from the end user perspective) executables would go into ~/include, ~/lib, and ~/bin respectively. With the GLOBAL=Y option, those files go into /usr/local/include, /usr/local/lib, and /usr/local/bin instead. That makes them available to everybody. But you do have to 'sudo' to do it.
Before I left CSS, I got _almost_ everything working on Chicxulub... except for interactive Find_Orb, because that machine lacked the ncurses-devel library. So you should run the sudo yum install commands listed above. Since I've made some improvements since returning from CSS, you should then use the following :
From time to time, as changes are made to the code, you'll have to run some or all of the following :
cd ~/bill/lunar
git pull
make
make integrat
make install
cd ~/bill/sat_code
git pull
make
make install
cd ~/bill/jpl_eph
git pull
make
make install
cd ~/bill/find_orb
git pull
make
make install
I say "some or all" because I won't necessarily have updated absolutely everything. If you run 'git pull' and no changes are pulled, you might as well skip 'make' and 'make install'. (The JPL ephemeris and satellite code libraries are pretty darn stable at this point and are rarely updated (though I did have to update the JPL ephemeris library recently to accommodate DE-436 and DE-436t), and the 'lunar' library of basic astronomical functions only gets the occasional update.)
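If you end up doing this often, the whole update pass can be scripted. This is only a sketch of the steps above; the 'up_to_date' helper is mine, and it simply greps for the message git prints when nothing new was pulled (both the older and newer phrasings):

```shell
# Pull each repository and rebuild only if something actually changed.
# 'lunar' needs its extra 'make integrat' step, as noted above.
up_to_date() { grep -q -i 'already up[ -]to[ -]date' ; }

for repo in lunar sat_code jpl_eph find_orb ; do
  cd ~/bill/$repo || continue
  if git pull | up_to_date ; then
    echo "$repo: no changes; skipping build"
  else
    make
    [ "$repo" = lunar ] && make integrat
    make install
  fi
done
```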
You should copy the ~/.find_orb directory to the other machines, along with the following executables from ~/bin :
astcheck
find_orb
fo
integrat
sat_id
The behavior of 'integrat' will be essentially that to which you have become accustomed. You specify an input MPCORB.DAT-formatted file, an output file name, and the epoch to which the elements are to be integrated. Optionally, you can specify the path to a JPL ephemeris file. For example :
integrat MPCORB.DAT new_mpcorb.dat 2017 Jul 13 -z3 -f ~/.find_orb/*.430t
would integrate the given MPCORB.DAT to 2017 July 13, putting the result in 'new_mpcorb.dat'. The computation would be split up over three processes; if you have three or more cores, it'll run about three times faster than it otherwise would have. Planetary positions for the numerical integration will be computed using the same JPL file as was copied above for Find_Orb.
If the output file already exists, 'integrat' assumes you're doing an update. It is bright enough to look through the existing file and say, while reading a (possibly much-revised) input: "The following lines didn't change. This object has the same observations and orbit as before. So why bother re-integrating it? Just recycle the previous result."
This is somewhat safer than you might think; the program is conservative about what it's willing to recycle. However, in practice, the recycling can save a lot of time; most orbits don't update all that frequently.
The way you run 'sat_id' hasn't changed much. The main differences are that it can check multiple sources for possible identifications, and that it uses motion information instead of just relying on how close your astrometry is to a predicted artsat position. As a result, it can run in an automated fashion: in the tests I ran, it did a basically flawless job of recognizing artsat astrometry as belonging to artsats, without falsely flagging rocks as artsats.
You can run, for example,
sat_id astrometry.txt -t ~/tles/ALL_TLE.TXT
to have it check astrometry.txt against all satellites in ~/tles/ALL_TLE.TXT. However... as you know, ALL_TLE.TXT just gets you what JSpOC tracks and releases; to fully check artsats, you need data on classified objects and the TLEs that I've produced for high-fliers for which nobody else (as far as I know) produces elements.
I've recently started storing "my" TLEs on GitHub. I would recommend cloning this repository to your home folder, and then running git pull from time to time to get updates. This, of course, has the advantage that git will only grab updates.
If you have git-less machines, as I gather you do, you can either copy ~/tles from one machine to the rest, or download the ZIP-ball from the repository.
Note that the above contains only the TLEs computed by me. You will also need to download the ALL_TLE.TXT file, the integrated classified elements, and the 'full' classified elements, and unZIP them into ~/tles as well.
At that point, if you run
sat_id astrometry.txt -t ~/tles/tle_list.txt
sat_id will check artsats against pretty much everything we know about. (If you look at ~/tles/tle_list.txt, you'll see that about all it does is to list which TLEs ought to be run through.)
I have a cron job to grab ALL_TLE.TXT daily and the classified files weekly. The files I've provided for high-fliers run out for (in most cases) the rest of this year, and longer for some more stable objects.
Find_Orb and 'fo' should be much the same as before, except that they now look for all configuration and ephemeris files in ~/.find_orb and can be run from anywhere, which should be more convenient. Also, Alex mentioned that the program was hanging on some input; I saw that too, a couple of years ago, but haven't seen it in an extra-special long time.
As before, you can run either of
find_orb (input astrometry file)
fo (input astrometry file)
Important tip: Add the -c command line option to either program to "combine" all observations in the file, so they're treated as if they are of a single object. Kinda handy when you're wondering if object X is the same as object Y, and you don't want to have to edit designations.
For fo (but not find_orb), one can add the -p(number) option to tell the program to split the objects over (number) cores. Probably not something you'll need to do; I found it very handy when, say, processing a few hundred thousand objects from the MPC's UnnObs.txt file.
Also fo-only : by default, the output to the screen uses some colors to draw your attention to unusually low MOIDs and comet-like eccentricities and inclinations. This is fine on-screen, but if you direct it to a file, you'll collect escape codes that would confuse anything trying to process the file. Add the -v command line switch to suppress these codes.
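If you do forget -v and end up with color codes in a file, they can be stripped after the fact. This is a generic sed one-liner of my own, not a Find_Orb feature:

```shell
# Remove ANSI color sequences of the form ESC [ ... m from stdin.
ESC=$(printf '\033')
strip_ansi() { sed "s/${ESC}\[[0-9;]*m//g" ; }

# Example use:  fo astrometry.txt | strip_ansi > results.txt
# (though adding -v in the first place is the cleaner fix)
```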
(Note to self: dump residuals, in RA/dec and time and cross-track and in magnitude, to a standardized, computer-friendly format suitable for Multicheck and whoever else could use it)