Timing of galfa time stamps

03nov04

    The galfa spectrometer uses a 1 second tick interrupt to time stamp the data samples. When the interrupt occurs, a flag is set and a thread is then run to record the ntp time. The time is recorded in the variable g_time[2], with the first element being the seconds from 1970 (gmt) and the second element the microseconds since the tick. The time stamp can be off because the ntp time has drifted and/or the thread that records the time stamp does not run immediately after the one second tick. To get the start time of each integration, the double precision number g_time[0] + g_time[1]*1d-6 should be rounded to the nearest integral second. This works as long as the delay from hardware tick to time stamp recording is less than .5 seconds (and the ntp time has not drifted too much).
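A minimal sketch of the rounding described above (in python rather than the idl used for the actual processing; the g_time values here are made up for illustration):

```python
# g_time[0]: seconds since 1970 (gmt); g_time[1]: microseconds since the tick.
# Hypothetical example: the tick second plus a 123 ms recording delay.
g_time = (1099008000, 123456)

# Combine into a double precision time and round to the nearest integral second.
t = g_time[0] + g_time[1] * 1e-6
tick_second = round(t)

print(tick_second)  # recovers g_time[0] since the delay is under .5 seconds
```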
    Galfa data was examined to see what the spread in these time stamps was. Files from 29, 30, and 31Oct04 were used to measure the spread. Each file had 600 contiguous 1 second samples (except for one file that had a jump in time). Each file was run through the processing routine listed below. The plots show the variation of the time stamps (.ps)  (.pdf) for the 30 files measured on the 29th (black), 30th (red), and 31st (green) of oct04. It is interesting that some of the jumps occurred at the same relative file number night to night. It may have something to do with network activity (backups??) done at the same time of night. The drifting of the time stamp over a night may be due to ntp drift (although the drift seems large for ntp). The 600 millisecond peaks mean that you cannot just round g_time[0] + g_time[1]*1d-6 to the closest integer and assume you are within .5 seconds of the correct value.
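The failure mode from the 600 millisecond peaks can be seen directly; a sketch (python, with a made-up g_time value):

```python
# With a 600 ms delay between the hardware tick and the time stamp thread,
# rounding to the nearest integral second lands on the wrong tick.
g_time = (1099008000, 600000)  # tick second plus a 600 ms recording delay

t = g_time[0] + g_time[1] * 1e-6
tick_second = round(t)

print(tick_second - g_time[0])  # off by 1 second: rounding went to the next tick
```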
 
processing: usr/a1943/timing/chktime.pro

