
Blarged

macrumors newbie
Original poster
Apr 14, 2008
24
0
I am given the:
  • Day
  • DayOfWeek
  • Hour
  • Milliseconds
  • Minute
  • Month
  • Second
  • Year
I need to set the OS X system clock to this time. I can do this through C++ system calls or shell calls (date). I have it from the date command all the way to the second, but the grief is from the milliseconds. Is it possible to set the system clock to the millisecond?
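
For reference, a minimal sketch of that date-to-the-second approach: assemble the argument for BSD date from the given fields and shell out to it. The function and variable names here are made up for illustration, the format string is what I believe OS X's date accepts, and the milliseconds simply have nowhere to go.

/* Minimal sketch: build a "date" command from the given fields and run it.
   Field names are placeholders for whatever the received string parses
   into; the millisecond value is dropped. */
#include <stdio.h>
#include <stdlib.h>

static int set_clock_to_second(int yr, int mon, int day,
                               int hr, int min, int sec)
{
    char cmd[64];
    /* BSD date on OS X takes the new time as [[[mm]dd]HH]MM[[cc]yy][.ss] */
    snprintf(cmd, sizeof cmd, "date %02d%02d%02d%02d%04d.%02d",
             mon, day, hr, min, yr, sec);
    return system(cmd);   /* must run as root to actually set the clock */
}

int main(void)
{
    /* example values only; note there is no field for milliseconds */
    return set_clock_to_second(2008, 4, 14, 17, 54, 23);
}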
 

lee1210

macrumors 68040
Jan 10, 2005
3,182
3
Dallas, TX
I don't know how to do this, but it piqued my interest. Where are you getting time to this level of accuracy? Via a very low latency network source? Via radio from an atomic clock? Perhaps your interest is purely academic, but I am curious if there is a practical use for this.

-Lee
 

gnasher729

Suspended
Nov 25, 2005
17,980
5,566
Just in case you find no better way, you can always just wait the right amount of time and then set the time in seconds.
 

lee1210

macrumors 68040
Jan 10, 2005
3,182
3
Dallas, TX
gnasher729 said:
Just in case you find no better way, you can always just wait the right amount of time and then set the time in seconds.

This seems like the most likely way to get this to work, but there are definitely pitfalls.

Just as an example, say the time you get from your source (assuming you have a source that can reliably give you centisecond precision, much less millisecond precision) is 17:54:23.11. So logically you want to wait .89 seconds, then set the time to 17:54:24. So how would we wait for .89 seconds?

There's no easy way I know of in standard C, but say our system provides sleep(), usleep(), or even nanosleep(), with second, microsecond, and nanosecond resolution respectively. Well, sleep() is out of the question: not precise enough. So what about usleep()? It should be fine: 89 centiseconds is 890000 microseconds, so we should be able to usleep() that long, then set the time, right? Sadly, not really. From the usleep man page:
The usleep() function suspends execution of the calling thread until
either microseconds microseconds have elapsed or a signal is delivered to
the thread and its action is to invoke a signal-catching function or to
terminate the thread or process. The actual time slept may be longer,
due to system latencies and possible limitations in the timer resolution
of the hardware.

So we know that AT LEAST 890000 microseconds have elapsed. Or maybe it was 1000000 microseconds. We just know it was no less than 890000. Hm. That's less than ideal. This then devolves into trying to usleep() in 100-microsecond intervals or so and checking the time with localtime() or something similar. Of course, just doing those actions takes a non-zero amount of time, so you're never really going to be sure. I suppose with this method you could get closer than trying to sleep the exact amount of time, but you're unlikely to get within a centisecond, much less a millisecond.
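
To make that caveat concrete, here is a small self-contained check (just gettimeofday() wrapped around the call, nothing specific to this thread) that measures what a request for 890000 microseconds actually delivers:

/* usleep() only guarantees a minimum; measure what we actually got. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval before, after;

    gettimeofday(&before, NULL);
    usleep(890000);                       /* ask for 890,000 microseconds */
    gettimeofday(&after, NULL);

    long elapsed_us = (after.tv_sec  - before.tv_sec) * 1000000L
                    + (after.tv_usec - before.tv_usec);
    /* elapsed_us will be >= 890000, but how much more depends on the
       scheduler and timer resolution, which is exactly the problem. */
    printf("asked for 890000 us, got %ld us\n", elapsed_us);
    return 0;
}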

I guess if I had to try to get this as close as possible, I'd get the localtime() right when I got the "real" time from the external source, and see how many microseconds away from the next "real" second I was. I'd then just busyloop localtime() and difftime() until I was within about 5 milliseconds to 1 centisecond of the "real" date to the second, and then call date, hoping that it takes about that much time for the set to actually take effect. You'd just have to hope for the best in terms of the CPU interrupting your program between finding you're within 5 milliseconds and you setting the date. You could get another reading from your "real" date source afterwards and compare it to localtime to ensure you got within an acceptable threshold, and if not, try again.

I wouldn't normally recommend busylooping, ever, but in this case it would be for at most 1 second, and the fact that sleep gives no guarantee of returning near the time you specify forces your hand.
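
A rough sketch of that busy-wait idea, with the caveats above: the source second and millisecond values are placeholders for whatever the real feed provides, gettimeofday() stands in for the sub-second check (localtime() only resolves to the second), and the 5 ms of slack is a guess at the cost of the date call itself.

/* Sketch of the busy-wait approach: wait until just before the source
   clock reaches the next whole second, then set the local clock to it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>

static void set_clock_to(time_t whole_second)
{
    struct tm *t = localtime(&whole_second);
    char cmd[64];
    /* BSD date format: mmddHHMMccyy.ss -- needs root to take effect */
    snprintf(cmd, sizeof cmd, "date %02d%02d%02d%02d%04d.%02d",
             t->tm_mon + 1, t->tm_mday, t->tm_hour, t->tm_min,
             t->tm_year + 1900, t->tm_sec);
    system(cmd);
}

int main(void)
{
    time_t source_sec = 1208195663;    /* placeholder: second reported by the source */
    int    source_ms  = 110;           /* placeholder: the .11 from the example */

    struct timeval start, now;
    gettimeofday(&start, NULL);        /* local moment the reading arrived */

    /* Act 1000 - source_ms milliseconds later, minus ~5 ms of slack for
       the cost of invoking date itself. */
    long target_us = (1000L - source_ms) * 1000L - 5000L;

    long waited_us;
    do {
        gettimeofday(&now, NULL);
        waited_us = (now.tv_sec - start.tv_sec) * 1000000L
                  + (now.tv_usec - start.tv_usec);
    } while (waited_us < target_us);

    set_clock_to(source_sec + 1);      /* the source clock is now at the next second */
    return 0;
}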

-Lee
 

Blarged

macrumors newbie
Original poster
Apr 14, 2008
24
0
lee1210 said:
I don't know how to do this, but it piqued my interest. Where are you getting time to this level of accuracy? Via a very low latency network source? Via radio from an atomic clock? Perhaps your interest is purely academic, but I am curious if there is a practical use for this.

-Lee

The source is a "very low latency network source," but I am not convinced of its accuracy either, because of that latency.

I might be able to calculate the latency by pinging the client, then using those 'estimates' to send a more realistic millisecond-quality time.
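
A rough sketch of that round-trip idea, shown from the client side for simplicity: query_time_server() is purely hypothetical, a stand-in for however the time string actually arrives, and half the measured round trip is credited as one-way latency.

/* Time a request/response round trip and add half of it to the reading. */
#include <stdio.h>
#include <sys/time.h>

static long long query_time_server(void)
{
    /* hypothetical stand-in: returns the server's time in epoch milliseconds */
    return 0LL;
}

static long long estimate_server_time_ms(void)
{
    struct timeval before, after;

    gettimeofday(&before, NULL);
    long long server_ms = query_time_server();
    gettimeofday(&after, NULL);

    long long rtt_us = (after.tv_sec - before.tv_sec) * 1000000LL
                     + (after.tv_usec - before.tv_usec);

    /* Assume the reading reflects the server clock at roughly the midpoint
       of the round trip, so add half the RTT to bring it up to "now". */
    return server_ms + rtt_us / 2000;
}

int main(void)
{
    printf("estimated server time: %lld ms\n", estimate_server_time_ms());
    return 0;
}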

gnasher729 said:
Just in case you find no better way, you can always just wait the right amount of time and then set the time in seconds.

This sounds like the way to go. I am sad that we seemingly can't set the time to the millisecond, but waiting for the whole second seems like the best option. I just have to figure in the latency of the network and the overhead of the wait.
 

iSee

macrumors 68040
Oct 25, 2004
3,540
272
I don't have anything to add to your actual question, but:

What are you doing this for? Depending on what you are doing, there might be a better way to accomplish your goal.
 

Blarged

macrumors newbie
Original poster
Apr 14, 2008
24
0
iSee said:
I don't have anything to add to your actual question, but:

What are you doing this for? Depending on what you are doing, there might be a better way to accomplish your goal.

I am just on the client side. I am given a string of that information and need to set the OS X client to that time. I have verified that network latency is accounted for in the time sent to the client, so I will be going with the 'wait' for the 000-millisecond moment.
 

ChrisA

macrumors G5
Jan 5, 2006
12,919
2,172
Redondo Beach, California
You can do much better than just to the millisecond; you can get to within a few microseconds. The key concept is that you are not "setting" the time. That can never work, because the "set" operation takes an unpredictable amount of time to complete, so if you do get it right it's only by luck. The way it works is that first you do "set" it, and then you watch the time to see how well it stays in sync with a known good clock. Then you make a fine adjustment by adjusting the rate faster or slower. What is a known good clock? Typically you'd use a set of them and take an average from the subset that seems self-consistent. The key here is to never "jump" the time to a new setting. If you need to set it back slightly, you slow it down and wait.
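
For what it's worth, a minimal sketch of that slew-instead-of-jump idea, using the BSD adjtime() call that OS X provides; the 3500-microsecond offset here is made up and would really come from comparing against the reference clock.

/* Slew the clock toward the reference instead of jumping it. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

int main(void)
{
    /* Suppose we measured that our clock is 3.5 ms behind the reference. */
    struct timeval delta = { .tv_sec = 0, .tv_usec = 3500 };
    struct timeval remaining;

    /* adjtime() speeds the clock up (or slows it down for a negative delta)
       until the offset is absorbed -- no discontinuous jump. Needs root. */
    if (adjtime(&delta, &remaining) != 0) {
        fprintf(stderr, "adjtime: %s\n", strerror(errno));
        return 1;
    }
    printf("previous adjustment still pending: %ld us\n",
           (long)remaining.tv_usec);
    return 0;
}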

The above is a lot of work to get right. But you don't have to do it yourself: you can run NTP, and your clock can stay synchronized to sub-millisecond accuracy. NTP ships with Mac OS X, but by default its configuration is very primitive. NTP also runs on Apple's routers. With some effort and study you can get to much better than the millisecond level. See the link below.

http://support.ntp.org/bin/view/Support/WebHome
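
As a starting point only, an ntp.conf along these lines points ntpd at several public pool servers instead of a single one; the driftfile path is a guess at the OS X default, and choosing good, nearby servers is where most of the effort and study goes.

# minimal ntp.conf sketch -- servers shown are the public NTP pool
driftfile /var/db/ntp.drift

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst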
 