**Update**
OK, so it has been a while, but I have finally gotten around to making a clock using the DIY RTC methods described here. The next post is titled "Nixie Tube Clock" rather than "Real Time Clock - Part 2" as one might expect. Nevertheless, here is an example of a clock using the method described: http://petemills.blogspot.com/2015/05/nixie-tube-clock.html
Firstly, this is going to be a microcontroller based implementation of a real time clock. Here, "Real Time Clock (RTC)" means only: a device used to keep track of time in human readable units. Basically, it's a clock clock. Since my finished clock will have other tasks to perform in addition to keeping track of time, such as sounding an alarm or initiating an elaborate Rube Goldberg machine responsible for starting my car in the winter months, generating my 1 Hz timebase on the uC is a logical choice.
Independent RTC ICs:
I didn't immediately dismiss the idea of separate RTC ICs. They look so promising! They keep track of everything one could hope for! That last statement is A) wrong and B) misleading even if it were true. From the DS1307 product page: "Real-Time Clock (RTC) Counts Seconds, Minutes, Hours, Date of the Month, Month, Day of the week, and Year with Leap-Year Compensation Valid Up to 2100". But wait a minute: if I have an accurate timebase inside my microcontroller, several lines of code later I can have all of that functionality without adding an extra part or adding its arguably small price to the project. I think the DS1307 is about $1.00. Besides, I am making a clock; if I wanted an off the shelf solution I'd hop in my Koenigsegg and head over to IKEA for a Slabang and some meatballs.
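To back up that "several lines of code" claim, here is a rough illustration of the calendar bookkeeping in C. It is not code from my clock, just a sketch of how little is involved:

    #include <stdint.h>

    // Leap year test; the simple modulo-4 rule is valid for 2001-2099,
    // which is the same sort of range the DS1307 advertises.
    static uint8_t is_leap_year( uint16_t year )
    {
        return ( year % 4 == 0 );
    }

    // Days in a given month (month = 1..12)
    static uint8_t days_in_month( uint8_t month, uint16_t year )
    {
        static const uint8_t dim[ 12 ] = { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };

        if( ( month == 2 ) && is_leap_year( year ) )
            return 29;

        return dim[ month - 1 ];
    }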
Another nail in the separate RTC IC coffin is that the DS1307 runs from a standard 32768 Hz watch crystal. If I really wanted to use a watch crystal (which I don't, particularly), it would seem obvious to hook it up to the asynchronous timer on the ATMega328 that I am using for clock development. The accuracy of the DS1307 depends on the accuracy of the watch crystal being used and on matching the load capacitance on the board to the spec in the crystal datasheet. In other words, that 32768 Hz crystal is not going to be ticking over at exactly 32768 Hz, and there is no way to compensate (calibrate) for that except by using a trimmer cap and a frequency counter to pull the frequency in, and then only at the temperature the crystal was at during calibration. If instead you had a watch crystal on the asynchronous timer of the ATMega328 you could implement a software calibration, but more on that later.
I am picking on the DS1307 because it appears to be so popular. There are many other RTC ICs, but for illustrative purposes that is the one I chose. To get around some of the accuracy issues outlined above you could use the DS3231. It has a TCXO and claims an accuracy of +/- 2 ppm. +/- 2 ppm is getting pretty good (we can do better); that equates to about +/- 1 second in 6 days. But they are expensive. At about $9 they cost twice what an ATMega328 does, and for some of the same reasons the DS1307 isn't that great of a deal, neither is the DS3231. Basically, the allure of the DS3231 is its +/- 2 ppm accuracy, and as a way to avoid having to write some timekeeping code, it too goes against my geeky DIY nature.
Daylight Saving Time:
Daylight saving time is a pain in the neck for a clock designer. Within the US not all states observe daylight saving time, and to my knowledge no international standard for DST exists. The text from http://aa.usno.navy.mil/faq/docs/daylight_time.php that says "Starting in 2007, daylight time begins in the United States on the second Sunday in March and ends on the first Sunday in November." illustrates the difficulty in programmatically adjusting for DST and, more notably, the fact that DST rules can and do change. Imagine having recently finished your clock design, one where it dutifully adjusts for DST for you, only to find out that the accepted standard for when DST changes has itself changed. You are then faced with rewriting your code for the new standard, or adjusting the clock not twice a year for DST but four times, assuming both the forward and backward adjustments have changed dates. Because of these issues, I have decided to forgo automatic DST updating on clocks that are self contained, that is, on clocks with no external time synchronization such as NTP, GPS, WWVB, etc. I will instead adjust the clock's time manually at each DST change.
Leap seconds:
Leap seconds are adjustments made to UTC (Coordinated Universal Time) because the cesium fountain time standards in use are more stable than the Earth's rotation. This astronomical inconvenience may ruffle a few feathers, but at the end of June 30 and December 31 each year the International Earth Rotation and Reference Systems Service may or may not issue a leap second, depending on necessity. The last one was added December 31, 2008.
Here is a highly gratuitous photo of some nerd bling, aka LED edge lit plexiglass.
With these constraints in mind, I decided it is better to have a standalone clock whose time you are able to adjust than a high accuracy clock set to run for its entire life cycle, only to find that some adjustment outside of my control changes UTC. Which brings me to acceptable accuracy.
I define acceptable accuracy as some balance between cost and external constraints. For example, I have decided not to automatically adjust for daylight saving time, which occurs twice a year, because the rules can change. Currently DST begins on the second Sunday in March and ends on the first Sunday in November. Since that leaves a maximum of about 8 months between settings of the time, I only need to be acceptably accurate for up to 8 months. I think I can live with anything up to 1 minute off; the only real way to know the clock is displaying something other than the correct time would be to compare it to NTP on a cell phone or computer. This is an arbitrarily chosen value but it gives me a starting point. One minute in 8 months is approximately 2.9 ppm, so my goal will be to calibrate my clock to better than ~3 ppm.
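For the curious, the arithmetic behind that figure, taking 8 months as roughly 240 days:

$$\frac{60\ \mathrm{s}}{240 \times 86400\ \mathrm{s}} = \frac{60}{20\,736\,000} \approx 2.9 \times 10^{-6} = 2.9\ \mathrm{ppm}$$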
When I first started this project I thought I would slap a 32768 Hz watch crystal on a uC and call it a day. Well, I did that, wrote the code to keep track of hours, minutes and seconds in a day, and set it running. It was off by -4 seconds in the first 24 hours. I checked the code, double checked the load capacitor values and found nothing wrong. I believe the stray capacitance of the board and the uC pins was pulling the frequency of the watch crystal by about -46 ppm, well beyond the +/- 20 ppm tolerance of the part. Clearly this was unacceptable. It looked like I needed to buy a variable capacitor to tune the load capacitance so that the frequency was within my tolerance of ~3 ppm.
But instead I started thinking of software solutions. Code is free for me to type and I don't have to wait for it to arrive from DigiKey. I first thought of adding 4 seconds to the display at, say, 2 AM when I should be fast asleep. This idea bothered me on many levels, but using the 32768 Hz crystal I had set up my asynchronous timer to fire an interrupt at 1 Hz. What was I to do but get more resolution...
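For reference, the 1 Hz asynchronous timer setup on the ATMega328 looks something like this. It isn't my exact test code, just the essentials, with the LED and display handling omitted:

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    volatile uint8_t seconds = 0;

    // Fires once per second: 32768 Hz crystal / 128 prescaler / 256 counts = 1 Hz
    ISR( TIMER2_OVF_vect )
    {
        seconds++;                                      // the rest of the timekeeping goes elsewhere
    }

    void timer2_async_1hz_init( void )
    {
        TIMSK2 = 0;                                     // disable Timer2 interrupts during setup
        ASSR   = ( 1 << AS2 );                          // clock Timer2 from the watch crystal on TOSC1/TOSC2
        TCNT2  = 0;
        TCCR2A = 0;                                     // normal mode
        TCCR2B = ( 1 << CS22 ) | ( 1 << CS20 );         // prescaler 128
        while( ASSR & ( ( 1 << TCN2UB ) | ( 1 << TCR2AUB ) | ( 1 << TCR2BUB ) ) )
            ;                                           // wait for the async registers to update
        TIFR2  = ( 1 << TOV2 );                         // clear any pending overflow flag
        TIMSK2 = ( 1 << TOIE2 );                        // enable the overflow interrupt
        sei();
    }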
Here is another photo of the LED edge lit plexiglass. Shown only to break up all of those words in this blog post.
Selecting a new crystal, I picked a higher frequency 16384000 Hz part with +/- 10 ppm thermal stability over a -20C to 70C range, DigiKey part number 887-1245-ND. They are currently $0.48 each in quantities of 10. I am much more interested in a crystal's frequency as a function of temperature than in its out of the box accuracy, since I will be calibrating the clock in software, which you can read about below. This type of crystal also uses a different cut of quartz than a watch crystal. A watch crystal will always lose time when its temperature is above or below 25C, whereas my crystal has a frequency vs. temperature curve shaped like a sine function with its origin at 25C; that is, above 25C its frequency is higher and below 25C its frequency is lower.
So, I've ended up with a 16384000 Hz crystal and set up my interrupt service routine (ISR) to fire every millisecond. Now I can adjust the time in software at the mS level and it will be totally transparent to the clock user, as current plans call for the seconds display to be optional. Even if I were to display milliseconds you couldn't read them fast enough to notice the adjustment.
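Part of the appeal of 16384000 Hz is that it divides down to 1 kHz exactly. Here is a sketch of the Timer1 setup that produces the millisecond tick; the register values below are the obvious choice rather than necessarily my exact init code:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    // Timer1 in CTC mode with no prescaler: 16,384,000 Hz / 16,384 = exactly 1000 Hz
    void timer1_1ms_init( void )
    {
        TCCR1A = 0;                                     // normal port operation
        TCCR1B = ( 1 << WGM12 ) | ( 1 << CS10 );        // CTC mode, TOP = OCR1A, clk/1
        OCR1A  = 16383;                                 // 16,384 system clocks per compare match
        TIMSK1 = ( 1 << OCIE1A );                       // enable the TIMER1_COMPA interrupt
        sei();                                          // global interrupt enable
    }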
To make the adjustment you just add a millisecond every x number of (1 mS) ISR cycles, where x = F_CPU / ( F_CPU * error ) and error is the fractional frequency error. In my case I had my ATMega328 running on a 16384000 Hz crystal. I set the CKOUT fuse to output the system clock on pin 14, measured the undivided system clock on my frequency counter, and got 16383480 Hz, a difference of 520 Hz or -31.7 ppm. Using the exact difference, x = 16384000 / 520 = 31508 (rounded up), so every 31508 mS, aka every 31508 ISR cycles, I add one millisecond (because it was running slow).
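The same calculation expressed as code, in case that is easier to follow. This is purely illustrative; the clock itself just uses the resulting constant:

    #include <stdint.h>

    // Given the nominal and measured system clock frequencies (Hz), return the
    // number of 1 mS ISR cycles between one-millisecond corrections.
    static uint16_t calc_ms_adj( uint32_t f_nominal, uint32_t f_measured )
    {
        uint32_t diff = ( f_nominal > f_measured ) ? ( f_nominal - f_measured )
                                                   : ( f_measured - f_nominal );

        // e.g. 16384000 / ( 16384000 - 16383480 ) = 16384000 / 520 ~= 31508
        return ( uint16_t )( ( f_nominal + diff / 2 ) / diff );    // rounded to nearest
    }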
Here is a snippet of my ISR. Don't do this. I just threw this code together for testing my clock ideas. In practice you want to handle everything other than updating the millisecond counter outside of the ISR. I will make that change when I write my final clock program.
    //This interrupt is called at 1kHz
    ISR(TIMER1_COMPA_vect)
    {
        static uint16_t milliseconds = 0;       // mS value for timekeeping 1000mS/1S
        static uint16_t clock_cal_counter = 0;  // counting up the milliseconds to MS_ADJ
        const uint16_t MS_ADJ = 35088;          // F_CPU / (F_CPU * PPM_ERROR)
        const uint16_t MS_IN_SEC = 1000;        // 1000mS/1S

        milliseconds++;
        clock_cal_counter++;

        if( milliseconds >= MS_IN_SEC )
        {
            milliseconds = 0;
            ss++;                               // increment seconds
            toggle_led();                       // toggle led

            if( ss > 59 )
            {
                mm++;                           // increment minutes
                ss = 0;                         // reset seconds
            }

            if( mm > 59 )
            {
                hh++;                           // increment hours
                mm = 0;                         // reset minutes
            }

            if( hh > 23 )
            {
                // increment day
                hh = 0;                         // reset hours
            }
        }

        // milliseconds must be less than 999 to avoid missing an adjustment.
        // eg if milliseconds were to be 999 and we increment it here to 1000
        // the next ISR call will make it 1001 and reset to zero just as if it
        // would for 1000 and the adjustment would be effectively canceled out.
        if( ( clock_cal_counter >= MS_ADJ ) && ( milliseconds < MS_IN_SEC - 1 ) )
        {
            milliseconds++;

            // it may be that clock_cal_counter is > than MS_ADJ in which case
            // I want to count the tick towards the next adjustment
            // should always be 1 or 0
            clock_cal_counter = clock_cal_counter - MS_ADJ;
        }
    }
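To give an idea of what I mean by keeping the ISR minimal, here is a rough outline of how the final version could be structured, with the rollover logic moved into the main loop. This is just a sketch, not my finished clock program:

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <util/atomic.h>
    #include <stdint.h>

    #define MS_ADJ 35088U                       // calibration interval from the snippet above

    volatile uint16_t ms_ticks = 0;             // shared 1 mS counter

    ISR( TIMER1_COMPA_vect )
    {
        static uint16_t cal_counter = 0;

        ms_ticks++;

        if( ++cal_counter >= MS_ADJ )           // running slow: add one extra millisecond
        {
            ms_ticks++;
            cal_counter -= MS_ADJ;
        }
    }

    int main( void )
    {
        uint8_t hh = 0, mm = 0, ss = 0;

        // Timer1 CTC setup for the 1 kHz interrupt, as in the earlier sketch
        TCCR1A = 0;
        TCCR1B = ( 1 << WGM12 ) | ( 1 << CS10 );
        OCR1A  = 16383;
        TIMSK1 = ( 1 << OCIE1A );
        sei();

        for( ;; )
        {
            uint8_t second_elapsed = 0;

            ATOMIC_BLOCK( ATOMIC_RESTORESTATE )  // consume 1000 mS atomically
            {
                if( ms_ticks >= 1000 )
                {
                    ms_ticks -= 1000;
                    second_elapsed = 1;
                }
            }

            if( second_elapsed )
            {
                if( ++ss > 59 ) { ss = 0; mm++; }
                if( mm > 59 )   { mm = 0; hh++; }
                if( hh > 23 )   { hh = 0; }
                // update the display, toggle an LED, etc. -- all outside the ISR
            }
        }
    }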
Next I ran my clock again. I synched its time to NTP and checked it against NTP daily. After 6 days, however, I found that my clock was about 1 second fast compared to NTP. Recall that I said my frequency counter was ~ -2 ppm off; well, this is how I found that out. Had my frequency counter been spot on, my NTP test would have shown I am only off by the rounding errors in my ppm error calculations above, or approximately -0.3 ppm.
My results thus far are acceptable and fit into my arbitrarily defined tolerance. In my research, however, I came across a program written by an AVR Freaks member that you can use to check the frequency of a clock if you don't have a frequency counter. I figured this would be good information to relay for those without frequency counters who want to try to replicate what I have done here. You can find the program here.
Using the same 16384000 Hz crystal on my AVR, I set up an ISR at 1 kHz that toggles an output pin. I connected that pin to the RxD line of an FTDI cable and ran the program. Some samples were wildly wrong, as the author suggests could happen, so I discarded those and averaged the remaining 17 hours of data to come up with an error of -28.5 ppm. Subtracting my new error from the frequency counter error above yields (-31.7) - (-28.5) = -3.2 ppm. I expected this to be approximately -2 ppm; however, there were a lot of indirect methods used here and a bit of rounding, not the least of which is the ~1 second / 6 days estimation when comparing my clock to NTP by visual inspection. Having said all that, I am more apt to believe the -28.5 ppm error from the Network Frequency Transfer program at the moment and say my frequency counter is -3.2 ppm off.
FTDI cable RxD line on uC pin 14
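For completeness, here is a compact sketch of that test signal. I am assuming the toggled pin is PB0 (pin 14 on the DIP package, per the caption above), so treat the pin choice as illustrative:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    // Toggle the test pin on every 1 kHz interrupt for the frequency transfer measurement
    ISR( TIMER1_COMPA_vect )
    {
        PINB = ( 1 << PINB0 );                      // writing a 1 to PINx toggles the pin on the ATMega328
    }

    int main( void )
    {
        DDRB  |= ( 1 << DDB0 );                     // test pin as output, wired to the FTDI RxD line
        TCCR1A = 0;
        TCCR1B = ( 1 << WGM12 ) | ( 1 << CS10 );    // CTC mode, no prescaler
        OCR1A  = 16383;                             // 16,384,000 / 16,384 = 1 kHz
        TIMSK1 = ( 1 << OCIE1A );
        sei();

        for( ;; )
            ;                                       // nothing to do; the ISR generates the signal
    }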
Going back to the calibration value calculations above, my new MS_ADJ value will be x = 16384000 / ( 16384000 * 28.5x10^-6 ) = 35088 (rounded up), so every 35088 mS, aka every 35088 ISR cycles, I add one millisecond (because it was running slow).
That is a lot of theory, calculation and measurement; the proof, really, is in the pudding. Real world application of the code and the derived calibration value is what is needed now. Currently I have the clock running in a room that gets arbitrarily cold at night and warms up during the day, just like it will see in operation. I synched it to NTP when I started it and I will check the deviation daily.
So, it has been 3 days and 4 hours since I started my clock synched to NTP and there is no visible deviation from NTP. The way I check this is with an app on my phone that gets NTP and displays it with 1 second resolution. First I check that the app and the clock are displaying the same value down to 1 second, and then with my peripheral vision I watch an LED on my clock toggle at 1 Hz while watching the app on my phone. I have found during these experiments that using this method I am able to detect a deviation as small as 50 mS quite easily. This is possible only at the change of the seconds value, i.e. the NTP app ticks over and I can see the LED turn on slightly before or after. I could not estimate, say, a half second or a quarter second with the same 50 mS resolution.
Back to the numbers. As I said, it has been 3 days and 4 hours since I started my clock run and I cannot see any deviation in clock time vs. synched NTP time. Since I cannot see deviations smaller than 50 mS, I will assume the worst case and say it is 50 mS off. ( 1 / ( run_time / error ) ) * 1x10^6 = error_in_ppm, so ( 1 / ( 273600 seconds / 0.05 seconds ) ) * 1x10^6 = 0.18 ppm. That is pretty darn good, and cheap to boot!
This is far from a clock yet. Yes, it tells time in human readable units, but I still need to implement a calendar, power outage protection, alarm functions and make it do something cool. Maybe display moon phases? Time will tell...
I really learned a lot through this project up to now. I certainly did not end up with anything close to how I imagined I would implement a clock, yet I am rather pleased with the outcome and with my deeper understanding of timekeeping. My hope is that I was able to present some of the information I learned in a way that can be useful to others too. I think "Part 2" will have a more refined, clock-like appearance both in software and in physical implementation.
Comments
Good luck on your project. I would love to hear your results!
@jordanda
I bought my VC3165 from one of the stores on eBay. I don't recall which seller at the moment, but it was one of the "buy it now" listings.
Then you will build a GPSDO in order to check that. To check the GPSDO you acquire an atomic standard, only to discover that you need at least 3 of those.
/Kasper Pedersen (NFT)
Thanks for your comment and thanks for your NFT program too! It was a good discovery during this build.
After building a clock myself I can see how difficult it is to keep "accurate" time.
You mention measuring crystal temperature and compensating for fluctuations. Would this be a TCXO? Or do you mean making changes in software to allow for the temperature drift? On the subject of TCXOs, would you buy one or build your own?
-Pete
Since you do not need an exact high-frequency output to some other piece of equipment, you can build your own. It will take you a few hours to measure and implement the required compensation (in software).
At the same time, if you can pilfer a TCXO from something, do so. The oscillator inside the AVR was not designed to have sub-ppm stability, and is sensitive to VCC and what code is executing. Then you can play the same compensation game on the TCXO to get even better, as that too will have temperature dependency.
So the answer is 'both'.
http://pcbheaven.com/circuitpages/Voltage_Controlled_AC_Light_Dimmer/ and http://pcbheaven.com/circuitpages/PIC_DCV_Controlled_AC_Dimmer/ show how to implement this with a transformer, a bridge rectifier, a couple transistors, a capacitor, and some resistors.
Thanks for your suggestion on using power line frequency as a calibration method.
I have thought about using this method before, but my hang up is the short term stability. I read that where I am, the frequency can drift enough to accumulate 10 seconds of error before a correction factor is applied. This would be OK, as you said, over the long term, but only if your sample time is synchronized with the start/stop of the utility company's frequency corrections.
The research continues... Or, I could just buy a calibrated frequency counter. ;)
I'm also in the process of making a clock using an ATmega8 and a 32kHz crystal. I've heard about TCXOs like the DS32kHz. Is there any way that I can link up the CLKOUT of a DS32kHz to the asynchronous timer of the AVR? Also, how accurate would the DS32kHz be?
Glad to hear you are making a clock. I hope you will find it a rewarding experience and learn lots of new things along the way.
To answer your question about hooking up a DS32kHz to the asynchronous input on the mega8, I will refer to page 120 of the mega8 datasheet, which says "The Oscillator is optimized for use with a 32.768kHz crystal. Applying an external clock source to TOSC1 is not recommended." So, although it may work, I would not do this; the DS32kHz is not a bare crystal. Yes, it has a crystal oscillator inside it, but its output is not the same as a crystal's.
As to the accuracy of the DS32kHz, its datasheet says it is accurate to +/- 2 ppm, which is approximately +/- 1 minute per year.
Good luck and let me know how your project turns out!
It is hard to say what is going wrong with your setup without a look at your code and schematic. Having said that, I would take a look at your RTC's datasheet and see if you need to provide a battery backup for it to retain the time setting.
-P