Discussion:
TotalSampPerChannelGenerated
Kevin Price
2006-01-24 20:13:32 UTC
Permalink
I'm using an M-series 6259 under LV 7.1.1
 I've set up a hw-timed DO task to continuously generate a short digital pattern (less than 100 states).  After starting the DO task but before starting the sampling clock, the DAQmx property "TotalSampPerChannelGenerated" returns the value 2047.  After I start the sampling clock, it seems to increment properly, even if the sampling clock task is re-started multiple times to generate multiple sequences of hw-timed DO. 
 
I've tried generating the sample clock both as a Counter / Timer finite pulse train and as a finite-sampling AO task and observed the same result.  I also upgraded from DAQmx 7.4 to 8.0 but again, the behavior didn't change.  I tried making the DO task a finite-sampling task, but the behavior was worse -- the initial offset was a different non-zero number which did not increment as the sample clock ran.
 
Here's the app: I have a master list of all the DO patterns to be generated.  For any given run of the overall app, I may start from any index in this master list, and then traverse them incrementally either forward or backward.  Meanwhile, I'm also capturing DI bits off the trailing edge of the same sampling clock.  The app needs to verify that the DI pattern is always "correct" for the given DO pattern.  To do this, I planned to keep track of the starting index and direction and then use the "TotalSampPerChannelGenerated" to let me look up the corresponding DO pattern in my master list.
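In text form, the lookup I had in mind goes something like this (a Python sketch with made-up names, since the real code is LabVIEW -- nothing here is a DAQmx call):

```python
def do_pattern_index(samples_generated, start_index, direction, list_len):
    """Map the TotalSampPerChannelGenerated count (after removing any
    initial offset) to an index in the master DO pattern list.

    direction: +1 for forward traversal, -1 for backward.
    All names are illustrative, not actual DAQmx property names.
    """
    # Each generated sample advances one step through the master list,
    # wrapping around at either end.
    return (start_index + direction * samples_generated) % list_len

# Starting at index 3 and moving backward, after 5 samples of a
# 12-entry list we should be at index (3 - 5) mod 12 = 10.
print(do_pattern_index(5, 3, -1, 12))
```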
 
Trouble is, can I trust the "TotalSampPerChannelGenerated" property?  If I could know for sure that it will ALWAYS have an initial offset (such as 2047), fine, I can establish that value at the beginning of the program and subtract it off every time I query.  But the fact that it gives me a "goofy" result makes me trust it less.  Soooo......
 
1. Can anyone else confirm this behavior?  Does everyone get the same value -- 2047?
2. Can anyone from NI explain the behavior?  Bug?  Intentional -- if so, why?  Can I count on that offset?  Even after a 32-bit count rollover?  (Some tests may run long enough for this to occur).
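For reference, the offset bookkeeping I'd need is simple modular arithmetic, which would also survive a counter rollover (Python sketch, assuming a 32-bit counter as described above):

```python
UINT32_MOD = 1 << 32  # assuming the count rolls over at 32 bits

def samples_since_start(raw_count, initial_offset):
    """Subtract the initial offset (e.g. 2047) from the raw
    TotalSampPerChannelGenerated reading, staying correct even after
    the counter wraps past 2**32 - 1."""
    return (raw_count - initial_offset) % UINT32_MOD

# Before rollover: raw reading 5000 minus offset 2047 = 2953 samples.
print(samples_since_start(5000, 2047))
# After a rollover the raw reading is small again, but the modular
# subtraction still yields the true elapsed sample count.
```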
 
-Kevin P.
 
Kevin Price
2006-01-24 23:13:17 UTC
Permalink
Forgot to include example as attachment in previous post...
 
(Set to run on M-series device with Analog Output, configured as Dev1.)
 
-Kevin P.


DO driven with AO clock - example.vi:
http://forums.ni.com/attachments/ni/70/4313/1/DO driven with AO clock - example.vi
Kevin Price
2006-02-02 21:41:24 UTC
Permalink
Part 1 of 2 due to 5000 char limit

Have wanted to explore M-series DO more before posting back but have been tied up in non-hw parts of the app.  I'll intersperse my comments in context:

reddog: for subsystems that don't have their own timing engine (such as correlated digital output), there is no way for the device to know how many samples have been generated.  In this case, TotalSampPerChannelGenerated represents the total number of samples that have been written to the device...  query the Output.OnbrdBufSize property, you'll see that the digital output FIFO size for the 6259 is indeed 2047 samples deep.
OK, I can understand this.  I think I remember that selecting Finite Sampling mode in the example caused the value to be 1000 instead of 2047.  Maybe this is just based on the default buffer size for Finite Sampling?  (I don't remember if I wired the buffer size explicitly in the example, and don't have LV on this network PC.)  If so, it makes similar sense. 

..it's not clear if you're snapshotting the progress while the generation is running...
Yes.  The app is basically a motor driver with verification.  I need to generate a timed pattern of 6 bits to control transistor switching, and am reading back 24 bits for verification.  The expected input 24-bit pattern sequence depends on the output 6-bit pattern sequence and the logical combination of some other static DIO bits which do not change within any given run.  I'll describe a few gory details just because it may help in finding the best solution -- my particular app is not nearly as demanding as others I can easily imagine.
 
The timing needs are fundamentally driven by a requirement that there needs to be a certain minimum delay time between turning any one transistor off and another one on (magnetic field collapse, inductive kick, etc.).  Let's call it 100 usec.  The other timing requirement is that the motor speed depends on how rapidly the various transistor switchings are sequenced.  For now, let's assume a constant speed with 5000 usec between states.  (This value remains constant throughout any single run, but can vary from one run to the next).
 
There are a total of only 12 unique 6-bit patterns to produce, which keep repeating as needed.  6 of them represent the pattern during one of the 100 usec intermediate states.  The other 6 represent the "stable states" that each remain constant for 4900 usec. 
 
My implementation plan was to tell the DO task to do Continuous Generation and then control the actual # of samples generated with the sampling clock I specify.  That'll probably come from the AO subsystem set for a Finite Generation.  I would plan to use the 100 usec delay time to set the update rate at 10 kHz, thus I would only support overall switching times that are an integer multiple of 100 usec (like the 5000 usec mentioned previously).  This limitation is acceptable to others on the project.
 
So altogether I would need a 300 byte circular buffer that I would write once and then let it be regenerated repeatedly for the correct total # of samples.  The total # of samples will always represent an integer # of switching intervals, which in this case means an integer multiple of 50.  Typical values may range anywhere from 50 - 15000 total samples, using only multiples of 50.
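Spelling out the arithmetic above (a Python sketch; the names are mine, not anything in the app):

```python
DELAY_US = 100     # minimum transistor off-to-on delay
STATE_US = 5000    # time between stable switching states for this run

# Setting the update rate from the 100 usec delay gives 10 kHz, and each
# switching interval then spans an integer number of samples.
RATE_HZ = 1_000_000 // DELAY_US              # 10 kHz
SAMPLES_PER_INTERVAL = STATE_US // DELAY_US  # 50 samples per interval

def valid_total_samples(n):
    """Total samples must be an integer number of 50-sample switching
    intervals, within the 50 - 15000 range used here."""
    return 50 <= n <= 15000 and n % SAMPLES_PER_INTERVAL == 0
```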
 
-- to be continued --
 

-Kevin P.
Kevin Price
2006-02-06 19:40:50 UTC
Permalink
reddog,
 
Thanks again for the detailed help & descriptions.  I think I know exactly which way to go now.
 

If I understand correctly, your buffer will hold one sample of the 100 usec pattern and 49 samples of the 4900 usec pattern.  What's still unclear is when and how you plan to change the buffer contents from one set of patterns to the next set of patterns in the sequence of 12.  Do you plan to generate the 50 - 15000 samples with the same DO buffer while using the finite AO task as the clock source, rewrite the DO buffer to the next set of data after the AO task completes, restart the AO task, and continue to repeat this process, or do you plan to update the DO buffer throughout the 50 - 15000 sample generation?
The app will not need to update the DO buffer throughout the 50-15000 sample generation.  The sequence of 12 patterns is known ahead of time and it would go: 49 samples of pattern 1, 1 of pattern 2, 49 of pattern 3, 1 of pattern 4, 49,1, 49,1, 49,1, 49,1.  The entire sequence can be represented as a total of 300 samples which will keep regenerating in hw as needed.  There'll be special handling for cases with fewer than 300 total samples.
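Spelled out, the buffer construction would look roughly like this (Python sketch with placeholder patterns; the real implementation is LabVIEW):

```python
def build_do_buffer(patterns):
    """Expand the 12 six-bit patterns into the 300-sample circular
    buffer: even-position patterns are stable states held for 49
    samples (4900 usec at 10 kHz), odd-position ones are 1-sample
    (100 usec) intermediate states."""
    assert len(patterns) == 12
    buf = []
    for i, p in enumerate(patterns):
        run = 49 if i % 2 == 0 else 1  # alternate 49, 1, 49, 1, ...
        buf.extend([p] * run)
    return buf

# Placeholder patterns 0-11 stand in for the real 6-bit values.
buf = build_do_buffer(list(range(12)))
print(len(buf))  # 6 * (49 + 1) = 300
```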
 
See?  It's not nearly as tough as it might have been.  I don't need to make decisions on the fly to choose a sequence of patterns to generate, nor do I have a large set of unique patterns to contend with.  Probably most importantly, by generating a pre-known sequence I don't have to concern myself with latency.
 
Thanks for the suggestion of using the AO task's TSG property -- I'd like to think I'd have thought of that myself sooner or later, but who can say?  It's especially important to be aware that TSG *might* give me any value from 0-2047 if I query it before the board has finished copying data from system RAM to its on-board FIFO.
 
-Kevin P.
Kevin Price
2006-02-02 21:41:24 UTC
Permalink
Part 2 of 2 due to 5000 char limit
 
Rewinding a bit, remember that there are only 12 unique 6-bit patterns to be generated.  I hoped to use change detection for my 24-bit pattern input to reduce the data processing load.  Rather than acquire off the trailing edge of the same clock at 10 kHz, I would instead capture patterns at the bit change rate which averages 400 Hz.  (2 transitions per 5000 usec).  My hope was (is) to query the DO task for TotalSamplesGenerated (TSG) "simultaneously" with querying the DI task for TotalSamplesAcquired (TSA, or whatever the actual name of the property is).  From TSG, I could determine what my output pattern must presently be.  That in turn would tell me what I should expect TSA and my input pattern to be.  I would continuously monitor that TSA and the actual input pattern do indeed match expectation.  I will probably need a small fudge factor to allow for software latency, but that's for a little further down the road...
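The TSG-to-expected-pattern lookup would be roughly as follows (Python sketch, illustrative names only -- not DAQmx calls):

```python
def expected_do_pattern(tsg, initial_offset, do_buffer):
    """Given a raw TotalSamplesGenerated reading and the initial FIFO
    offset (2047 here), return the DO pattern the board should
    currently be generating from the regenerating circular buffer."""
    n = tsg - initial_offset       # samples actually clocked out
    return do_buffer[n % len(do_buffer)]

# With a toy 3-sample buffer and offset 2047: reading 2047 means 0
# samples clocked out, so we expect the first buffer entry.
print(expected_do_pattern(2047, 2047, [7, 8, 9]))
```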
 
Maybe, in retrospect, I should consider simply sampling the DI at 10 kHz.  Then in principle I could verify that it matches expectation without any reference to the TSG property of the DO task.  The entire expected sequence of 24 DI bits can be known at the beginning of each run.   Hmmm....
 

..can you use the Current Read Position from the DI task as your index into the master list instead of the TotalSampPerChannelGenerated property? 
If I go ahead and clock in the DI instead of using Change Detection, then this suggestion should work well.  Also, it wouldn't be a very big hardship to simply store the initial value of TSG (== 2047) and then subtract it off from all subsequent queries.  I generally make self-contained "Action Engine" modules for all the hw tasks, and could easily put it in there.  I just wasn't comfortable trusting this approach before understanding where the 2047 came from. 

 
Instead of regenerating data, can you disallow regeneration of data and manually track how many points have been written to calculate your index? 
I don't think this will be my preferred approach due to all the DI processing I'm already committing the CPU to.  Still, for curiosity, can you describe this a bit more?  Do you mean keep track of # points written via DAQmx Write? Or do you mean that the TSG property would start from 0 (or perhaps some very small number) instead of 2047?  Just looking to learn something...
 

is your master list really just a giant buffer that you begin generating data from within at a random position and then automatically increment from there
As described earlier (though not briefly and perhaps not clearly), yes.  Except the buffer will actually be pretty small and I'll allow regeneration.  So my problem could certainly be a lot worse!
 
Thanks again for the help!
 
-Kevin P.
Kevin Price
2006-06-13 15:40:11 UTC
Permalink
Once again, thanks for all the previous detailed help.  I've got everything up & working properly so I figured I'd update & close the thread out.
I had to abandon the "change detection" approach for reasons I failed to anticipate -- signal propagation time differences.  The 24 input bits I wanted to read were NOT synchronized by our external hardware.  The signal propagation times were just different enough that I would sometimes get 2 or even 3 different change-detection events from 1 output pattern change.  The 2 or 3 changes were all in response to the same 1 output pattern change, but the transitions weren't quite in sync.
So I decided to simply use fixed-rate hw clocking instead.  I set up my correlated DIO clock at a 90% duty cycle, generating DO patterns on the leading edge and measuring DI patterns on the trailing edge.  I found that I was able to perform the necessary pattern matching at 10 kHz without drastically bogging the system down.  I've stuck with constant-rate sampling ever since, with the added benefit that it is much simpler to correlate the output and input patterns to one another.
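For the record, the edge timing works out like this (a Python sketch of the arithmetic only, assuming the 10 kHz rate from earlier in the thread):

```python
RATE_HZ = 10_000
PERIOD_US = 1_000_000 / RATE_HZ  # 100 usec per sample clock cycle
DUTY = 0.9

# DO updates on the leading (rising) edge; DI is sampled on the
# trailing (falling) edge, which at 90% duty arrives 90 usec later --
# enough settling time for the unsynchronized input bits.
settle_us = PERIOD_US * DUTY
print(settle_us)  # 90.0
```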
-Kevin P.

GDE [DE]
2006-01-25 23:13:31 UTC
Permalink
Kevin,

I was able to verify on several computers that in your example, TotalSampPerChannelGenerated did initialize to the value 2047 and count properly from there after the task begins. Why this is, I do not know, but it does look consistent. I will dig into this issue for you and let you know what I can find.

-GDE