  1. #1
    Power User
    Join Date
    Feb 2006
    Location
    MI / MA
    Posts
    3,594

    Default What causes severe variation in consistency?

    I was looking over the GOL tests again today, and one intriguing thing I noticed (particularly with Cobra's units) is a lack of consistency in performance between test runs. Sometimes one test run will be like 1/3 the range compared to other runs. I'm curious what, on the technical side, would explain a radar detector performing this inconsistently in a fairly consistent testing condition (judging by the consistent performance of top-notch detectors)?

    IMO inconsistency in detection behavior is almost as bad as tone-deaf sensitivity in the first place -- it instills a false sense of trust in the unit.

  2. #2
    Radar Fanatic
    Join Date
    Mar 2005
    Location
    Cleveland, OH
    Posts
    1,561

    Default Re: What causes severe variation in consistency?

    Quote Originally Posted by jdong
    I was looking over the GOL tests again today, and one intriguing thing I noticed (particularly with Cobra's units) is a lack of consistency in performance between test runs. Sometimes one test run will be like 1/3 the range compared to other runs. I'm curious what, on the technical side, would explain a radar detector performing this inconsistently in a fairly consistent testing condition (judging by the consistent performance of top-notch detectors)?

    IMO inconsistency in detection behavior is almost as bad as tone-deaf sensitivity in the first place -- it instills a false sense of trust in the unit.
Any kind of testing will produce inconsistency. It diminishes as more test runs are done.

    If you flip a coin 3 times, you may well get heads all 3 times. That does not mean your coin favors heads. Do the test 300 times and you'll probably get much closer to a 50% heads 50% tails distribution. I think what we are seeing is normal testing variability.
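A quick simulation makes the same point (hypothetical numbers, just to illustrate the coin-flip analogy):

```python
import random

# Illustration of the coin-flip analogy: small samples swing wildly,
# large samples settle near the true 50/50 split.
for flips in (3, 30, 300, 3000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:5d} flips -> {heads / flips:.1%} heads")
```

With only 3 flips you can easily land on 100% heads; by 3000 flips you are almost always within a couple of percent of 50%. Two or three test runs per detector sit at the small-sample end of that scale.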

  3. #3
    Power User
    Join Date
    Jul 2005
    Location
    My Home
    Posts
    3,169

    Default Re: What causes severe variation in consistency?

    Quote Originally Posted by jdong
    I was looking over the GOL tests again today, and one intriguing thing I noticed (particularly with Cobra's units) is a lack of consistency in performance between test runs. Sometimes one test run will be like 1/3 the range compared to other runs. I'm curious what, on the technical side, would explain a radar detector performing this inconsistently in a fairly consistent testing condition (judging by the consistent performance of top-notch detectors)?

    IMO inconsistency in detection behavior is almost as bad as tone-deaf sensitivity in the first place -- it instills a false sense of trust in the unit.
I think you have to be specific about which test you're referring to, since the question is too general. If you look at the straight-line test, you'll get consistency. But if you look at the around-the-curve and forward-facing tests, you will get inconsistencies because of bounced and reflected signals. Plus you have to consider that the test vehicle is moving. A moving vehicle alone can cause a number of inconsistencies.

I would like to see GOL's future tests done in stationary mode - the test vehicle stops at the first alert, then backs up a bit farther and stops again, repeating until it reaches the farthest point where the detector consistently alerts on the weakest signal for about 10 seconds, and that spot is marked. This eliminates the filtering factor, if that theory holds true for the Belscort products.

  4. #4
    Power User
    Join Date
    Feb 2006
    Location
    MI / MA
    Posts
    3,594

    Default

For example, one around-the-curve run where the XRS9930 got 1/3 the range of the other test runs. Some of the high-end ones also seem to have completely missed it.


Testing variability is one thing, but the variance of the sampling distribution is in fact tied to the variance of the detector's own detection distribution (the exact formula that relates the two escapes me -- it's been several years since my last stats class).
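If memory serves, the standard relationship (assuming independent, identically distributed runs) is

    Var(x̄) = σ² / n

i.e. the variance of the averaged result equals the detector's own run-to-run variance divided by the number of runs. Averaging more runs narrows the spread, but that spread is still driven directly by how variable the detector itself is.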

  5. #5
    Old Timer
    Join Date
    Apr 2007
    Location
    New Jersey
    Posts
    6,771

    Default

Yeah, there are a lot of variables.

  6. #6
    Power User
    Join Date
    Feb 2006
    Location
    MI / MA
    Posts
    3,594

    Default

    What are the variables involved though? That's what I'm interested in listing. I want to see how much of that is "testing artifacts" and how much is "your detector tends to alert inconsistently to X kind of encounter"

  7. #7
    Power User
    Join Date
    Jul 2005
    Location
    My Home
    Posts
    3,169

    Default

    Quote Originally Posted by jdong
    What are the variables involved though? That's what I'm interested in listing. I want to see how much of that is "testing artifacts" and how much is "your detector tends to alert inconsistently to X kind of encounter"
A few variables:

    1. You only get reflected and bounced signals (in the around-the-curve and forward-facing tests).
    2. The radar detector is in a moving vehicle.
    3. A slight deviation from the path you drove previously can make a big difference on each run. A foot of difference can cause a miss.
    4. This always happens in real-world tests, even with the same setup.
    5. Sensitivity of the radar detector.
    6. Frequency of the radar gun - some RDs don't pick up one frequency as well as another, e.g. 35.5 vs. 34.7.

  8. #8
    Power User
    Join Date
    Feb 2006
    Location
    MI / MA
    Posts
    3,594

    Default

    I have trouble accepting that a "slight path" change can change a 2500-ft detect to a no-detect. Were the units with the ND's not run through that test run? The average seems to conveniently ignore the blank readings on the Belscorts and V1's.

  9. #9
    Power User
    Join Date
    Jul 2005
    Location
    My Home
    Posts
    3,169

    Default

    Quote Originally Posted by jdong
    I have trouble accepting that a "slight path" change can change a 2500-ft detect to a no-detect. Were the units with the ND's not run through that test run? The average seems to conveniently ignore the blank readings on the Belscorts and V1's.
Case in point: I tested this scenario in stationary mode, with the detector picking up the signal continuously for 10 seconds at its weakest strength. Moving the detector 1 foot from the original position caused it to miss the signal - it depends on how it receives the bounced or reflected signal. You also have to consider the sensitivity of the radar detector. And note that you are dealing not with just one of the variables I mentioned (say, the 1 foot) but with several of them at the same time.
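For a rough sense of scale - a hypothetical back-of-the-envelope, assuming the misses come from interference between the direct and reflected paths - here are the Ka-band wavelengths involved:

```python
# Hypothetical back-of-the-envelope: Ka-band wavelengths for the frequencies
# mentioned in this thread. If the direct and reflected signals interfere, the
# peaks and nulls are spaced on the order of a wavelength, so even a small
# change in position can move the detector through several of them.
C = 299_792_458.0  # speed of light, m/s

for freq_ghz in (34.7, 35.5):
    wavelength_cm = C / (freq_ghz * 1e9) * 100
    print(f"{freq_ghz} GHz -> wavelength ~{wavelength_cm:.2f} cm")
```

At under a centimeter per wavelength, a one-foot shift spans dozens of peak/null cycles, so it is not hard to see how it could flip a marginal detect into a miss.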

It's a fact that real-world scenarios are hard to duplicate even with the same setup, and I believe the GOL guys agree with that. That is why the results are averaged.

  10. #10
    Power User
    Join Date
    Feb 2006
    Location
    MI / MA
    Posts
    3,594

    Default

Ok, time and time again I prove to myself I cannot read italicized fonts. The 3rd test run was only done for detectors that showed inconsistent results on the first two. Sorry for accusing the top detectors of missing encounters ;-)


Now it's just the Cobra -- why did one test run get 1/3 of the distance? (I would think it's a fault in the filtering software, not the test conditions or approach.)
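To be clear about the kind of software fault I mean - a purely hypothetical sketch, not Cobra's actual firmware, with the sweep spacing, hit probabilities, and threshold all assumed: if the detector requires several consecutive sweeps above threshold before alerting, a marginal signal that flickers in and out can push the alert much closer on some runs than others.

```python
import random

# Purely hypothetical sketch -- not Cobra's actual firmware. It shows how a
# "require K consecutive hits before alerting" filter can turn a marginal,
# flickering signal into very different alert distances from run to run.

K = 4              # consecutive sweeps required before alerting (assumed)
FT_PER_SWEEP = 50  # distance the car closes between sweeps (assumed)

def alert_distance(start_ft=4000):
    """Distance at which the filter finally fires, for one simulated approach."""
    consecutive = 0
    d = start_ft
    while d > 0:
        # Far out the signal is marginal (30% chance a sweep sees it);
        # it firms up once the car is inside ~1500 ft.
        p_hit = 0.3 if d > 1500 else 0.95
        consecutive = consecutive + 1 if random.random() < p_hit else 0
        if consecutive >= K:
            return d
        d -= FT_PER_SWEEP
    return 0  # never alerted

print("Alert distances over 5 simulated runs (ft):",
      [alert_distance() for _ in range(5)])
```

Run it a few times and the alert distance jumps between the far marginal zone and the near strong zone, which would produce exactly the kind of occasional 1/3-range run.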

 

 
