Any missing objects?


Message boards : Science : Any missing objects?

Artonibus Rex
Joined: Aug 13 10
Posts: 31
Credit: 2,888,776
RAC: 0
Message 111026 - Posted 10 Mar 2011 0:01:50 UTC

    I guess with the Arecibo search it would be interesting to know whether certain known objects failed to be re-detected within the search space. Was the search sensitivity sufficient to allow all known objects to be found?

    By analogy, on the GW side, are there any anticipated test signals or patterns that it was hoped would be found? Are there any astronomical events which would be expected to produce a measurable anomaly? Can you create an artificial event that validates the detection scheme? Further, has any kind of dither signal been added to the data on purpose, to test whether the discrete signal processing weeds out certain classes of signal?

    Mike Hewson
    Forum moderator
    Joined: Dec 1 05
    Posts: 3575
    Credit: 28,412,923
    RAC: 7,769
    Message 111046 - Posted 10 Mar 2011 22:45:56 UTC - in response to Message 111026.

      Last modified: 10 Mar 2011 23:40:55 UTC

      By analogy, on the GW side, are there any anticipated test signals or patterns that it was hoped would be found? Are there any astronomical events which would be expected to produce a measurable anomaly? Can you create an artificial event that validates the detection scheme? Further, has any kind of dither signal been added to the data on purpose, to test whether the discrete signal processing weeds out certain classes of signal?

      For continuous wave objects, the pulsar in the Crab Nebula is probably the expected 'loudest' signal to gain detection some time. Theory suggests it is tantalisingly close to the current lower bound of detection - when S5 and S6 were taking data, anyway. Indeed one of the operational parameters used when the IFOs are up and running is an estimate called 'Crab Time', roughly meaning how long the IFO would have to take continuous data in order for the Crab pulsar to be detected ( within some confidence/probability measure ). In that sense a lower Crab Time is good and a higher one worse - it's a sensitivity measure.

      The length of the data segments is relevant because the signal processing techniques allow any signal to 'rise out of the mist' of noise. One has to do this as the noise level is way greater than the signal level. Unrelated ( non-signal ) disturbances of the interferometers have no preference with regard to the astronomical source of interest ( at least that's a good assumption ). Like single waves at the seashore, they don't tell you whether the tide is going in or out. You have to wait a while and see where the average wet mark/line on the sand is going. Over any given time period the tidal movement is much smaller than the wave excursions, so longer observations give better comments about tidal trends.
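
A toy sketch of that 'rise out of the mist' effect ( purely illustrative Python/NumPy with made-up numbers, nothing like the actual Einstein@Home analysis ): a steady sinusoid far weaker than the noise is invisible in a short stretch of data, but stands out clearly in the power spectrum of a long one.

# Toy illustration: a weak continuous-wave signal buried in much louder
# Gaussian noise rises out of the noise in the power spectrum once the
# observation is long enough, because the signal power concentrates in one
# frequency bin and grows with observation time while the per-bin noise
# level stays roughly flat.
import numpy as np

fs = 100.0       # sample rate [Hz] (arbitrary toy value)
f_sig = 12.345   # signal frequency [Hz] (made up)
amp = 0.01       # signal amplitude, far below the noise level
sigma = 1.0      # noise standard deviation

rng = np.random.default_rng(0)

for t_obs in (10.0, 1_000.0, 100_000.0):   # observation length [s]
    n = int(t_obs * fs)
    t = np.arange(n) / fs
    data = amp * np.sin(2.0 * np.pi * f_sig * t) + rng.normal(0.0, sigma, n)

    # Crude periodogram: power per frequency bin.
    power = np.abs(np.fft.rfft(data)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Compare the bin containing the signal against a typical noise bin.
    sig_bin = int(np.argmin(np.abs(freqs - f_sig)))
    ratio = power[sig_bin] / np.median(power)
    print(f"T_obs = {t_obs:>9.0f} s : signal bin is {ratio:7.1f} x the median bin")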

      Because G ~ 10^-11 is so small*, we humans can't shove around enough mass, or change its dynamics quickly enough, to get anywhere near a measurable signal. The interferometers routinely have hardware and software 'injections' - in the first case by literally bumping the instrument via some relevant transducers, and in the second by adding data points to the record already obtained. You can see how both would test the validity of our understanding of how the interferometers work, and of our methods of signal analysis. So if either kind of deliberate injection didn't come out in the wash, so to speak, then we'd be worried.
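
A minimal sketch of the software-injection idea ( again purely illustrative Python/NumPy; the waveform, numbers and method are invented and bear no relation to the real injection infrastructure ): a known fake signal is added to simulated noise, and the analysis is then expected to find it where it was put.

# Toy "software injection": add a known fake signal to simulated noisy
# detector output, then check that a simple matched filter recovers it at
# the injected location. If a deliberate injection like this failed to show
# up in the analysis output, we would worry about the pipeline or our
# understanding of it.
import numpy as np

rng = np.random.default_rng(1)
fs = 1024.0                       # sample rate [Hz] (made up)
n = int(60 * fs)                  # one minute of fake data
noise = rng.normal(0.0, 1.0, n)

# Template: a short sine-Gaussian burst (purely illustrative waveform).
t = np.arange(int(0.25 * fs)) / fs
template = np.sin(2.0 * np.pi * 200.0 * t) * np.exp(-((t - 0.125) / 0.05) ** 2)
template /= np.linalg.norm(template)

# Software injection: add the template at a known place with a modest amplitude.
inject_at = int(20 * fs)
data = noise.copy()
data[inject_at:inject_at + template.size] += 8.0 * template

# Matched filter: slide the template over the data and look for the peak.
stat = np.correlate(data, template, mode="valid")
peak = int(np.argmax(np.abs(stat)))
print(f"injected at sample {inject_at}, recovered near sample {peak}, "
      f"peak statistic {stat[peak]:.1f} (noise-only peaks here are roughly 4-5)")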

      Cheers, Mike.

      * .... and c is so large. Or, put another way: since we are using light to measure gravity, the relative strength of the coupling constants - some 40 orders of magnitude - rules our endeavours. This is why spacetime appears to be so 'stiff' and thus hard to either budge or measure the wiggles of.
      ____________
      "I have made this letter longer than usual, because I lack the time to make it short." - Blaise Pascal

      Bernd Machenschalk
      Forum moderator
      Project administrator
      Project developer
      Joined: Oct 15 04
      Posts: 3292
      Credit: 91,984,093
      RAC: 21,018
      Message 111049 - Posted 10 Mar 2011 23:40:37 UTC - in response to Message 111026.

        I guess with the Arecibo search it would be interesting to know whether certain known objects failed to be re-detected within the search space.


        Which objects we missed and why is currently being investigated, but results will take some time.

        BM

        astro-marwil
        Joined: May 28 05
        Posts: 279
        Credit: 25,263,754
        RAC: 36,190
        Message 111058 - Posted 11 Mar 2011 22:48:11 UTC - in response to Message 111046.

          Hello Mike!
          What is the minimal signal-to-noise ratio a GW signal must reach before one can speak of documented evidence of a confirmed detection? And how far away are we currently from this limit? Have there been some detections that could have come from GWs but didn't reach this strict standard of confirmation? A limit of 3 sigma is widely used in science, but that still leaves an uncertainty of 0.3%.

          Kind regards
          Martin




          Mike Hewson
          Forum moderator
          Joined: Dec 1 05
          Posts: 3575
          Credit: 28,412,923
          RAC: 7,769
          Message 111063 - Posted 12 Mar 2011 8:44:35 UTC - in response to Message 111058.

            Hello Mike!
            What is the minimal signal-to-noise ratio a GW signal must reach before one can speak of documented evidence of a confirmed detection? And how far away are we currently from this limit? Have there been some detections that could have come from GWs but didn't reach this strict standard of confirmation? A limit of 3 sigma is widely used in science, but that still leaves an uncertainty of 0.3%.

            Hello Martin! An SNR of 20:1 or better, which I think is way above 3 sigma. There are strong reasons for such stringency, coming from the history of the subject in the era of resonant bars ( read: bunfights in the time of Joe Weber ). I think my estimate, if S6 had gone better - long quiet periods without hardware issues - was even odds on the Crab, but that stood upon a host of assumptions ( the main one being my humble understanding ). I'm not in the right loop to speak of might-have-beens. In fact one of the lessons from the '70s, as I understand matters, was to pre-agree on what the level of signal confidence should be. But there are other aspects/vetoes, probably the main one being ( near ) coincidence of signals at separated detectors - hence the multi-continental sites.
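
On the coincidence point, a toy sketch of the basic idea ( the window values and trigger times below are invented, and real pipelines are far more sophisticated ): candidate events from two separated detectors only count if their arrival times agree to within roughly the light travel time between the sites, plus some timing slop.

# Toy coincidence check between two detectors (illustrative only).
MAX_DELAY_S = 0.010   # ~10 ms, roughly Earth-scale light travel time
SLOP_S = 0.005        # allowance for timing/waveform uncertainty (assumed)

def coincident(times_a, times_b, window=MAX_DELAY_S + SLOP_S):
    """Return pairs of trigger times (one from each detector) closer than `window`."""
    pairs = []
    for ta in times_a:
        for tb in times_b:
            if abs(ta - tb) <= window:
                pairs.append((ta, tb))
    return pairs

# Hypothetical trigger times (seconds into some data segment) at two sites:
site_a = [12.001, 340.250, 971.500]
site_b = [12.008, 512.700, 971.512]
print(coincident(site_a, site_b))   # -> [(12.001, 12.008), (971.5, 971.512)]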

            The trouble with using phrases like 3 sigma and 0.3% etc. in this case is that we've never heard a gravitational wave, so the assumptions underlying the probability distributions may not hold. Other areas of science have the luxury of some established understanding: say, if I am counting rabbits, then I already have a rabbit prototype to compare against. One very possible GW result is that we hear nothing at all despite good equipment and prolonged data sets. This would not be a failure of the program at all, as, like the Michelson-Morley experiment, it would trigger some radical re-thinking.
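
For a rough sense of scale on those thresholds ( under the idealised, and as just noted possibly invalid, assumption that the detection statistic in pure noise is Gaussian ), the tail probabilities fall off extraordinarily fast:

# One-sided probability that pure Gaussian noise exceeds a given threshold.
# The ~0.3% figure quoted above is the two-sided 3-sigma tail; one-sided it
# is about 0.13%. The thresholds and the Gaussian assumption are illustrative only.
import math

def one_sided_tail(n_sigma: float) -> float:
    """Probability that a standard Gaussian variable exceeds n_sigma."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

for n_sigma in (3.0, 5.0, 20.0):
    print(f"{n_sigma:4.1f} sigma : tail probability ~ {one_sided_tail(n_sigma):.2e}")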

            Cheers, Mike.
            ____________
            "I have made this letter longer than usual, because I lack the time to make it short." - Blaise Pascal
