Detector bugs



Message boards : Science : Detector bugs

phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1030 - Posted: 28 Apr 2012 | 21:55:22 UTC

First of all, I appreciate your continuous effort to build an open sensor network. Please continue your work.

However, there are serious limitations to your sensor that stem from your design decisions. I have done only a few hobby AVR projects and browsed through your source code, and I found a severe bug:

Every time an event is detected, an interrupt is triggered. In the interrupt routine you call the buzzer for a beep; after a 5 millisecond delay the function returns to the interrupt routine, where the status flag is cleared. So you accidentally introduce 5 milliseconds of dead time (much larger than the ~200 microseconds of the tube itself) during which no events are detected. This limits the device at high radiation levels: above 20 cps it is really noticeable, and at e.g. 100 cps the real rate is twice the displayed one.
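As a rough sanity check, using the standard non-paralyzable dead-time model (my own illustration, not something taken from your firmware):

\[
n_{\mathrm{true}} = \frac{n_{\mathrm{meas}}}{1 - n_{\mathrm{meas}}\,\tau}
= \frac{100\ \mathrm{cps}}{1 - 100\ \mathrm{cps} \cdot 5\ \mathrm{ms}}
= \frac{100}{1 - 0.5} = 200\ \mathrm{cps},
\]

so a displayed 100 cps corresponds to a true rate of roughly 200 cps.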

Second, since other people have complained about non-working USB ports: your firmware requests only 50 mA from the USB port. Most mainboards do not care about this and deliver power as demanded, but some have extra protection circuits (like my nForce board) that limit the current to the requested value. I doubt the detector works on such boards with the buzzer and backlight on and heavy counting.

Profile Ascholten
Send message
Joined: 17 Sep 11
Posts: 112
Credit: 525,421
RAC: 0

Message 1032 - Posted: 28 Apr 2012 | 23:42:55 UTC - in response to Message 1030.

Thank you for pointing out the issues with the detector. It is through member feedback that the project grows and the devices are debugged and become more useful.

FWIW, at 20 cps or 100 cps, what dose rate is being received? Is this a dose that is realistically going to be seen? If so, how long before the person there is cooked?

The maximum dose 'allowed' is 20 mSv/yr - 20 millisieverts a year.

I am assuming one 'click' = .01 uSv/h

20 CPS - assuming you mean 'clicks per second'; if I am wrong, please correct me.
That would be 12 uSv/min, or 720 uSv/hr... roughly 1 mSv per hour and a half.

1.25 days for your annual dosage.
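Writing my assumption out (one click taken as 0.01 uSv of dose - my own assumption, not a detector spec):

\[
20\ \tfrac{\mathrm{clicks}}{\mathrm{s}} \times 0.01\ \mu\mathrm{Sv}
= 0.2\ \tfrac{\mu\mathrm{Sv}}{\mathrm{s}}
= 12\ \tfrac{\mu\mathrm{Sv}}{\mathrm{min}}
= 720\ \tfrac{\mu\mathrm{Sv}}{\mathrm{h}},
\qquad
\frac{20\ \mathrm{mSv}}{0.72\ \mathrm{mSv/h}} \approx 28\ \mathrm{h} \approx 1.2\ \mathrm{days}.
\]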

If CPS means curies per second then, again, a big dose is coming at you. The detector, I believe, was meant to be more of a background detector; if you are really in a high-rad area it becomes much more critical to know the dose for the sake of your health, so you might want to invest in a more accurate device.

If your detector sounds like the buzzer on your dryer, either way, accuracy notwithstanding, you are in trouble.

Aaron
____________


phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1034 - Posted: 29 Apr 2012 | 0:36:39 UTC

I respect your input but encourage you to study the source code before replying.
With the implemented firmware conversion, the detector displays about 7 microsieverts per hour at 20 counts per second. This value is easily achievable with a radioactive sample.

However, measuring something with a Geiger counter is all about statistics. Radioactive decay is THE classic example of a Poisson process, and there is no measurement trick that cheats physics. The relative standard error of a measurement scales as 1/sqrt(N), where N is the number of detected counts. So to get an accurate reading with a deviation of, let's say, 5% you need at least 400 counts. That is why all the fancy graphics in your signature are pretty useless for background monitoring.
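To put numbers on that:

\[
\sigma_{\mathrm{rel}} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}},
\qquad
N = 400 \;\Rightarrow\; \sigma_{\mathrm{rel}} = \tfrac{1}{20} = 5\%,
\qquad
N = 100 \;\Rightarrow\; 10\%.
\]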

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1035 - Posted: 29 Apr 2012 | 1:46:09 UTC - in response to Message 1034.
Last modified: 29 Apr 2012 | 2:26:41 UTC

That is why all the fancy graphics in your signature are pretty useless for background monitoring.


We have heard that before. Some of us didn't believe it then, some of us won't believe it now. I believe.

@ the R@H developers:

Can the problem be fixed with a firmware update? If so, is there a bootloader in the ATtiny, or are we going to have to purchase a programmer or send our ATtinys to someone who has one?
____________

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1036 - Posted: 29 Apr 2012 | 11:16:40 UTC - in response to Message 1030.


Every time an event is detected, an interrupt is triggered. In the interrupt routine you call the buzzer for a beep; after a 5 millisecond delay the function returns to the interrupt routine, where the status flag is cleared. So you accidentally introduce 5 milliseconds of dead time (much larger than the ~200 microseconds of the tube itself) during which no events are detected.


Are you sure about this?
The v2 prototype (an early version of the first batch) was tested up to 50 kHz - that's 10 times more pulses than the tube can produce (in theory).
Even far above 50 kHz the detector itself worked; however, as the test frequency approached 30-40 kHz the USB transfer became problematic.

Szopler
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 16 Apr 11
Posts: 139
Credit: 400,030
RAC: 0

Message 1038 - Posted: 29 Apr 2012 | 11:29:02 UTC
Last modified: 29 Apr 2012 | 11:29:26 UTC

We do not use an interrupt, but a hardware counter. The buzzer is turned on only when the value of the counter changes. As TJM writes, the detector was tested with a square-wave generator up to a few kHz and it works well.

phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1039 - Posted: 29 Apr 2012 | 11:30:33 UTC - in response to Message 1036.

I can't test it, as I do not own a detector yet (the sale was over before I found this project). But from your posted firmware:

The interrupt routine


ISR(INT1_vect)
{
    counter++;                /* count the pulse */
    if (beep_on) beep(5);     /* blocking 5 ms beep, executed inside the ISR */
}


your beep function

inline void beep(int t)
{
    sbi(PORTD, BUZZER);       /* buzzer on */
    _delay_ms(t);             /* busy-wait for t milliseconds */
    cbi(PORTD, BUZZER);       /* buzzer off */
}


You see it?

I think one of the developers was aware of it, because you call the beep_if_change() function every now and then. That is useless, as you beep in the interrupt routine every time.
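If you want the beep without blocking, one option (just a sketch, not your actual firmware - the pin and variable names are made up) is to keep the ISR minimal and drive the buzzer from the main loop:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/atomic.h>

#define BUZZER PD5                 /* hypothetical pin - the real one is defined in your headers */

volatile uint16_t counter;         /* pulse counter, incremented in the ISR */
volatile uint8_t  beep_ticks;      /* decremented by a periodic timer ISR (not shown) */

ISR(INT1_vect)
{
    counter++;                     /* only count - no blocking work, so no extra dead time */
}

int main(void)
{
    uint16_t last = 0;
    sei();
    for (;;) {
        uint16_t now;
        ATOMIC_BLOCK(ATOMIC_RESTORESTATE) { now = counter; }  /* safe 16-bit read on AVR */
        if (now != last) {
            last = now;
            PORTD |= _BV(BUZZER);  /* start the beep... */
            beep_ticks = 5;        /* ...and let a timer tick switch it off ~5 ms later */
        }
        if (beep_ticks == 0)
            PORTD &= ~_BV(BUZZER); /* beep finished */
    }
}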

What was the design consideration to limit the USB current to 50mA?

Szopler
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 16 Apr 11
Posts: 139
Credit: 400,030
RAC: 0

Message 1041 - Posted: 29 Apr 2012 | 12:07:27 UTC - in response to Message 1039.
Last modified: 29 Apr 2012 | 12:30:22 UTC

I've just checked the source code I have on my computer and it contains:

ISR(INT1_vect)
{
    counter++;
    // if (beep_on) beep(5);     <- the beep call is already commented out in this copy
}


So it should be OK, because I made and programmed 100% of the first batch of detectors myself (if we can, we will verify the hex read back from an ATtiny against the one on my HDD).

About the USB...
We don't know why USB connection hangs. But the 50mA limitation shouldn't be a problem...

BUT I'm not sure!
So here is a question:
Are there any problems with the USB connection (it hangs after a few minutes or hours of normal operation) when the buzzer (and even the LCD backlight) is turned off?

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1042 - Posted: 29 Apr 2012 | 12:32:47 UTC
Last modified: 29 Apr 2012 | 13:07:41 UTC

The 5ms dead time would limit the max working freq to 200Hz in ideal conditions.

200Hz = 12000 pulses/min.

Divide that by approx. 171 to get the reading in uSv/h (the equation is a bit more complex, but for a test this is enough). That's ~70, so in theory the max reading should be about 70 µSv/h. Correct me if I'm wrong.
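The arithmetic, written out:

\[
\frac{1}{5\ \mathrm{ms}} = 200\ \mathrm{Hz},
\qquad
200 \times 60 = 12000\ \tfrac{\mathrm{pulses}}{\mathrm{min}},
\qquad
\frac{12000}{171.2} \approx 70\ \tfrac{\mu\mathrm{Sv}}{\mathrm{h}}.
\]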

Now let's see. I took my sensor (it's a 1.0 proto board modded to 2.5 specs with 2.51 firmware) and connected it to a TTL generator. Here is the output - sorry for the bad quality, I took a quick photo with my phone:



The reading is close to 60000 uSv/h and, as can be seen in the photo, the buzzer is on (although on this board it is replaced with a blue LED).

The sensor is actually capable of reading over 10 times more; however, I've just noticed the display code is bugged and the first digit is not shown.

And here is the stock v2.01 (with external power supply), board from the first batch:



It works up to tens of kilohertz; however, it displays "out of range" above a certain value (not sure what the max is). This is a bug though - it should display "If you can see this, run like hell".

phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1043 - Posted: 29 Apr 2012 | 13:08:02 UTC - in response to Message 1041.

Thank you for your quick experiment. Do you know the frequency of your waveform generator? So you are using firmware v2.51 on a 1.0 platform, interesting. What are the exact differences between sensor versions 1, 2, 2.5 and 3, and how many subversions are there?

The source code archive is called "RadAc_V2.5.zip" so I assume this is the latest firmware, but who knows? The beep function is used in the "v2 20MHz" folder. There are no v1 or v2.5 folders.

For a dead-time correction you normally apply a correction factor of g = 1 / (1 − count_rate · t_dead) to the measured rate (be sure to use consistent units).
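As a small illustration (not your firmware, just the formula above in C):

#include <stdio.h>

/* Dead-time correction: true rate = measured rate / (1 - measured rate * dead time). */
static double dead_time_correct(double measured_cps, double dead_time_s)
{
    return measured_cps / (1.0 - measured_cps * dead_time_s);
}

int main(void)
{
    printf("%.1f cps\n", dead_time_correct(100.0, 0.005));   /* 5 ms dead time -> 200.0 cps */
    printf("%.1f cps\n", dead_time_correct(100.0, 0.0002));  /* 200 us tube    -> ~102 cps  */
    return 0;
}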

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1044 - Posted: 29 Apr 2012 | 13:46:58 UTC - in response to Message 1043.

Thank you for your quick experiment. Do you know the frequency of your waveform generator?


This can be roughly estimated from the displayed values; I set random frequencies just for the tests. A few months ago I ran another test where the sensor's pulse input and a frequency meter (in pulse-counter mode) were both connected to the generator, and I compared the readings after long periods of time. The sensor didn't miss a single pulse at frequencies higher than the theoretical maximum of what the tube can produce. I think it started to miss single pulses every now and then around 30-35 kHz.


So you are using firmware v2.51 on a 1.0 platform, interesting.

Not really - the board is physically a 1.0, but it's patched to v2.5 specs: lots of cut traces replaced with wires.
Any v2 firmware won't work with v1 due to the huge design changes between those boards.


What are the exact differences between sensor versions 1, 2, 2.5 and 3, and how many subversions are there?


v1 was the first batch; only a couple were made and it's no longer in use (except maybe 1 detector). It had a couple of major design flaws in both hardware and firmware (such as the bug you mentioned above, which was found the hard way - at Chernobyl, as far as I remember). It was used for initial tests and lots of things were changed as a result, including the operating frequency, the CPU (2313 -> 4313) and the USB wiring.

v2 was the first batch actually shipped; the hardware of v2 and v2.5 is mostly the same, except that the 2.5 had some (mostly minor) glitches fixed and a couple of new features introduced.

v3 is a completely different detector; it's at the prototype stage.


The source code archive is called "RadAc_V2.5.zip" so I assume this is the latest firmware, but who knows? The beep function is used in the "v2 20MHz" folder. There are no v1 or v2.5 folders.


I'm not sure what's in the package, perhaps someone put the older sources there by mistake. It's also possible that beep_on was actually undefined and therefore the old function was disabled.

phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1045 - Posted: 29 Apr 2012 | 14:23:02 UTC - in response to Message 1044.


v2 was the first batch actually shipped; the hardware of v2 and v2.5 is mostly the same, except that the 2.5 had some (mostly minor) glitches fixed and a couple of new features introduced.


So v2.01 is the detector from the first batch. But what is the difference between v2.01 and the v2.5 branch exactly? What features were introduced between v2.51, v2.52 and v2.53?

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1047 - Posted: 29 Apr 2012 | 14:48:50 UTC - in response to Message 1045.

I don't have a full list of changes here.
As I recall, between v2 and v2.5 the USB wiring was changed (D+ and D- were swapped), and some of the CPU ports were swapped for better performance.
There were also changes in the high voltage supply for more reliable operation.
On 2.5 the H/V supply cannot be switched off via software anymore.
The USB code was changed a bit to reinitialize the port if the CPU restarts (earlier the pullup resistor was wired directly to +5V; however I'm not sure if this change was implemented on the 2.5 board).

There are no other hardware versions. 2.51 and 2.52 are different firmware versions for the v2.5 board. 2.52 has 3 display modes and the option to auto-enable the buzzer when the radiation exceeds 0.3 µSv/h.


phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1049 - Posted: 29 Apr 2012 | 15:57:18 UTC - in response to Message 1041.


About the USB...
We don't know why USB connection hangs. But the 50mA limitation shouldn't be a problem...


I think your device hangs because of the watchdog timer. You never disable it before initialization. The watchdog reset flag survives the reset, but the timeout is reset to 16 ms, so after a warm (watchdog) reset you are probably stuck in an endless reset loop.
http://www.nongnu.org/avr-libc/user-manual/group__avr__watchdog.html
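The avr-libc manual linked above suggests disabling the watchdog as early as possible after any reset - roughly like this (a sketch taken from that manual, not from your firmware):

#include <avr/io.h>
#include <avr/wdt.h>

/* Saved copy of the reset-cause register; ".noinit" so it survives BSS clearing. */
uint8_t mcusr_mirror __attribute__ ((section (".noinit")));

/* Runs from .init3, i.e. before main() and before the leftover 16 ms watchdog
   timeout from a watchdog reset can expire again. */
void get_mcusr(void) __attribute__((naked)) __attribute__((section(".init3")));
void get_mcusr(void)
{
    mcusr_mirror = MCUSR;   /* remember why we reset (power-on, brown-out, watchdog...) */
    MCUSR = 0;              /* clear the watchdog reset flag */
    wdt_disable();          /* stop the watchdog until the firmware deliberately re-enables it */
}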

The backlight and the other loads are switched on before USB initialization, so you draw the current before you have requested it from the host. There are no current-limiting resistors in your MOSFET driver; the current is limited only by the characteristic curve of the transistor. Maybe 50 mA is drawn by the backlight alone.

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1050 - Posted: 29 Apr 2012 | 16:05:19 UTC - in response to Message 1049.
Last modified: 29 Apr 2012 | 16:12:44 UTC

Actually, the LCD itself with backlight on draws less than 10mA.
The sensor with backlight on and H/V supply off requires less than 30mA.
Not sure about the H/V supply because none of my sensors use the default, both are powered by external 5V/400V PSU boards.

The main problem with the sensor "hanging" is the fact that it does not monitor the state of the connection.
If the CPU reinitialises, the pull-up is temporarily disconnected and the USB connection is dropped; the host can see this and eventually reconnects.
However, from time to time the connection stalls for some reason without the USB host noticing - the device is still shown in the USB device list, but it's not responding.

phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1051 - Posted: 29 Apr 2012 | 17:52:48 UTC
Last modified: 29 Apr 2012 | 17:55:41 UTC

I don't want to blame anyone, as this is a nice hobby project and a wonderful public science experiment. But I don't want to do your homework for you either:

Your display's specs:
LCD-AC-1602E-DLA A/KK-E12 C PBF
http://sklep.avt.pl/p/pl/486756/lcd+2x16+lcd-ac-1602e-dla+akk-e12+c+pbf+blackline.html ...20mA (they call it "eco")

buzzer http://www.tme.eu/html/EN/electromagnetic-sounders-_12mm-with-generator/ramka_1024_EN_pelny.html ....30mA

so 50 mA already, without the ATtiny4313, the DC-DC converter, the iron losses of the transformer and the op-amp.

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1052 - Posted: 29 Apr 2012 | 18:13:01 UTC - in response to Message 1051.

The buzzer uses 30 mA only when it beeps - that's like 1 s total every 5 minutes during normal operation. If I ever hear a continuous beep, the last thing I'll worry about will be the USB power consumption.

The LCD might have 20 mA written in its specs, yet it never consumes that much. My v2.01 uses around 30 mA in total at Vcc = 5.25 V. An additional 3.5 mA is required for the H/V supply to operate at background radiation levels (keep in mind that the default onboard supply is disabled on both of my sensors).



phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1058 - Posted: 30 Apr 2012 | 15:16:55 UTC

A quick survey on the device hangs up / USB connection gets lost problem:

1. How long did you run the device?
2. What is displayed on your device?
3. Is the backlight on? (even if you switched it off before)
4. Is the text and backlight static or flashing?
5. Is the device displayed in windows device manager?
6. What is the firmware version of your device? (you get it from device manager or stderr.txt)

Szopler
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 16 Apr 11
Posts: 139
Credit: 400,030
RAC: 0

Message 1059 - Posted: 30 Apr 2012 | 16:15:11 UTC - in response to Message 1058.
Last modified: 1 May 2012 | 15:53:35 UTC

A quick survey on the device hangs up / USB connection gets lost problem:

1. How long did you run the device?
2. What is displayed on your device?
3. Is the backlight on? (even if you switched it off before)
4. Is the text and backlight static or flashing?
5. Is the device displayed in windows device manager?
6. What is the firmware version of your device? (you get it from device manager or stderr.txt)


1. random
2. the LCD values change as in normal operation
3. no
4. backlight - static, text - as in normal operation
5. yes
6. doesn't matter

Only the USB connection hangs...

phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1061 - Posted: 30 Apr 2012 | 19:31:33 UTC
Last modified: 30 Apr 2012 | 19:31:52 UTC

Thank you!
Would other users like to share their experiences?

Profile Ascholten
Send message
Joined: 17 Sep 11
Posts: 112
Credit: 525,421
RAC: 0

Message 1062 - Posted: 30 Apr 2012 | 20:15:54 UTC - in response to Message 1061.

I have my backlight on all the time.

The detector works as normal; it's just that it isn't sending the data to BOINC. When you check the task, you can see that it has hours elapsed and no data in it.

Most of the time it appears to hang at zero percent, but I have seen one or two occasions where it hung in the middle of a task, at 26 percent or something.

Aaron
____________


exsafs
Avatar
Send message
Joined: 25 Jun 11
Posts: 14
Credit: 5,359
RAC: 0
Message 1063 - Posted: 30 Apr 2012 | 23:50:43 UTC - in response to Message 1062.

Just want to share my observations in a real radiation field.
I have had the opportunity to test your device in a certified calibration laboratory for radiation detectors. There they use Cs-137 and Co-60 sources from 70 MBq to 70 TBq to generate radiation fields between 10 µSv/h and 10 Sv/h under well-defined geometries. An optical camera relays the detector readings into the control room.

One big problem during the tests: above 100 µSv/h the LCD display was too dark to read. At 10 µSv/h there was no problem inspecting it visually. My suspicion is the buzzer, which should probably be turned off at higher dose rates, because it simply needs too much juice.

The main outcome of the test: for Cs-137 (gamma line at around 600 keV) the detector reads 40% less than it should; at the 1200 keV (mean) gamma energy of Co-60 it is only 20% less. This was measured for dose rates between 1 and 100 µSv/h.

I have done some more testing with radioactive sources in the lab; I will post some pictures later. I will also try to do a comparison test with a radioactive source (buzzer on vs. buzzer off).

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1064 - Posted: 1 May 2012 | 7:53:13 UTC

My detector never hangs; the buzzer is always off, the backlight is always on. I asked the manufacturer of my motherboard, and it does not restrict USB devices to the current they request.

@Aaron, the script I published monitors the R@H task. If the task hangs, the script attempts to reset the USB port. If you try the script and it gets the task running again, that would be confirmation of sorts. I run the script, but my R@H task never hangs, so I can't tell if resetting the port helps or not.

@exsafs, good to hear from you!
____________

Szopler
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 16 Apr 11
Posts: 139
Credit: 400,030
RAC: 0

Message 1066 - Posted: 1 May 2012 | 16:05:44 UTC
Last modified: 1 May 2012 | 16:31:02 UTC

exsafs - Big thanks! It's great to know something like this.
But I see that we should change the equation... :/

Can you run tests with only the Geiger tube inside the measurement room / box?
I think the problem with the LCD readings is caused by the radiation itself, not by current consumption - or something is wrong with your LCD (we had one that was broken in some way: it ran really hot and randomly consumed 100-200 mA).
We did tests (with the first version of the detector) inside the elevator in Pripyat ;) with the tube on long wires, and when the readings were about 300 µSv/h everything was fine with the rest of the detector.


I'm back - there is something I almost forgot!
If you have a V2 detector (V2.01), there is a 10 ohm resistor in series with the supply line. You can try shorting it with a wire and then check the HV supply voltage. If it is about 400 V, the detector should work well without this resistor.

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1067 - Posted: 2 May 2012 | 8:02:20 UTC

The problem with LCD readability is mostly caused by the contrast voltage, which is set by a voltage divider connected directly to the +5V supply. Unfortunately the LCD is very sensitive to the slightest changes, so even a minor change on +5V also changes the contrast.

____________

Alessandro Freda
Send message
Joined: 17 Aug 11
Posts: 38
Credit: 1,731,476
RAC: 110

Message 1068 - Posted: 2 May 2012 | 15:24:01 UTC - in response to Message 1058.

A quick survey on the device hangs up / USB connection gets lost problem:

1. How long did you run the device?
2. What is displayed on your device?
3. Is the backlight on? (even if you switched it off before)
4. Is the text and backlight static or flashing?
5. Is the device displayed in windows device manager?
6. What is the firmware version of your device? (you get it from device manager or stderr.txt)


1. 24/7
2. the LCD values change as in normal operation
3. yes
4. backlight - static, text - as in normal operation
5. have not checked
6. 2.51 (from http://radioactiveathome.org/boinc/test123a.php)

This refers to only one event, using the sensor on a very old PIII and an unused USB port that had problems in the past with an optical mouse and other devices; it now works fine on a new Q9400.

Profile Ascholten
Send message
Joined: 17 Sep 11
Posts: 112
Credit: 525,421
RAC: 0

Message 1069 - Posted: 2 May 2012 | 21:36:48 UTC - in response to Message 1068.

My hangs tend to be on an older Pentium-class machine running XP - probably 7 or so years old, a beat-to-hell old laptop. The others rarely hang, if at all.
Dagorath, I will try to get the supporting software installed, give your script a shot this weekend and report back how it works. That computer hangs pretty reliably every two or so days, so hopefully we should get a good test of your script there.

I do want to thank you for taking the time to do the script to help the project out.

Aaron
____________


phys
Avatar
Send message
Joined: 28 Apr 12
Posts: 24
Credit: 0
RAC: 0
Message 1070 - Posted: 2 May 2012 | 22:25:45 UTC
Last modified: 2 May 2012 | 22:45:45 UTC

So let's put the pieces together.

First I loaded the firmware into the simulator and checked the routines. I found that, as Szopler pointed out, pulse counting is done with hardware counter 1. However, the interrupt routine I mentioned is still there and actually serves no purpose. I think that at some point during hardware development INT1 was used for software counting and then forgotten. Now INT1 is connected to the USB D- line and USB uses interrupt INT0 (if D+ and D- are not swapped again). So it's not a serious bug - it might or might not interfere with USB under rare conditions - but the routine should be cleaned out of the source code.

Next there is the display dimming. As TJM pointed out, the display contrast is controlled with a resistor network, so a certain voltage is applied to the display controller. He already said it is sensitive to the supply voltage - so why is the supply voltage changing? Because of current spikes! And what is the biggest power consumer? Well, maybe the buzzer, and for a very short period after each detection maybe the HV circuit. Turning the buzzer off is a good idea.

Third, there are the USB hangups. From the survey, no one reported a rebooting device, so the error is presumably on the PC side - for example the PC starts sending data to a different USB address and the device is not aware of the change. On its wiki page, V-USB recommends monitoring the bus by implementing suspend mode; then the device is aware of the USB state and can eventually reconnect. The USB signalling is done at a different voltage of about 3.7 V, with zener diodes limiting the voltage on the device side. If the supply voltage drops below this range during USB communication, transmission errors occur.

Last, there are the SBM-20 tube specs. It is calibrated against Co-60, and from the datasheet you get a conversion factor of ~150 (counts per minute per µSv/h). But normally you are interested in the fallout from accidents, so you calibrate against Cs-137 (which has a different mean gamma energy and a slightly different detection cross-section); one guy found a conversion factor of ~171. So why is your reading so far off? The dead-time correction for the tube may account for just 5%, but that is easily done in software, and no other bugs were found in the software. So is the tube simply not that sensitive?

Actually you use old STS-5 tubes, a predecessor of the SBM-20. I did not find a datasheet. Are there any users around who are able to compare the two tubes? Then you could change the conversion factor accordingly if there are any differences (or if they do not use the same voltage, etc.). In the firmware the conversion is done by counting all pulses over 35 seconds and dividing by 100 (which works out to the same ~171 factor). But why does the BOINC client count over a different timespan of ~40 s? (Besides, I already mentioned that it is more scientific to use a much longer sampling time for small doses and a fast alarm mode if you leave the confidence interval of the background.)
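A quick check that the 35-second trick really is the same ~171 factor:

\[
\frac{N_{35\,\mathrm{s}}}{100}
= \frac{N_{35\,\mathrm{s}} \cdot \frac{60}{35}}{100 \cdot \frac{60}{35}}
= \frac{\mathrm{CPM}}{171.4},
\]

i.e. dividing the 35-second count by 100 is equivalent to dividing the per-minute count by about 171.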

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1071 - Posted: 3 May 2012 | 9:57:44 UTC - in response to Message 1070.
Last modified: 3 May 2012 | 11:12:18 UTC


Actually you use old STS-5 tubes, a predecessor of the SBM-20. I did not find a datasheet. Are there any users around who are able to compare the two tubes?


I tested both the STS-5 and the SBM-20 with (weak) radiation sources; the readings do not differ by more than 2-3%. One of the Russian sites (can't remember which - I'd have to google around a bit) states that these tubes have the same characteristics.


But why does the BOINC client count over a different timespan of ~40 s? (Besides, I already mentioned that it is more scientific to use a much longer sampling time for small doses and a fast alarm mode if you leave the confidence interval of the background.)


The sensor returns raw data (the pulse count and its internal timer value), so the actual sample time does not matter. I'm not a fan of short sample times myself, as they just fill the database with tons of records which do not represent much on their own and make parsing the data harder.

The /171.2 divider is for a full minute, and the actual sample time (which differs slightly between samples) is taken into account as well. The 171.2 comes from converting our older (more complicated) formula to a simpler form; whether the formula was 100% accurate I have no idea.
Even if all the formulas are bugged, it does no permanent harm, as the DB stores the raw values.
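For reference, the conversion as described boils down to something like this (a sketch of the formula, not the actual client code):

#include <stdio.h>

/* Scale the pulses of one raw sample to a per-minute count, then apply the /171.2 divider. */
static double usv_per_hour(unsigned long pulses, double sample_time_s)
{
    double counts_per_minute = pulses * 60.0 / sample_time_s;
    return counts_per_minute / 171.2;
}

int main(void)
{
    printf("%.2f uSv/h\n", usv_per_hour(28, 40.0));   /* 28 pulses in ~40 s -> ~0.25 uSv/h */
    return 0;
}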



Next there is the display dimming. As TJM pointed out, the display contrast is controlled with a resistor network, so a certain voltage is applied to the display controller. He already said it is sensitive to the supply voltage - so why is the supply voltage changing? Because of current spikes! And what is the biggest power consumer? Well, maybe the buzzer, and for a very short period after each detection maybe the HV circuit. Turning the buzzer off is a good idea.


Well, let's face it - unless the host has very sh!tty USB, there is no way that a device consuming even 100-150 mA can actually cause voltage drops.
The 2.01 sensor is the one mostly affected by this dimming effect, due to the unfortunate 10R series resistor.

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1072 - Posted: 3 May 2012 | 18:19:05 UTC - in response to Message 1071.

The sensor returns raw data (the pulse count and its internal timer value), so the actual sample time does not matter. I'm not a fan of short sample times myself, as they just fill the database with tons of records which do not represent much on their own and make parsing the data harder.


There was another poster, who seemed to understand the complexity of gathering meaningful radiation samples, who mentioned that short sample times are a bad idea. I don't remember the poster's name and can't find the thread he posted in, but I think we probably all recall his advice. Which was promptly ignored.

I've discussed the problem with a Ph.D. in math from our local university and he agrees that it is a classic Poisson process and that a sample of at least 400 counts is required for meaningful readings. Maybe the problem with too-short sample times can be fixed even if R@H refuses to fix it. The raw data is available using the easy-to-use method described in this thread.

@phys,

Would it make sense to take the raw data from a given detector, add up the pulse counts and sample times from successive samples until the total pulse count reaches 400, and then store that total along with the total sample time as a single sample in a database independent of the R@H database? Then the next 20 or so samples would be merged into one sample, and so on for all the raw data and for each detector (312 detectors reporting now!). From that database meaningful graphs could be plotted, etc. (Long-term funding for the database is already in place, and a website is established.)

____________

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1073 - Posted: 3 May 2012 | 18:38:26 UTC - in response to Message 1072.

Actually there is no need to recalculate anything from the DB, as the samples can be scaled up easily - just take a number of subsequent samples, sum their pulses and divide by the sum of their sample_time values; what you get is exactly the same result as if it were one longer sample.
There are no disadvantages to running short sample times other than the storage space used and more data to process.
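In code, the merge is as trivial as it sounds (hypothetical types and numbers, just to illustrate):

#include <stdio.h>

struct sample { unsigned long pulses; double seconds; };

/* Sum n consecutive samples; the resulting rate is the same as one long sample would give. */
static double rate_cps(const struct sample *s, int n)
{
    unsigned long pulses = 0;
    double seconds = 0.0;
    for (int i = 0; i < n; i++) {
        pulses  += s[i].pulses;
        seconds += s[i].seconds;
    }
    return pulses / seconds;
}

int main(void)
{
    struct sample s[] = { {17, 40.2}, {22, 39.8}, {19, 40.1} };
    printf("%.3f cps\n", rate_cps(s, 3));
    return 0;
}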

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1074 - Posted: 3 May 2012 | 23:06:08 UTC - in response to Message 1073.

Yes, you could reduce the database to at least 1/10 its current size and provide some room for growth. Is there a disadvantage to that?

There is another disadvantage... the project demonstrates that it thinks sloppiness is OK. That does not instil confidence in the community you are trying to reach.

What disadvantage is there to altering a few constants in the science application, recompiling it and collecting meaningful samples?

What disadvantage is there to admitting a mistake was made and correcting it? What advantage is there to pretending there was no mistake?

Well intentioned but naive people will use your meaningless raw data and not realise it needs to be corrected. You could avoid that by providing good data. No, putting up warning notices won't always work. Good data will always work.


____________

Szopler
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 16 Apr 11
Posts: 139
Credit: 400,030
RAC: 0

Message 1075 - Posted: 4 May 2012 | 11:23:26 UTC
Last modified: 4 May 2012 | 11:24:52 UTC

There was another idea - to record every single pulse with its exact GPS position and time. Then, if we had many detectors isolated from Earth-originating gamma radiation, we would have a huge gamma-ray eye! Each tube would act as a pixel in a digital camera. But the computing power and storage needed for this are HUGE and we will need more military GPSes ;)

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1076 - Posted: 4 May 2012 | 12:06:04 UTC - in response to Message 1074.

Yes, you could reduce the database to at least 1/10 its current size and provide some room for growth. Is there a disadvantage to that?

There is another disadvantage... the project demonstrates that it thinks sloppiness is OK. That does not instil confidence in the community you are trying to reach.

What disadvantage is there to altering a few constants in the science application, recompiling it and collecting meaningful samples?

What disadvantage is there to admitting a mistake was made and correcting it? What advantage is there to pretending there was no mistake?

Well intentioned but naive people will use your meaningless raw data and not realise it needs to be corrected. You could avoid that by providing good data. No, putting up warning notices won't always work. Good data will always work.



Short samples are not meaningless. They have one huge advantage over longer samples, which is data resolution.
With short samples it is *always* possible to get accurate readings over longer periods, it just requires specifying start/end timestamps + a little math (mysql does that with single query). Where's the problem ?
The selected period has a tolerance of sample_time on both ends, which might be useful when looking for radiation spikes.
Need smooth graphs? A running average does the job, and combined with some weighting it can even react quickly to large changes.

I'm not saying that the sample_time won't be changed in the future, as the 40s is an overkill and it's mostly a leftover from one of the first apps.

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1091 - Posted: 5 May 2012 | 1:42:38 UTC - in response to Message 1076.

Short samples are not meaningless. They have one huge advantage over longer samples, which is data resolution.


Data resolution in this case is an illusion and phys's lesson in the other thread demonstrates why that is so.

it just requires specifying start/end timestamps + a little math (mysql does that with single query).


Of course! But you don't make that clear anywhere on your website. Sure it's revealed in this thread but a year from now, after this thread sinks to the bottom of the thread list, that info becomes invisible. So will you make that advice more visible? I don't think any warning you put anywhere can possibly be as effective and reliable as simply providing statistically reliable raw data. Furthermore, statistically reliable raw data requires a fraction of the storage space you're using now and it would take you about 5 minutes to make the changes.

Where's the problem ?


I already explained that but I'm happy to explain one more time. You either get it then or you don't. The problem is naive people will use your raw data without correcting (summing) the meaningless low pulse counts and count period and arrive at erroneous conclusions.

If it required a huge effort on your part to avoid that then I could understand why you wouldn't want to correct it. However, it seems pretty obvious a correction would take only a few minutes of your time so I really have to question what you guys are doing and why you're doing it. If your goal is to provide white noise then you've succeeded. If your goal is to provide random numbers then you have probably succeeded at that too (though the randomness is arguable). However, if your goal is to provide meaningful data from the detector network then you have failed. Indeed that is my opinion but if you seek expert guidance on the matter they will confirm my opinion.

I'm not saying that the sample_time won't be changed in the future, as the 40s is an overkill and it's mostly a leftover from one of the first apps.


That's a good sign, but you should use the pulse count rather than an arbitrary time to determine the sample duration. In other words, read the pulse counter on the detector as often as you want - 4 s, 40 s, 400 s, it doesn't matter. What does matter is that you accumulate the pulse count until you have at least 400 pulses; whether that takes 2 s or 20000 s doesn't matter. Phys and the math professor I talked to at our local university agree on the need for 400 pulses minimum. Please verify that through discussions with whatever math/physics experts you choose in your own locality.
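Here is roughly what I mean, as a sketch (made-up numbers and names, not R@H code) - accumulate raw samples until at least 400 pulses have been collected, then emit one merged sample:

#include <stdio.h>

struct sample { unsigned long pulses; double seconds; };

int main(void)
{
    struct sample raw[] = { {150, 380.0}, {160, 400.0}, {155, 390.0}, {149, 385.0} };
    unsigned long pulses = 0;
    double seconds = 0.0;

    for (size_t i = 0; i < sizeof raw / sizeof raw[0]; i++) {
        pulses  += raw[i].pulses;
        seconds += raw[i].seconds;
        if (pulses >= 400) {             /* 1/sqrt(400) = 5% relative error reached */
            printf("merged: %lu pulses over %.0f s = %.3f cps\n",
                   pulses, seconds, pulses / seconds);
            pulses = 0;
            seconds = 0.0;
        }
    }
    return 0;
}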

____________

Profile krzyszp
Project administrator
Project developer
Project tester
Project scientist
Avatar
Send message
Joined: 16 Apr 11
Posts: 383
Credit: 787,492
RAC: 102

Message 1092 - Posted: 5 May 2012 | 2:20:39 UTC - in response to Message 1091.

Dagorath, I'm really confused...
The 40 s is only important for what is shown on the detector LCD and as the period for sending data from the detector controller. The database collects all readings from the controller, and all of those readings are freely available.
You can calculate pulses over any period of time - this is not restricted in any way!
The database is published every hour (by cron). The map is created on demand, as is TJM's script. If your detector is connected 24/7 then you can get information about all pulses in a 24-hour period (or even longer)!
I know that the charts can be useless, as they are just fancy graphics, but everybody can build their own, and everybody can analyse the data in any way they like, including deeper calculations and analyses... But sometimes that requires more knowledge, and I'm happy that people with that knowledge are starting to visit our site and write posts - we learn from you!

What we are trying to do is build a base for anyone who is interested in getting independent data...

I'm not a scientist; my own goal is to give people RAW data and the opportunity to play with it and (maybe) create something better than we can...

As I see from your post, we should publish info about how to read the data on the main website - maybe that is the solution and/or answer?
____________
Regards,
Krzysztof 'krzyszp' Piszczek
Android Radioactive@Home Map
Android Radioactive@Home Map - donated
My Workplace

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1094 - Posted: 5 May 2012 | 3:12:18 UTC - in response to Message 1075.

There was another idea - to record every single pulse with its exact GPS position and time.


Counting single pulses serves no purpose. It would be a bad waste of resources and nothing more. You need a minimum of 400 pulse counts to have 1 reliable sample. Maybe you don't like that fact, but it is a fact of Poisson processes whether we like it or not.

Then, if we had many detectors isolated from Earth-originating gamma radiation, we would have a huge gamma-ray eye!


Maybe I don't understand what you mean by "Earth-originating gamma radiation", but I think if you isolate a detector from that source you effectively isolate it from all sources. Am I missing something?


Each tube would act as a pixel in a digital camera. But the computing power and storage needed for this are HUGE


I don't think you can afford the storage and I doubt volunteers will donate money for the storage when they learn that your idea just won't work.

and we will need more military GPSes ;)


According to http://en.wikipedia.org/wiki/GPS, civilian GPS units are just as accurate as military units because US President Bill Clinton passed a law in 2000 requiring the US Department of Defense to provide the same satellite signal to civilian receivers as is provided to military receivers. The only difference between civilian and military GPS is that civilian GPS receivers are not allowed to function above 18 km altitude and at speeds exceeding 515 metres per second, which makes civilian GPS useless for guiding an ICBM or other weapons.

Civilian and military GPS is accurate to only 20 metres. You can mark your detector's position on the Google map with 5 metre accuracy or better. Since most detectors will be attached to a desktop computer and move very little, GPS won't provide much of an advantage and will in many cases be less accurate than simply pinning your location on the map. A moving detector is a different matter, of course.

Another thing about GPS.... it requires the receiver to have line of sight to 4 or more satellites. If there are buildings or even trees between the receiver and the satellites it either won't work or it becomes inaccurate and unreliable. Land surveyors get around that problem to some extent by locating hubs in line of sight to the satellites and "triangulating" their portable survey instruments to the hubs but it requires very expensive instruments. In some locations they can't use GPS at all because the hubs cannot be located in line of sight to the satellites due to mountains or trees. Radiation detectors equipped with GPS will fail under those circumstances too.

____________

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1137 - Posted: 20 May 2012 | 13:55:52 UTC - in response to Message 1094.
Last modified: 20 May 2012 | 13:56:38 UTC

Well, I'd like to see where exactly you found the info about 20 m precision.
I got a couple of GPS receivers because I'm a runner and I use them to track my activities.
The worst one I had, when it calibrated the altitude (takes a while after cold boot), used to report <10m very often.
The newer model I use now, under a fairly clear sky (no need to worry about trees unless you're in deep forest), reports <3 m when left stationary for a while. I didn't believe it at first, but it's actually easy to verify by checking the same spot(s) again; geocachers could probably tell you a bit about this as well.
The same receiver has no problem running inside a building; however, it takes some time to get a position, as it averages the readings - a single sample jumps wildly around the centre point. Now tell me, what does a 50 m jump mean when creating a radiation map? I'm not sure if you noticed, but even our data export has no such accuracy, as the last digits are removed to protect privacy (still, the map and test123 both report exact position I believe).

Profile Ascholten
Send message
Joined: 17 Sep 11
Posts: 112
Credit: 525,421
RAC: 0

Message 1139 - Posted: 20 May 2012 | 18:27:56 UTC - in response to Message 1137.

I've got a GPS on my boat, just a standard one that's accurate to about 2 to 3 m. Even my 5-year-old handheld is good to 3 m. Getting a signal inside a building depends on the structure etc.; however, I am finding that the newer devices can lock on a LOT faster than the older ones. My old handheld used to take about a minute to wind up; the new one is already locked and homing in on accuracy in maybe 20 seconds.

Aaron
____________


Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1141 - Posted: 20 May 2012 | 22:45:49 UTC

Sorry, I should have given a citation. I got the 20 meter number from the Wikipedia article on GPS linked above. See the 10th paragraph in the section titled History, partially quoted below.

Initially, the highest quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded (Selective Availability). This changed with President Bill Clinton ordering Selective Availability to be turned off at midnight May 1, 2000, improving the precision of civilian GPS from 100 meters (330 ft) to 20 meters (66 ft).


Further down in the article, in the Accuracy Enhancement and Surveying section, it speaks of 2 mm accuracy using the expensive kind of equipment surveyors use, which I was aware of because my friend is a surveyor and owns that type of equipment. He also had a Garmin handheld unit back in the 90s and it was definitely limited to 100 meter accuracy; he used it for "getting close". I think the proper way to interpret the wiki article is that 20 meter precision was probably what the average civilian could expect from affordable handheld units in 2000, though work was already underway at that time to improve precision. Sorry if I misinterpreted it in my previous post on the topic.

Now tell me, what does a 50 m jump mean when creating a radiation map?


I don't understand the question. Can you rephrase it please? I don't know what you mean by "a 50 meter jump".


I'm not sure if you noticed, but even our data export has no such accuracy, as the last digits are removed to protect privacy (still, the map and test123 both report exact position I believe).


No, I didn't notice that about the data export. I don't know what test123 is, please explain.

____________

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1149 - Posted: 31 May 2012 | 21:26:28 UTC - in response to Message 1141.

I thought everyone knew our first page, which displays some basic data: http://radioactiveathome.org/boinc/test123a.php :P
That page displays full locations (this might be changed), while the exported data is rounded, without the last 2 digits AFAIR (which doesn't really matter for data interpretation).
About the 50 m jumps - I was referring to a GPS placed indoors. Some receivers will be able to get a position (well, obviously not in all buildings), but they will either provide rounded data or the position will jump wildly; it should still be usable in both cases. Anyway, I don't see the point of using GPS with a stationary sensor, as it has no advantage over entering the position manually.


Tino Ruiz
Send message
Joined: 3 Nov 11
Posts: 2
Credit: 0
RAC: 0
Message 1214 - Posted: 16 Jul 2012 | 14:59:03 UTC

So... where do we stand? Is this project going to change its stance and allow meaningful background radiation measurements (according to Dagorath/Phys and others), or should I contribute money towards another radiation project that has a better understanding of radioactive decay?

Profile TJM
Project administrator
Project developer
Project tester
Send message
Joined: 16 Apr 11
Posts: 291
Credit: 1,382,673
RAC: 45

Message 1215 - Posted: 16 Jul 2012 | 15:40:15 UTC - in response to Message 1214.

So are you saying that our data is meaningless?
The client and server both return continuous raw data (as long as the sensor is running), which can be transformed in many ways with just a little math involved.

Tino Ruiz
Send message
Joined: 3 Nov 11
Posts: 2
Credit: 0
RAC: 0
Message 1218 - Posted: 18 Jul 2012 | 18:00:48 UTC - in response to Message 1215.

I'm not a radiation expert, but from what I've read so far from Phys/Dagorath and others, the way this project is set up does not allow meaningful background radiation measurements, as the sensor does not take statistically reliable samples from which to build the whole picture. I don't know how else to explain it to you, as others have already put it more eloquently than I can.

The data is not meaningless if you have a different aim, like making a random password generator with input of the recorded data etc. The data IS meaningless *right now* if the aim is to measure background radiation, because it is NOT set up to accurately measure background radiation. That's my take on it anyway.

It's unfortunate that there isn't some middle-ground we can agree on and make this project better. It would be a shame to realize after many years that the project was just wasting resources instead of contributing something useful to the community.

But anyways, I will continue to watch and hope that this project will somehow be successful, whatever that mission may be. :-)

jacek
Send message
Joined: 6 Nov 12
Posts: 10
Credit: 0
RAC: 0
Message 1458 - Posted: 7 Nov 2012 | 15:03:30 UTC

Interesting thread. I find the comments from phys very constructive and the developers should use his expertise to improve the project. Were those bugs corrected in the software?

On the other hand, Dagorath's loud mouth is very offensive and I'm frankly surprised he was not banned from here. His comments are totally off mark and he scared some potential volunteers away (Tino Ruiz).

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1465 - Posted: 8 Nov 2012 | 16:35:47 UTC - in response to Message 1458.

On the other hand, Dagorath's loud mouth is very offensive


You are offended too easily.

and I'm frankly surprised he was not banned from here.


Frankly, you are surprised too easily too.

His comments are totally off mark


Nonsense.

and he scared some potential volunteers away (Tino Ruiz).



Nonsense and exaggerations on your part. You said "volunteers" which means more than one yet you mention only one name. You purposely plant a suggestion of something much worse than the real situation. In other words you tried a sneaky lie. Nice try but it doesn't wash anywhere outside the USA where, as was noted in the London Times, they're stupid enough to vote Bush in twice.

Furthermore, Tino Ruiz's reason(s) for not joining are all about the way the data is analyzed/presented here and I really don't see how I can be blamed for that. I think you have a problem with speaking/hearing the truth and I predict that your loathing for the truth will drive you to say even more very foolish things on this matter.

____________

jacek
Send message
Joined: 6 Nov 12
Posts: 10
Credit: 0
RAC: 0
Message 1468 - Posted: 8 Nov 2012 | 22:11:05 UTC

I'm not going to tolerate being called a liar and being insulted as an American.

How much longer is Dagorath going to insult new members without moderators intervening?

The rules clearly state:
"No messages intended to annoy or antagonize other people, or to hijack a thread.
No messages that are deliberately hostile or insulting.
No abusive comments involving race, religion, nationality, gender, class or sexuality."

My enthusiasm for this project is rapidly disappearing. I saw another active USA member who severed his ties with this project just because of this BS being allowed to happen.

Over and out.

Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1469 - Posted: 9 Nov 2012 | 14:50:19 UTC - in response to Message 1468.

Bingo! You insult me first then whine about me insulting you. ROFLMAO!

I think you and Tino Ruiz are the same person. I think you created the Tino entity just so he could tell the project he's not joining which would then allow the jacek entity to bring up this bogus claim that I am a loudmouth and the cause for volunteers not joining. Like I said, that kind of nonsense is de rigeur in the USA but the rest of the world just laughs at it and now they laugh at you. This project can't keep up with the demand for detectors from new participants and Tino is the only potential volunteer to have said he's not joining yet here you are screaming blue murder about me driving participants away by speaking the truth. You and your straw man argument are hilarious! Don't stop, please. We all want to see what your next stroke of brilliance will be.

____________

ShortlegCats
Avatar
Send message
Joined: 21 Dec 12
Posts: 38
Credit: 0
RAC: 0
Message 1532 - Posted: 21 Dec 2012 | 5:52:25 UTC

Hello, I'm new here trying to get up to speed quickly.

I assume the following question pertains to revision 2.xx of your USB device, but correct me if I'm wrong:

Out of all the conversation about the current requested from the USB port, what is the reason for choosing such a low value? Can this value not be increased to give more of a "buffer" and end the debate about the buzzer or the screen drawing too much current at peak usage?

Is this "issue" irrelevant to production units of the current 3.xx series?







Dagorath
Avatar
Send message
Joined: 4 Jul 11
Posts: 151
Credit: 42,738
RAC: 0

Message 1545 - Posted: 24 Dec 2012 | 6:07:57 UTC - in response to Message 1532.

USB is a complicated specification and I am far from being an expert on it, so I could be wrong; but since nobody else seems willing to take a crack at the answer, I'll give it a try - on the condition that if you quote me I'll deny everything ;-)

USB is able to supply only small amounts of current - 500 mA per port, IIRC. If the sum of the currents required by all of the USB devices on any leg (say, a hub) exceeds the limit, then I believe that leg protects itself by shutting down. I think each device on a given leg must declare how much current it needs, and if the sum of the declarations exceeds the limit then the last device is assumed to be the offender and is denied access, or something like that.

That constraint is part of the USB specification (it's intended primarily for communication, not as a power supply), so it will be a concern for any detector that uses USB. I don't know why it became a potential issue for this project's device. Please refresh my memory if I seem to have forgotten or appear to be unaware of previous discussions/concerns.
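If I understand it right, with the V-USB stack that phys mentioned earlier in this thread the requested current is just a compile-time setting, declared to the host in the configuration descriptor. A sketch (assuming stock V-USB; not necessarily how this firmware is set up):

#include <stdio.h>

/* In V-USB the requested bus current is set in usbconfig.h, in milliamperes. */
#define USB_CFG_MAX_BUS_POWER 50

int main(void)
{
    /* V-USB puts this into the configuration descriptor's bMaxPower field,
       which is encoded in units of 2 mA. */
    printf("bMaxPower = %d (i.e. %d mA requested from the host)\n",
           USB_CFG_MAX_BUS_POWER / 2, USB_CFG_MAX_BUS_POWER);
    return 0;
}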
____________
