So we found a tricky bug in Dave's original NN code. A loop assumed that the vectors of weights and inputs were of identical length when in fact one was shorter - so it iterated past the end of the shorter vector.
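The original code isn't shown here, but the bug pattern might have looked something like this minimal sketch (function and variable names are hypothetical): the weights vector carries one extra element, yet the loop runs over its full length and reads past the end of the inputs. A defensive version makes the length assumption explicit so a mismatch fails loudly.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the bug pattern: `weights` holds one extra
// element (the output weight), but the loop assumes both vectors have
// the same length and reads past the end of `inputs`. The memory read
// is often still inside a valid allocation, which is why valgrind
// stayed quiet.
double weighted_sum_buggy(const std::vector<double>& weights,
                          const std::vector<double>& inputs) {
    double sum = 0.0;
    for (std::size_t i = 0; i < weights.size(); ++i)  // one iteration too many
        sum += weights[i] * inputs[i];                // inputs[i] past the end
    return sum;
}

// Defensive version: state the length relationship up front so a
// mismatch aborts immediately instead of silently reading stale memory.
double weighted_sum(const std::vector<double>& weights,
                    const std::vector<double>& inputs) {
    assert(weights.size() == inputs.size() + 1);  // last weight is the output weight
    double sum = weights.back();                  // handle the extra weight explicitly
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += weights[i] * inputs[i];
    return sum;
}
```

The `assert` doubles as documentation: anyone reading the function at the location of the loop can now see that the last weight is special.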
So this was why the problems kept coming and going. My main thought after all this (very boring) debugging effort is how we can identify and locate this type of error more easily. Reading the code, it was not obvious that the last weight was the output weight (at least not at the location of the bug).
In one's own code it helps to follow the principle of least astonishment - but in other people's code? Is unit testing the only way? After all this confusing debugging I'm inclined to say yes. I don't think static analysis would have found this, and valgrind didn't complain because the memory area was valid. I feel like I still have a lot to learn, especially in the QA department!
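One small thing that would have made even a trivial unit test catch this (a sketch with hypothetical names, not the actual code): indexing with `std::vector::at()` instead of `operator[]`. Where valgrind saw a read of valid memory and said nothing, `at()` performs a bounds check against the vector's logical size and throws `std::out_of_range` the moment the length assumption is violated.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Same loop as the buggy pattern, but with checked indexing: if the
// weights vector is longer than the inputs vector, inputs.at(i) throws
// std::out_of_range instead of silently reading valid-but-wrong memory.
double weighted_sum_checked(const std::vector<double>& weights,
                            const std::vector<double>& inputs) {
    double sum = 0.0;
    for (std::size_t i = 0; i < weights.size(); ++i)
        sum += weights[i] * inputs.at(i);  // bounds-checked access
    return sum;
}
```

Even a single smoke test exercising this function with realistic layer sizes would have turned the intermittent misbehaviour into an immediate, reproducible exception.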