[SystemSafety] ISO/IEC DTR 5469 out for review
Peter Bernard Ladkin
ladkin at rvs.uni-bielefeld.de
Thu Jun 30 15:04:32 CEST 2022
The ISO/IEC tech report on AI and functional safety has been distributed for review. People not
plugged into their national committees (NCs) might like to contact someone who is, to get a copy and
comment on it to their NC. The comment period runs through August.
People might recall my preprint of April 2021 in which I argued that the 5469 authors had got the
wrong end of the stick.
I don't remember whether I also conveyed this:
* A Cambridge group around Ross Anderson has shown that you can get behaviourally different DLNNs
from the same training data simply by reordering that data (a toy sketch of this ordering effect follows below);
* A team of researchers from Google (including various academic wise owls) have shown that most
DLNNs (or all of them; I don't know) are highly underspecified. That is obviously consistent with the
Cambridge results.
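For anyone who wants to see the ordering effect concretely, here is a minimal numpy sketch, not the Cambridge group's experiment, with all names and parameters my own invention. It trains the same tiny network twice from identical initial weights, changing only the order in which the training examples are visited, and reports how often the two resulting networks disagree on fresh inputs.

    # Toy illustration (hypothetical): same data, same initial weights,
    # only the visit order of the training examples differs.
    import numpy as np

    def train_mlp(X, y, order, hidden=16, epochs=200, lr=0.1):
        """One-hidden-layer net trained by plain SGD, visiting examples
        in the given fixed order each epoch. Identical init across runs,
        so any behavioural difference comes from the ordering alone."""
        rng = np.random.default_rng(0)             # same init every run
        W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.5, (hidden, 1))
        b2 = np.zeros(1)
        for _ in range(epochs):
            for i in order:                        # only this order differs
                x, t = X[i:i+1], y[i:i+1]
                h = np.tanh(x @ W1 + b1)           # forward pass
                p = 1 / (1 + np.exp(-(h @ W2 + b2)))
                g = p - t                          # gradient of log-loss
                gW2 = h.T @ g
                gh = (g @ W2.T) * (1 - h**2)
                W2 -= lr * gW2; b2 -= lr * g.sum(0)
                W1 -= lr * x.T @ gh; b1 -= lr * gh.sum(0)
        # return a classifier: thresholded network output
        return lambda Z: (1 / (1 + np.exp(-(np.tanh(Z @ W1 + b1) @ W2 + b2)))) > 0.5

    # A small two-class problem (XOR-like, so the fit is genuinely nonlinear).
    rng = np.random.default_rng(1)
    X = rng.normal(0, 1, (40, 2))
    y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)

    net_a = train_mlp(X, y, order=np.arange(len(X)))        # natural order
    net_b = train_mlp(X, y, order=np.arange(len(X))[::-1])  # reversed order

    # Probe both networks on fresh points; any disagreement stems purely
    # from the order in which the training data were presented.
    probe = rng.normal(0, 1, (500, 2))
    disagree = (net_a(probe) != net_b(probe)).mean()
    print(f"fraction of probe points where the two nets disagree: {disagree:.2%}")

This is of course a far smaller and cruder setting than the published work, but it shows why ordering sensitivity is unsurprising: SGD follows a path through weight space that depends on the sequence of updates, not just on the set of examples.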
Senior people in AI think it quite possible, on the basis of these and other results, that deep
learning "does not work". It was suggested to me that reinforcement learning shows more promise.
PBL
Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de