[SystemSafety] AI and the virtuous test Oracle - action now!
Dr. Brendan Patrick Mahony
mahonybp at tpg.com.au
Sun Jul 23 01:17:53 CEST 2023
> On 27 Jun 2023, at 1:45 pm, Les Chambers <les at chambers.com.au> wrote:
>
> [….] cling
> to regulating processes that have ceased to exist - are likely to be overrun
> and made redundant.
>
> In favour of organisations such as:
>
> - The Center for Human-Compatible AI at UC Berkeley
> - The Future of Life Institute
> - The Center for AI Safety (CAIS)
> - Stanford Center for AI Safety
>
> My view is that this is not a steady-as-she-goes situation
Late to the party, but I am moved to raise a few points.
• The question of whether AI raises a safety/security threat is fundamentally about the nature of our environmental control systems.
Those systems have many dangerous features of questionable utility that are tolerable only on the presumption that their operators have good intentions.
• My view of the state of the art is that automated reasoning shines in two areas.
- Plan-space search - making automated systems hard to beat for tactical and even strategic “game” play (a toy sketch follows this list)
- Classification - making automated systems hard to beat for “insights” into large data sets
• The ML community is particularly “entrepreneurial” and doesn’t like being told to look before it leaps. In particular, its attitude is that if AI is the problem, then AI must also be the solution.
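As an aside, for anyone not steeped in the planning literature, here is a minimal sketch (in Python, invented purely for illustration) of what “plan-space search” amounts to: a search over sequences of actions for one that reaches a goal state. The state space, action model and names below are my own toy assumptions, not anyone’s deployed system.

    # Toy illustration of plan-space search: breadth-first search over a
    # small state graph for a sequence of actions that reaches a goal.
    from collections import deque

    def plan(start, goal, successors):
        """Return a list of actions leading from start to goal, or None."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, next_state in successors(state):
                if next_state not in visited:
                    visited.add(next_state)
                    frontier.append((next_state, actions + [action]))
        return None

    # Hypothetical example: an agent moving along a 1-D track from cell 0 to cell 3.
    def successors(cell):
        return [("left", cell - 1), ("right", cell + 1)] if 0 <= cell <= 3 else []

    print(plan(0, 3, successors))   # ['right', 'right', 'right']

Real planners replace this blind breadth-first frontier with heuristics and learned evaluation functions, which is where the “hard to beat” performance comes from.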
Since I have not seen this discussed here, I’d like to raise a question that has recently been on my mind.
What do we think of the big push to authorise the deployment of massively complex critical systems using automated [low] assurance techniques, the so-called “continuous authority to operate” (cATO)?