[SystemSafety] AI and the virtuous test Oracle
Les Chambers
les at chambers.com.au
Thu Jun 22 09:45:40 CEST 2023
Phil
I agree that your approach is practical and necessary today.
But imagine tomorrow, when AI agents are replicated in thousands of critical
applications. The courts are likely to be swamped with this class of legal
case. We will need an AI agent to adjudicate.
Your solution is implemented to the right of bang; I am addressing
initiatives that can be taken to the left. Given that safety and quality must
be engineered into systems, we must now deal with engineering morality.
An off-list responder has posed two interesting questions:
Q: Whose virtues, and whose morals, would the Oracle apply?
Candidate Answer: The best we can do is seek out foundational human wisdom
which, to date, has stood outside of time. More than 2000 years is good. I am
a student of the ancient Stoic philosophers: Zeno, Seneca, Marcus Aurelius and
Epictetus.
Various religious texts also contain wisdom validated over thousands of years.
Q: Will it be wrong for an LLM or AI agent to pass judgement on the morality of
a target system given the semi-subjective nature of virtues? Who decides how a
moral AI gets programmed?
Candidate Answer: I would hope that there are foundation morals that the whole
of humanity can agree upon. It could be that the Oracle's concept of virtue
must reflect the society in which it operates. I've worked with Australians,
Americans, Chinese, Thais, Arabs, Brazilians, Mexicans, Germans, French,
Belgians, English and Indians. I think we all love our children, and agree we
should not kill them, or sell them into slavery.
Where there is disagreement within a society, the AI would have to throw an
exception and let humans decide. I can envision a time when system
requirements specifications have a chapter on community-specific, validatable
morality.
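
To make the exception-and-escalate idea concrete, here is a minimal sketch in
Python. It is purely illustrative: the Verdict values, the CommunityNorm
record, the HumanAdjudicationRequired exception and the 0.95 consensus
threshold are my own assumptions for discussion, not an existing tool or
standard.

# Illustrative sketch only; names, verdicts and the consensus threshold
# are assumptions, not an existing tool or standard.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    MORAL = "moral"
    IMMORAL = "immoral"


class HumanAdjudicationRequired(Exception):
    """Raised when the community's consensus on a norm is too weak to decide."""


@dataclass
class CommunityNorm:
    name: str          # e.g. "do not sell children into slavery"
    agreement: float   # fraction of the community endorsing the norm, 0.0-1.0


def adjudicate(violations, norms, consensus=0.95):
    """Return Verdict.MORAL or Verdict.IMMORAL, or escalate to humans.

    violations: dict mapping a norm name to True if the system under test
    violated that norm in the scenario being judged.
    """
    for norm in norms:
        if not violations.get(norm.name, False):
            continue
        if norm.agreement >= consensus:
            return Verdict.IMMORAL   # a near-universal norm was broken
        # The society is divided on this norm: throw an exception and
        # hand the decision back to humans.
        raise HumanAdjudicationRequired(norm.name)
    return Verdict.MORAL


# Example use:
#   norms = [CommunityNorm("do not kill", 0.99),
#            CommunityNorm("always defer to elders", 0.60)]
#   adjudicate({"do not kill": True}, norms)             -> Verdict.IMMORAL
#   adjudicate({"always defer to elders": True}, norms)  raises
#                                                        HumanAdjudicationRequired

On this view, the community-specific, validatable morality chapter of a
requirements specification reduces to the list of norms, their measured levels
of agreement, and the consensus threshold the community is prepared to sign
off on.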
In any event, I believe that engineers should be trained to deal with these
wicked problems; to at least know the right questions to ask. Ergo philosophy
should be a foundation subject in all years of an engineering education. Such
courses would produce educated engineers as opposed to one-dimensional
technical automatons - technicians. I'm in touch with people revamping the
postgraduate engineering management curriculum at one of my local universities.
I am pushing this idea very hard.
Interested in your thoughts on the subject.
Les
> Les,
>
> Since you welcome riffs, I have something that is not as
> all-encompassing, but might have more immediate application.
>
> I propose that to the degree that "AI" technology is deployed in a way
> that supplants practical human judgement, the manufacturer of that
> system (in some cases just the AI part if it is an add-on component)
> should be held accountable for any action (or inaction) that, if
> associated with the human that was supplanted, would have constituted
> negligence. This should include situations in which a human is put in
> an untenable situation of supervising an AI in a way that puts
> unreasonable demands upon them, amounting to a "moral crumple zone"
> approach (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236).
> Liability/negligence if an AI is in substantive control of such a
> situation should attach to the manufacturer.
>
> This leads to a narrower, but perhaps still useful, oracle than you
> propose. If a loss event is caused by a lack of "reasonable" behavior by
> an AI, the manufacturer is on the hook for negligence, and the
> AI/manufacturer owes a duty of care the same as the human who was
> supplanted would have owed to whoever/whatever might be affected by that
> negligence. It has the advantage of reusing existing definitions of
> "reasonable person" that have been hammered out over decades of law. (To
> be sure that is not in the form of an engineering specification, but
> case law has a pretty robust set of precedents, such as crashing into
> something after your properly functioning vehicle ran a red light is
> likely to lead to the driver being found negligent.)
>
> This does not require the AI to behave the same as people, and is not a
> full recipe for "safe" AI. But it puts a floor on things in a way that
> is readily actionable using existing legal mechanisms and theories. If a
> reasonable person would have avoided a harm, any AI that fails to avoid
> the harm would be negligent.
>
> I've worked with a lawyer to propose this approach for automated
> vehicles, and it is starting to get some traction. What I write in this
> post (above) is a generalization of the concept beyond the narrow
> automated vehicle application.
> Details here:
> https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html
>
> -- Phil
>
> On 6/21/2023 7:14 PM, Les Chambers wrote:
> > Hi All
> >
> > I find myself reflecting on what will become of us, as systems engineering
> > best practice is overrun by AI.
> >
> > Practitioners report that neural networks are eating code.
> > Example 1: The vector field surrounding a Tesla motor vehicle is an output
> > of a neural network, not the result of software logic. Soon the neural net -
> > not code - will generate controls. The size of the code base is reducing.
> > (Elon Musk)
> > Example 2: the ChatGPT transformer code base is only 2000 LOC (Mo Gawdat
> > https://youtu.be/bk-nQ7HF6k4)
> >
> > The intelligence resides in terabytes of data, perceptrons and millions of
> > weighting parameters. All are gathered by automated means. Not subject to
> > human review.
> >
> > Ergo what will become of our trusty barriers to dangerous failure:
> > 1. Safety functions - gone
> > 2. Verification - gone
> > 3. Code reviews - gone
> > 4. Validation - How?
> >
> > On validation, may I suggest the moral AI: a test oracle built on a
> > virtuous dataset, capable of interrogating the target system to determine
> > virtue. Test outcomes will morph from pass/fail to moral/immoral.
> >
> > Credible industry players have predicted that soon we will have AIs orders
> > of magnitude smarter than us. Especially when they start talking to each
> > other. The bandwidth will be eye-watering - the increase in intelligence,
> > vertical.
> >
> > New barriers are required. Time to develop an AI that is on our side - the
> > side of ethics and the moral life. An adult in the room if you like. We
> > should birth this creature now and raise it as good parents.
> >
> > Let us not panic. May I put the proposition: virtue, like creativity, can
> > be algorithmic.
> > I have a sense of starting from the beginning - tabula rasa. I suggest that
> > high-level thinking on the subject could begin with ChatGPT prompts:
> > 1. What is the Stoic philosopher's concept of virtue?
> > 2. What are the elements of philosophy relevant to AI?
> >
> > Let us not forget our engineering mission: Guardians of the divine Logos,
> > the organizing principle of the universe, responsible for its creation,
> > maintenance, and order.
> >
> > Would anyone care to riff on this?
> >
> > Les
> >
> > --
> >
> > Les Chambers
> >
> > les at chambers.com.au
> > systemsengineeringblog.com
> >
> > +61 (0)412 648 992
>
> --
> Prof. Phil Koopman koopman at cmu.edu
> (he/him) https://users.ece.cmu.edu/~koopman/
--
Les Chambers
les at chambers.com.au
+61 (0)412 648 992