<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto">Those who create risks should be required to control them adequately and to provide evidence that they have done so effectively. AI doesn’t change that. <br><br><div dir="ltr">Regards<div><br></div><div>Martyn</div></div><div dir="ltr"><br><blockquote type="cite">On 27 Jun 2023, at 05:53, Steve Tockey <steve.tockey@construx.com> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<div>Les,</div>
<div><br>
</div>
<div><i>“silicon is unbounded”</i></div>
<div><br>
</div>
<div>I disagree. Silicon <u><b><i>is</i></b></u> bounded, just in different ways than humans. For one, Turing computability: Turing machines, of which all modern silicon computers are a kind, are 100% deterministic. Human intelligence is non-deterministic.
Thus, I argue, human intelligence is not something that a computer could ever actually be capable of. See, for example:</div>
<div><br>
</div>
<div>https://youtu.be/i2trJEIFIvY</div>
<div><br>
</div>
<div>I agree that, “We are embarked; we need to deal with it”. But we also have to be aware of the limits of so-called AI.</div>
<div><br>
</div>
<div><br>
</div>
<div>— steve</div>
<div><br>
</div>
<div><br>
<div><br>
<div>On Jun 26, 2023, at 9:15 PM, Les Chambers <les@chambers.com.au> wrote:</div>
<br class="Apple-interchange-newline">
<div>
<div>Hi<br>
<br>
I put it to you, and the list in general, that a "discuss later" mentality is foolish <br>
in the extreme. The process of deploying intelligence in automated systems has <br>
fundamentally changed, and will continue to change. Among other things, international bodies <br>
that currently regulate software-intensive safety-critical systems - bodies that cling <br>
to regulating processes that have ceased to exist - are likely to be overrun <br>
and made redundant in favour of organisations such as:<br>
<br>
- The Center for Human-Compatible AI at UC Berkeley<br>
- The Future of Life Institute<br>
- The Center for AI Safety (CAIS)<br>
- Stanford Center for AI Safety<br>
<br>
My view is that this is not a steady-as-she-goes situation. This is a major <br>
inflection point in the evolution of intelligence. Carbon hosts will always be <br>
limited; silicon is unbounded. We are embarked; we need to deal with it.<br>
<br>
Suggested reading: Max Tegmark, <i>Life 3.0</i><br>
<br>
Les<br>
<br>
<blockquote type="cite">C2 - Confidential<br>
<br>
Hello,<br>
<br>
For the moment, I don't see in industry any attempt to realise an E/E/PE safety-<br>
</blockquote>
related function with on-line AI. All I see is focused on off-line AI, <br>
meaning that the training is done on specified training datasets, and the <br>
validation is done on specified test datasets. We don't see any performance <br>
better than roughly 3.3 &times; 10<sup>-3</sup> in automotive. So let's wait for the achievement <br>
of a 10<sup>-8</sup> to 10<sup>-5</sup> error rate on test datasets and discuss the <br>
acceptability later.<br>
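As an aside, the test-set sizes implied by those targets can be estimated with the standard statistical "rule of three" (a 95% upper confidence bound of 3/n on the error rate after n failure-free trials). This is an illustrative sketch, not from the original message; the function names are mine:

```python
def rule_of_three_bound(n_trials: int) -> float:
    """95% upper confidence bound on the true error rate when
    n_trials independent test cases all pass (the 'rule of three')."""
    return 3.0 / n_trials

def trials_needed(target_rate: float) -> int:
    """Approximate number of failure-free test cases needed to claim
    the target error rate at ~95% confidence."""
    return round(3.0 / target_rate)

# A 1e-5 claim needs ~300 thousand failure-free cases; 1e-8 needs ~300 million.
for rate in (3.3e-3, 1e-5, 1e-8):
    print(f"rate {rate:g}: ~{trials_needed(rate):,} failure-free tests")
```

On this estimate, a 10<sup>-8</sup> claim would take roughly 300 million failure-free test cases, which is the force of the point about waiting.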
<blockquote type="cite"><br>
Attention : due to increased cybersecurity screening I may receive external <br>
</blockquote>
emails 30 minutes after their emission.<br>
<blockquote type="cite">RESTRICTED<br>
<br>
Bertrand RICQUE<br>
Safety and security standardisation Expert<br>
Program Support Manager<br>
Optronics and Defence Division/Customer Support <br>
Safran Electronics & Defense <br>
<br>
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64 <br>
bertrand.ricque@safrangroup.com<br>
102 Avenue de Paris<br>
91300 MASSY FRANCE <br>
www.safran-electronics-defense.com <br>
<br>
<blockquote type="cite">-----Original Message-----<br>
From: Les Chambers <les@chambers.com.au><br>
Sent: Monday, 26 June 2023 06:03<br>
To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)<br>
<bertrand.ricque@safrangroup.com>; les@chambers.com.au;<br>
koopman.cmu@gmail.com; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: RE: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
CAUTION: This message originated from an outside organization. In case of<br>
suspicion, click on "Report to SAFRAN Security" from the Outlook ribbon.<br>
<br>
RE your comment: &ldquo;as it is impossible, by construction, to identify all <br>
</blockquote>
</blockquote>
dangerous<br>
<blockquote type="cite">
<blockquote type="cite">situations&rdquo;<br>
<br>
True. But it is possible to identify the subset of highest-probability<br>
&ldquo;dangerous situations&rdquo; and deal with them. I had 10 years of <br>
</blockquote>
</blockquote>
experience<br>
<blockquote type="cite">
<blockquote type="cite">with this issue in computer control of chemical processing reactors. In the<br>
1970s, we had a safety authority write software to: 1. identify well-<br>
established dangerous states of a reactor; 2. transition the reactor to a safe <br>
</blockquote>
</blockquote>
state<br>
<blockquote type="cite">
<blockquote type="cite">&ndash; usually shut down. It was called the &ldquo;abort&rdquo; programming. This
<br>
</blockquote>
</blockquote>
abort code<br>
<blockquote type="cite">
<blockquote type="cite">overrode the control software written by another team. By today's standards<br>
it was primitive but still very effective.<br>
This experience has influenced my thinking on ways and means of dealing<br>
with &ldquo;bad&rdquo; or &ldquo;hallucinating&rdquo; AIs. We need a separate and distinct <br>
</blockquote>
</blockquote>
&ldquo;abort&rdquo; AI<br>
<blockquote type="cite">
<blockquote type="cite">capable of recognising evil and dealing with it.<br>
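The abort-channel architecture described above can be sketched as an independent monitor that overrides the primary controller. A minimal illustration with invented variable names and limits, not the actual 1970s code:

```python
# Sketch of an independent abort channel that overrides the primary
# controller, in the spirit of the reactor "abort" programming above.

ABORT_LIMITS = {               # hypothetical well-established danger limits
    "temp_c": 350.0,           # reactor temperature ceiling
    "pressure_kpa": 900.0,     # vessel pressure ceiling
}

def controller(state: dict) -> dict:
    """Primary control law (placeholder): keep the process running."""
    return {"feed_valve": "open", "shutdown": False}

def abort_channel(state: dict, command: dict) -> dict:
    """Independent channel: if any measured variable exceeds its limit,
    override the controller's command and force the safe state."""
    if any(state.get(k, 0.0) > limit for k, limit in ABORT_LIMITS.items()):
        return {"feed_valve": "closed", "shutdown": True}
    return command

state = {"temp_c": 360.0, "pressure_kpa": 500.0}   # over-temperature
print(abort_channel(state, controller(state)))
# the abort channel forces shutdown regardless of the controller's command
```

The key design point, then as now: the abort channel is a separate program with override authority, not a branch inside the control code.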
Sal Khan (Khan Academy) has implemented another idea in creating his<br>
Khanmigo personal tutor. To improve its accuracy in mathematics, he allows<br>
the AI to have &ldquo;personal thoughts&rdquo;: the AI equivalent of mindfulness<br>
(straight out of Marcus Aurelius). He gives the impression that he feeds <br>
</blockquote>
</blockquote>
back<br>
<blockquote type="cite">
<blockquote type="cite">the AI's response to the AI, so the AI can have a think about what it is<br>
thinking. A bit like briefing a fish on the concept of water.<br>
<br>
This supports my pitch for engineers to be taught philosophy. The concepts<br>
of virtue and evil were defined simply 2000 years ago. These definitions <br>
</blockquote>
</blockquote>
have<br>
<blockquote type="cite">
<blockquote type="cite">stood outside of time and can be implemented with appropriate pattern<br>
matching.<br>
If Marcus Aurelius were reincarnated, he'd be running international seminars<br>
on mindfulness. The foundation ideas have not changed a whit.<br>
<br>
Cheers<br>
Les<br>
<br>
<br>
<br>
<blockquote type="cite">C2 - Confidential<br>
<br>
Answer to the first question. Once the NN is trained, it will always<br>
give the<br>
</blockquote>
same output for the same input. It is stupid software logic.<br>
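The determinism point is easy to demonstrate: once its weights are frozen, a trained network is a pure function of its input, so the same input always yields the same output. A toy single-neuron example with made-up weights:

```python
import math

# Toy "trained" network: frozen weights make it a pure function.
W = [0.4, -0.7, 0.2]   # weights fixed after training (made-up values)
B = 0.1                # bias

def forward(x):
    """Single sigmoid neuron with frozen weights: deterministic inference."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

a = forward([1.0, 2.0, 3.0])
b = forward([1.0, 2.0, 3.0])
assert a == b   # identical input, identical output, every time
```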
<blockquote type="cite"><br>
Answer to the second question. The current safety paradigm is based on<br>
</blockquote>
perception-interpretation-decision-action loops based on the fact that:<br>
<blockquote type="cite">1 Either, all states of the system, leading to all identified losses,<br>
can be<br>
</blockquote>
exhaustively defined and their probability of occurrence remains, by<br>
concept and construction of the system, below a tolerable level, enabling<br>
fully autonomous functions (e.g. emergency shutdown functions in process<br>
industries),<br>
<blockquote type="cite"><br>
2 Or, there is a human in control of the system with the ability to<br>
</blockquote>
arbitrate the decisions, meaning the ability to at least perceive and<br>
interpret.<br>
<blockquote type="cite"><br>
AI defeats point 1, as it is impossible, by construction, to identify all<br>
</blockquote>
dangerous situations: the training dataset is incomplete and, whatever its<br>
size, the operational domain is infinite (or close to infinite from a <br>
</blockquote>
</blockquote>
practical<br>
<blockquote type="cite">
<blockquote type="cite">point of view), so training dataset size divided by infinity = 0.<br>
<blockquote type="cite"><br>
To defeat the second assumption, just replace a car windscreen with a<br>
video<br>
</blockquote>
screen. No need for AI.<br>
<blockquote type="cite"><br>
</blockquote>
<blockquote type="cite">
<br>
<blockquote type="cite">-----Original Message-----<br>
From: Les Chambers <les@chambers.com.au> Sent: Thursday, 22 June<br>
2023 23:46 To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)<br>
<bertrand.ricque@safrangroup.com>; koopman.cmu@gmail.com;<br>
systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: RE: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
RE your comment, &ldquo;A neural network is software logic.&rdquo;<br>
<br>
A neural network is a component of an AI agent. An AI agent is a<br>
suite of general-purpose software tools. The agent is configured from a<br>
</blockquote>
</blockquote>
dataset.<br>
<blockquote type="cite">
<blockquote type="cite"><br>
ChatGPT goes on (see dialog below): &ldquo;Synthesizing an AI agent<br>
involves training the machine learning model on a large <br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
dataset.&rdquo;<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
This indicates that the training dataset determines the agent&rsquo;s<br>
behaviour (transfer function - inputs to outputs). In the case of a<br>
Tesla motor<br>
</blockquote>
</blockquote>
vehicle,<br>
<blockquote type="cite">
<blockquote type="cite">the inputs are camera sensors and the outputs will soon be controls<br>
to the accelerator, brake and steering.<br>
<br>
RE your comment, &ldquo;The problem is not AI. It is the exhaustive<br>
</blockquote>
</blockquote>
deterministic<br>
<blockquote type="cite">
<blockquote type="cite">relationship between the undesired losses and the system states.<br>
Anything breaking this, breaks the current safety paradigm. And AI<br>
is not the single thing to achieve this.&rdquo;<br>
<br>
Could you please expand on this idea. Possibly using an example.<br>
<br>
How would your thought apply here: I am a driver at the wheel of a<br>
Tesla travelling at 110 km/h on the freeway. I have switched the<br>
vehicle to full<br>
</blockquote>
</blockquote>
self-<br>
<blockquote type="cite">
<blockquote type="cite">driving mode. I am fast asleep. The FSD agent detects a false<br>
positive ââ,¬â?o<br>
</blockquote>
</blockquote>
a lady<br>
<blockquote type="cite">
<blockquote type="cite">with a pram. It jams on the brakes initiating a rear-end collision<br>
with a following semi.<br>
Q: In this case how is it that &ldquo;The problem is not AI&rdquo;?<br>
<br>
Les<br>
<br>
---------- ChatGPT Dialog <br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
----------<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
ChatGPT prompt:<br>
&ldquo;Describe an AI agent from a technical perspective. What are its<br>
</blockquote>
</blockquote>
components<br>
<blockquote type="cite">
<blockquote type="cite">and<br>
how is it synthesized?&rdquo;<br>
<br>
Response:<br>
An AI agent, from a technical perspective, typically consists of<br>
several components that work together to perform specific tasks.<br>
Here are the key components and how they are synthesized:<br>
<br>
1. Data: The foundation of any AI agent is data. It can be in<br>
various forms such as text, images, audio, or structured data. This<br>
data is used for<br>
</blockquote>
</blockquote>
training<br>
<blockquote type="cite">
<blockquote type="cite">and providing context to the AI model.<br>
<br>
2. Machine Learning Model: The AI agent utilizes a machine learning<br>
model<br>
</blockquote>
</blockquote>
to<br>
<blockquote type="cite">
<blockquote type="cite">process and generate responses. These models are trained on vast<br>
amounts of data and learn patterns to make predictions or generate<br>
outputs.<br>
<br>
3. Natural Language Processing (NLP): NLP is a crucial component for<br>
understanding and processing human language. It involves tasks like<br>
language translation, sentiment analysis, intent recognition, and<br>
entity extraction.<br>
</blockquote>
</blockquote>
NLP<br>
<blockquote type="cite">
<blockquote type="cite">enables the AI agent to comprehend user inputs and generate relevant<br>
responses.<br>
<br>
4. Knowledge Representation: Knowledge representation refers to how<br>
information is stored and organized within the AI agent. It can<br>
employ various<br>
</blockquote>
</blockquote>
techniques<br>
<blockquote type="cite">
<blockquote type="cite">such as semantic networks, ontologies, or graph databases to store<br>
facts<br>
</blockquote>
</blockquote>
and<br>
<blockquote type="cite">
<blockquote type="cite">relationships. This allows the agent to access and utilize knowledge<br>
when responding to user queries.<br>
<br>
5. Dialogue Management: Dialogue management focuses on handling the<br>
flow of conversation. It involves tracking the dialogue context,<br>
managing user requests and responses, and generating meaningful and<br>
coherent replies.<br>
Reinforcement<br>
learning techniques can be applied to improve dialogue management.<br>
<br>
6. Integration and APIs: AI agents can integrate with external<br>
systems and services using APIs (Application Programming<br>
Interfaces). This allows them<br>
</blockquote>
</blockquote>
to<br>
<blockquote type="cite">
<blockquote type="cite">access relevant data sources, databases, or external functionalities<br>
to provide more comprehensive and accurate responses.<br>
<br>
7. User Interface: The AI agent needs a user interface to interact<br>
with<br>
</blockquote>
</blockquote>
users<br>
<blockquote type="cite">
<blockquote type="cite">effectively. It can be a chat interface, voice-based interface, or a<br>
combination of both. The UI should provide a seamless experience for<br>
users to interact with the agent.<br>
<br>
Synthesizing an AI agent involves training the machine learning<br>
model on a large dataset. This process includes steps like<br>
preprocessing the data, designing the model architecture, selecting<br>
appropriate algorithms,<br>
</blockquote>
</blockquote>
training<br>
<blockquote type="cite">
<blockquote type="cite">the model using optimization techniques, and fine-tuning it for<br>
specific<br>
</blockquote>
</blockquote>
tasks.<br>
<blockquote type="cite">
<blockquote type="cite">The trained model is then integrated with other components, such as<br>
NLP, knowledge representation, and dialogue management, to create a<br>
functioning AI agent. Regular updates, feedback loops, and<br>
continuous improvement are essential to enhance the agent's<br>
performance over time.<br>
<br>
--------END Chat ----------<br>
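The component list in the ChatGPT response above can be condensed into a skeleton loop. This is a speculative sketch of how such components might compose; every name in it is hypothetical, not a real framework API:

```python
# Skeleton of the agent pipeline ChatGPT describes: NLP parsing,
# model inference with knowledge lookup, and dialogue management.

KNOWLEDGE = {"capital of france": "Paris"}       # toy knowledge base

def nlp_parse(text: str) -> str:
    """Stand-in for the NLP component: normalise the user input."""
    return text.lower().strip("?! ")

def model_respond(intent: str) -> str:
    """Stand-in for the trained model plus knowledge representation."""
    fact = KNOWLEDGE.get(intent)
    return fact if fact else "I don't know."

def dialogue_turn(history: list, user_input: str) -> str:
    """Dialogue management: track context and produce a reply."""
    reply = model_respond(nlp_parse(user_input))
    history.append((user_input, reply))
    return reply

history = []
print(dialogue_turn(history, "Capital of France?"))
```

The point of the sketch: the "intelligence" lives in the trained model and its data, while the surrounding agent code is thin glue, consistent with the small-code-base observation later in the thread.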
<br>
<br>
<blockquote type="cite">C2 - Confidential<br>
<br>
The problem is not AI. It is the exhaustive deterministic<br>
relationship<br>
</blockquote>
between the undesired losses and the system states. Anything<br>
breaking this, breaks the current safety paradigm. And AI is not the<br>
single thing to<br>
</blockquote>
</blockquote>
achieve<br>
<blockquote type="cite">
<blockquote type="cite">this.<br>
<blockquote type="cite"><br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
<blockquote type="cite">-----Original Message-----<br>
From: systemsafety <systemsafety-bounces@lists.techfak.uni-bielefeld.de><br>
On behalf of Phil Koopman<br>
Sent: Thursday, 22 June 2023 03:32<br>
To: les@chambers.com.au; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: Re: [SystemSafety] AI and the virtuous test Oracle<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
Les,<br>
<br>
Since you welcome riffs, I have something that is not as all-<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
encompassing,<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">but might have more immediate application.<br>
<br>
I propose that to the degree that "AI" technology is deployed in<br>
a way<br>
</blockquote>
</blockquote>
that<br>
<blockquote type="cite">
<blockquote type="cite">supplants practical human judgement, the manufacturer of that<br>
system<br>
</blockquote>
</blockquote>
(in<br>
<blockquote type="cite">
<blockquote type="cite">some cases just the AI part if it is an add-on component) should<br>
be<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
held<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">accountable for any action (or inaction) that, if associated<br>
with the<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
human<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">that was supplanted, would have constituted negligence.Ãfâ?sÃ,Â
<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
This<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">should<br>
</blockquote>
</blockquote>
include<br>
<blockquote type="cite">
<blockquote type="cite">situations in which a human is put in an untenable situation of<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
supervising<br>
<blockquote type="cite">
<blockquote type="cite">an<br>
<blockquote type="cite">
<blockquote type="cite">AI in a way that puts unreasonable demands upon them, amounting<br>
to a "moral crumple zone"<br>
approach<br>
<br>
</blockquote>
</blockquote>
(https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236 ).<br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
Liability/negligence if an AI is in substantive control of such<br>
a<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
situation<br>
<blockquote type="cite">
<blockquote type="cite">should<br>
<blockquote type="cite">
<blockquote type="cite">attach to the manufacturer.<br>
<br>
This leads to a more narrow oracle, but perhaps still useful,<br>
than you propose. If a loss event is caused by a lack of<br>
"reasonable" behavior<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
by an<br>
<blockquote type="cite">
<blockquote type="cite">AI,<br>
<blockquote type="cite">
<blockquote type="cite">the manufacturer is on the hook for negligence, and the<br>
AI/manufacturer owes a duty of care the same as the human who<br>
was supplanted would<br>
</blockquote>
</blockquote>
have<br>
<blockquote type="cite">
<blockquote type="cite">owed to whoever/whatever might be affected by that negligence.<br>
It has<br>
</blockquote>
</blockquote>
the<br>
<blockquote type="cite">
<blockquote type="cite">advantage of reusing existing definitions of "reasonable person"<br>
that<br>
</blockquote>
</blockquote>
have<br>
<blockquote type="cite">
<blockquote type="cite">been hammered out over decades of law. (To be sure that is not<br>
in the<br>
</blockquote>
</blockquote>
form<br>
<blockquote type="cite">
<blockquote type="cite">of an engineering specification, but case law has a pretty<br>
robust set<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
of<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">precedents, such as crashing into something after your properly<br>
</blockquote>
</blockquote>
functioning<br>
<blockquote type="cite">
<blockquote type="cite">vehicle ran a red light is likely to lead to the driver being<br>
found<br>
</blockquote>
</blockquote>
negligent.)<br>
<blockquote type="cite">
<blockquote type="cite"><br>
This does not require the AI to behave the same as people, and<br>
is not a<br>
</blockquote>
</blockquote>
full<br>
<blockquote type="cite">
<blockquote type="cite">recipe for "safe" AI. But it puts a floor on things in a way<br>
that is<br>
</blockquote>
</blockquote>
readily<br>
<blockquote type="cite">
<blockquote type="cite">actionable using existing legal mechanisms and theories. If a<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
reasonable<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">person would have avoided a harm, any AI that fails to avoid the<br>
harm<br>
</blockquote>
</blockquote>
would<br>
<blockquote type="cite">
<blockquote type="cite">be negligent.<br>
<br>
I've worked with a lawyer to propose this approach for automated<br>
</blockquote>
</blockquote>
vehicles,<br>
<blockquote type="cite">
<blockquote type="cite">and it is starting to get some traction. What I write in this<br>
post<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
(above)<br>
<blockquote type="cite">
<blockquote type="cite">is a<br>
<blockquote type="cite">
<blockquote type="cite">generalization of the concept beyond the narrow automated<br>
vehicle application.<br>
Details here:<br>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
</blockquote>
https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<br>
-- Phil<br>
<br>
<br>
On 6/21/2023 7:14 PM, Les Chambers wrote:<br>
<blockquote type="cite">Hi All<br>
<br>
I find myself reflecting on what will become of us as<br>
systems engineering best practice is overrun by AI.<br>
<br>
Practitioners report that neural networks are eating code.<br>
Example 1: The vector field surrounding a Tesla motor vehicle<br>
is an output of a neural network, not the result of software<br>
logic. Soon<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
the<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">neural net - not code - will generate controls. The size of<br>
the code base is shrinking. (Elon<br>
Musk)<br>
Example 2: the ChatGPT transformer code base is only 2000 LOC<br>
(Mo Gawdat<br>
https://youtu.be/bk-nQ7HF6k4 )<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<br>
The intelligence resides in terabytes of data, perceptrons and<br>
millions of weighting parameters. All are gathered by<br>
automated<br>
</blockquote>
</blockquote>
</blockquote>
means.<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">Not subject to human review.<br>
<br>
Ergo what will become of our trusty barriers to dangerous <br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
failure:<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">1. Safety functions - gone<br>
2. Verification - gone<br>
3. Code reviews - gone<br>
4. Validation - How?<br>
<br>
On validation, may I suggest the moral AI. A test oracle built<br>
on a virtuous dataset, capable of interrogating the target<br>
system to determine virtue. Test outcomes will morph from<br>
pass/fail to<br>
</blockquote>
moral/immoral.<br>
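Such an oracle might be pictured as a classifier that labels observed behaviour rather than comparing outputs against expected values. A speculative sketch; the hard-coded rules stand in for the proposed virtuous dataset:

```python
# Speculative sketch of a test "oracle" that grades observed behaviour
# as moral/immoral instead of pass/fail. The virtue rules below are
# invented placeholders for a model trained on a virtuous dataset.

VIRTUE_RULES = [
    lambda action: action.get("harms_human", False) is False,
    lambda action: action.get("deceives_user", False) is False,
]

def moral_oracle(action: dict) -> str:
    """Return 'moral' only if the behaviour violates no virtue rule."""
    if all(rule(action) for rule in VIRTUE_RULES):
        return "moral"
    return "immoral"

print(moral_oracle({"harms_human": False, "deceives_user": False}))
print(moral_oracle({"harms_human": True}))
```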
<blockquote type="cite"><br>
Credible industry players have predicted that soon we will<br>
have AIs orders of magnitude smarter than us. Especially when<br>
they start<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
talking<br>
<blockquote type="cite">
<blockquote type="cite">to<br>
<blockquote type="cite">
<blockquote type="cite">each other.<br>
<blockquote type="cite">The bandwidth will be eye-watering - the increase in<br>
intelligence,<br>
</blockquote>
</blockquote>
</blockquote>
vertical.<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
New barriers are required. Time to develop an AI that is on<br>
our side<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
&ndash;<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">the side of ethics and the moral life. An adult in the room if<br>
you like. We should birth this creature now and raise it as good<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
parents.<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
Let us not panic. May I put the proposition: virtue, like<br>
creativity, can be algorithmic.<br>
I have a sense of starting from the beginning - tabula rasa. I<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
suggest<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">that high-level thinking on the subject could begin with<br>
ChatGPT<br>
</blockquote>
</blockquote>
</blockquote>
prompts:<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">1. What is the stoic philosopherÃf¢ââ?s‰â?z¢s concept of
<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
virtue?<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">2. What are the elements of philosophy relevant to AI?<br>
<br>
Let us not forget our engineering mission: Guardians of the<br>
divine Logos, the organizing principle of the universe,<br>
responsible for its creation, maintenance, and order.<br>
<br>
Would anyone care to riff on this?<br>
<br>
Les<br>
<br>
--<br>
<br>
Les Chambers<br>
<br>
les@chambers.com.au<br>
systemsengineeringblog.com<br>
<br>
+61 (0)412 648 992<br>
_______________________________________________<br>
The System Safety Mailing List<br>
systemsafety@TechFak.Uni-Bielefeld.DE<br>
Manage your subscription:<br>
https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
<br>
--<br>
Prof. Phil Koopman koopman@cmu.edu<br>
(he/him)<br>
<br>
</blockquote>
</blockquote>
<br>
</blockquote>
</blockquote>
https://users.ece.cmu.edu/~koopman/<br>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<br>
_______________________________________________<br>
The System Safety Mailing List<br>
systemsafety@TechFak.Uni-Bielefeld.DE<br>
Manage your subscription:<br>
https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
</blockquote>
<br>
#<br>
" Ce courriel et les documents qui lui sont joints peuvent contenir des informations confidentielles, être soumis aux règlementations relatives au contrôle des exportations ou ayant un caractère privé. S'ils ne vous sont pas destinés, nous vous signalons qu'il est strictement interdit de les divulguer, de les reproduire ou d'en utiliser de quelque manière que ce soit le contenu. Toute exportation ou réexportation non autorisée est interdite. Si ce message vous a été transmis par erreur, merci d'en informer l'expéditeur et de supprimer immédiatement de votre système informatique ce courriel ainsi que tous les documents qui y sont attachés."<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">******<br>
</blockquote>
" This e-mail and any attached documents may contain confidential or proprietary information and may be subject to export control laws and regulations. If you are not the intended recipient, you are notified that any dissemination, copying of this e-mail and any attachments thereto or use of their contents by any means whatsoever is strictly prohibited. Unauthorized export or re-export is prohibited. If you have received this e-mail in error, please advise the sender immediately and delete this e-mail and all attached documents from your computer system."<br>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">#<br>
</blockquote>
<br>
<br>
<br>
--<br>
<br>
Les Chambers<br>
<br>
les@chambers.com.au<br>
<br>
+61 (0)412 648 992<br>
<br>
</blockquote>
<br>
</blockquote>
</blockquote>
</blockquote>
<blockquote type="cite">
<blockquote type="cite">
<br>
</blockquote>
<br>
</blockquote>
<br>
--<br>
<br>
Les Chambers<br>
<br>
les@chambers.com.au<br>
<br>
+61 (0)412 648 992<br>
</div>
</div>
</div>
<br>
</div>
<span>_______________________________________________</span><br><span>The System Safety Mailing List</span><br><span>systemsafety@TechFak.Uni-Bielefeld.DE</span><br><span>Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety</span></div></blockquote></body></html>