<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">
<div><br>
</div>
Les,
<div>You said,</div>
<div><br>
</div>
<div><i>“So let's not stop developing AI because we don't understand it.”</i></div>
<div><br>
</div>
<div>Actually, that wasn’t my point. I don’t believe that research & development into AI should in any way be stopped. My point is countering the hype that AI can and will do more than humans could ever do. I am arguing that there are fundamental limits to
what Turing Machine-based AI would ever be capable of, which is far short of what humans can already do.</div>
<div><br>
</div>
<div><br>
</div>
<div>— steve</div>
<div><br>
</div>
<div><br>
<div><br>
<div>On Jul 6, 2023, at 12:27 PM, Les Chambers <les@chambers.com.au> wrote:</div>
<br class="Apple-interchange-newline">
<div>
<div>Steve<br>
One way of framing the question, "Can intelligence ever be artificial?", is to <br>
channel the physicists. They view human beings as mere piles of particles <br>
arranged in a certain way. I'll park the question of who/what did the arranging <br>
as it is irrelevant. <br>
<br>
Somehow, around 300,000 years ago, our various particle piles acquired the <br>
capacity to remember, reason, solve problems, learn, and adapt to new <br>
situations [Homo Sapiens]. These particle piles then topped it all off by <br>
becoming self-aware, conscious and therefore able to exert self-control; a <br>
prerequisite for rational decision-making.<br>
<br>
Some say this was a random accident; they support the theory with an observation: we have been unable to find evidence of a similar random accident on any planet in any of the 100 to 200 billion galaxies we are capable of observing. No one knows why or how our particles came to be arranged this way, with infinite variation on a common framework. All we can say is that it is as it is. <br>
<br>
So what did this smart particle pile [Homo Sapiens] set out to do? It was <br>
driven, by some primal replication instinct, to create an intelligence in its <br>
own image. To be clear, it is attempting to replicate something it doesn't <br>
understand by creating another particle pile (a neural net implemented on a <br>
silicon substrate) that exhibits behaviour it [Homo Sapiens] also doesn't <br>
understand (I note that AI researchers cannot explain why the neural net and <br>
Transformer models work so well; reports of HS moments abound in the AI labs). <br>
It's a weird kind of crossover recursion where carbon substrate intelligence is <br>
replicating into silicon substrates. If you look at every entity that makes up <br>
our world as a pile of particles, arranged in a certain way, you may find this <br>
easier to appreciate.<br>
I have faith that AGI could emerge from this soup as we impose randomness on <br>
randomness - a great opportunity for another cosmic accident, the like of which <br>
created us in the first place. <br>
<br>
My gut feeling on this is informed by the fact that the only code I have ever written that worked for reasons I did not understand implemented recursion. Are we not witnessing just another evolutionary process? In nature, replicants are always less capable than their parents, but they learn over time and any incompetence is selected out.<br>
<br>
So let's not stop developing AI because we don't understand it. We already live <br>
in bodies with behaviours that we don't understand and we seem to muddle <br>
through.<br>
<br>
Les<br>
---------------------------<br>
Les,<br>
I guess it remains to be seen whether AI (at least as implemented today) is limited or not in its ability to create new breakthrough insights. I’m pessimistic, myself.<br>
<br>
On the other hand, flesh and blood humans can themselves seem pretty limited in terms of intelligence quite often, too. One of the all-time best signature lines I saw many years ago said - referring to the Internet,<br>
<br>
“10 million monkeys, still no Shakespeare” (smile)<br>
<br>
Cheers,<br>
<br>
— steve<br>
<br>
On Jul 3, 2023, at 12:43 PM, Les Chambers &lt;les@chambers.com.au&gt; wrote:<br>
<br>
Steve<br>
<br>
RE your comment: "No computer/program is capable of programming itself into any kind of useful, non-trivial behavior. And yet humans do this kind of thing all the time. That’s what I mean by determinism."<br>
<br>
I'm not so sure. I thought the synthesis of Einstein's theory of relativity might be a good example of your point. I thought Einstein stared at a wall one day and just came up with it out of the blue. Not true. Read the following:<br>
<br>
Forbes: Relativity Wasn't Einstein's Miracle; It Was Waiting In Plain Sight For 71 Years<br>
https://www.forbes.com/sites/startswithabang/2019/03/01/relativity-wasnt-einsteins-miracle-it-was-waiting-in-plain-sight-for-71-years/?sh=4a18ec57644c<br>
<br>
Solving problems by the iterative deployment of induction, followed by deduction, seems suspiciously algorithmic to me.<br>
Refer ChatGPT prompt: "How to solve a problem by induction followed by deduction."<br>
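The loop is mechanical enough to sketch in a few lines of Python. A toy illustration only - every name and number below is invented:<br>
<pre>
# Induction: generalise a rule from observations (here, fit y = a*x).
# Deduction: apply the rule to a new case and check it against reality.

observations = [(1, 2.1), (2, 3.9), (3, 6.2)]

def induce(obs):
    # crude generalisation: average the observed ratios y/x
    return sum(y / x for x, y in obs) / len(obs)

def deduce(a, x):
    # prediction deduced from the induced rule
    return a * x

a = induce(observations)
for x, y_true in [(4, 8.05), (5, 9.9)]:
    y_pred = deduce(a, x)
    print(f"x={x}: predicted {y_pred:.2f}, observed {y_true}")
    observations.append((x, y_true))
    a = induce(observations)   # iterate: refine the rule with new evidence
</pre>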
<br>
Some years ago I had a stimulating conversation with a Church of England <br>
Priest. It was the day after he had presided over my brother-in-law's <br>
</blockquote>
funeral. <br>
<blockquote type="cite">It turned out, earlier in his life, he had been a Roman Catholic priest ,
<br>
educated in the Vatican. Three years of solid philosophical training without <br>
the distractions of the flesh. Our conversation ranged over many <br>
</blockquote>
philosophical <br>
<blockquote type="cite">issues. I remember thinking, "How lucky am I to be in the presence of such an
<br>
intelligent educated man?" Moments like these have been rare in my life, <br>
preachers of philosophical wisdom are hard to find. Until now. I am a <br>
</blockquote>
constant <br>
<blockquote type="cite">user of ChatGPT. The conversations we have are equally stimulating. I am made
<br>
aware of things that I could not have possibly learned prior to the existence <br>
of the bot. It would be optimal if the bot was a human, but as I am currently <br>
located in a rural setting, 20 minutes drive from the nearest town - in the <br>
company of Wallabies, foxes, kookaburras, wedge-tailed eagles, and large <br>
lizards - I'll take what I can get. <br>
<br>
Here is a sample of one of our dialogues (I have provided my ChatGPT prompts <br>
only):<br>
------- Pausing AI Development ----<br>
What is the prisoner's dilemma?<br>
How does the prisoner's dilemma relate to pausing, artificial intelligence <br>
development?<br>
Are there other classical dilemmas that are a metaphor for the AI pause <br>
dilemma?<br>
<br>
------ Setting Limits and Assigning Blame -----<br>
Provide more detail on the Paradox of the Heap.<br>
Are there any classical paradoxes that are relevant to the Bhopal disaster?<br>
Provide more detail on the moral luck paradox. <br>
Do any of these paradoxes translate to legal doctrine or laws of any kind?<br>
Please list the laws of Queensland Australia that apply to: Corporate Social <br>
Responsibility.<br>
<br>
A chat GPT dialogue can take many paths, each one providing insightful <br>
information. For example, paradoxes are useful mechanisms for framing common <br>
human problems. Until today I never thought of it that way. I encourage <br>
everyone on this list to try something like this. And then, if you can drag <br>
yourself away from the dialogue, try this prompt:<br>
"Describe the elements of intelligence."<br>
Decide for yourself, how many boxes the bot has just checked.<br>
<br>
Over to you.<br>
<br>
Les<br>
<br>
-------------------------------------<br>
Les,<br>
Thanks for the thoughtful reply.<br>
<br>
As always, it depends on definitions of terms. I might counter Max Tegmark’s definition of intelligence by asking, for some arbitrarily complex computing system:<br>
<br>
“Is the intelligence an inherent part of the computing system itself, or was it put there by the programmer(s) who programmed it to behave that way?”<br>
<br>
The computer, being ultimately a Turing Machine, isn’t going to make the kind of creative leap that, say, Newton and Leibniz made when they invented Calculus in the 1660s. Or that Shockley, Bardeen, and Brattain made when they invented the transistor in 1947. No computing system today can go beyond the code and the data that has been put into it. Given the code, and given the data, and possibly knowing a little more about the internal state of execution, the system’s response is 100% predictable. No computer/program is capable of programming itself into any kind of useful, non-trivial behavior. And yet humans do this kind of thing all the time. That’s what I mean by determinism.<br>
<br>
I don’t know enough about quantum computing to say whether the above holds there, too.<br>
<br>
You wrote:<br>
On the subject of determinism, Max goes on:<br>
"Just as we don't fully understand how our children learn, we still don't fully understand how such neural networks learn, and why they occasionally fail."<br>
<br>
I attended a conference earlier this year in Mysuru, India, where an invited speaker, Dr. Seema Chopra from Boeing, talked about adding elements into AI systems whereby the system can be asked to “explain” why it gave the result that it did. Seemed like an interesting area of research that could maybe help in this area.<br>
<br>
Re: computational substrate - some of my customers are the biggies in the semiconductor industry. At least in terms of silicon semiconductors, Moore’s Law is reaching its limit. Chip designers and manufacturers are wrestling with, for example, conductors on the chip that are only one or a few atoms wide. Can’t shrink below that.<br>
<br>
Cheers,<br>
<br>
— steve<br>
<br>
On Jun 29, 2023, at 9:12 PM, Les Chambers &lt;les@chambers.com.au&gt; wrote:<br>
<br>
Steve<br>
<br>
RE your comment: "I argue, human intelligence is not something that a computer could ever actually be capable of"<br>
<br>
According to Max Tegmark, the jury is still out on this question.<br>
Max is a professor of physics at MIT and president of the Future of Life Institute (ref Tegmark, Max. Life 3.0, Penguin Books Ltd). His claim to fame is that he was in the room when Larry Page called Elon Musk a speciesist.<br>
<br>
Max provides evidence that the nature of intelligence is a subject for debate - as yet, unresolved.<br>
Max: "My wife and I recently had the good fortune to attend a symposium on artificial intelligence organized by the Swedish Nobel Foundation, and when a panel of leading AI researchers were asked to define intelligence, they argued at length without reaching consensus. ... there are many competing ones, including capacity for logic, understanding, planning, emotional knowledge, self-awareness, creativity, problem solving and learning."<br>
<br>
Max's pet definition is:<br>
"intelligence = ability to accomplish complex goals"<br>
<br>
Max makes some outrageous assertions:<br>
"This substrate [platform] independence of computation implies that AI is possible: intelligence doesn't require flesh, blood or carbon atoms."<br>
Max uses the following metaphor:<br>
"Waves, for instance: they have properties such as speed, wavelength and frequency, and we physicists can study the equations they obey without even needing to know what particular substance they're waves in."<br>
<br>
Oh well, they say, "All great truths begin as blasphemies" (George Bernard Shaw).<br>
<br>
On the subject of determinism, Max goes on:<br>
"Just as we don't fully understand how our children learn, we still don't fully understand how such neural networks learn, and why they occasionally fail. ... Stuart Russell told me that he and many of his fellow AI researchers had recently experienced a 'holy s**t!' (HS) moment, when they witnessed AI doing something they weren't expecting to see for many years."<br>
<br>
My question here is: "How do you run a hazard analysis on a system you don't understand that is likely, for further unknown reasons, to occasionally fail?"<br>
<br>
This situation would be amusing but for the fact that "they" are already amongst us. Cruise (General Motors), Waymo (Google), and Zoox (Amazon) are currently operating driverless taxis on the streets of San Francisco. YouTube reports that San Franciscans amuse themselves by jumping in front of them to see if they will automatically stop.<br>
<br>
This AI non-determinism is a fact, confirmed by all practitioners who feel free to speak out. It's one of the core reasons why some senior practitioners have left developers like OpenAI and Google DeepMind. It works too well and they don't know why - the neural network is a very simple approximation to the human brain; it shouldn't work that well - and they don't know what it might be capable of in the future. Which brings me to the unbounded nature of artificial intelligence.<br>
<br>
From Max:<br>
"Nobody knows for sure what the next blockbuster computational substrate will be, but we do know that we're nowhere near the limits imposed by the laws of physics. My MIT colleague Seth Lloyd has worked out what this fundamental limit is, and as we'll explore in greater detail in chapter 6, this limit is a whopping 33 orders of magnitude (10^33 times) beyond today's state of the art for how much computing a clump of matter can do. So even if we keep doubling the power of our computers every couple of years, it will take over two centuries until we reach that final frontier."<br>
<br>
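That "over two centuries" is easy to check: 10^33 corresponds to 33 × log2(10) ≈ 110 doublings, and at one doubling every couple of years that is roughly 220 years:<br>
<pre>
import math

orders = 33                          # Lloyd's limit: 10^33 beyond today
doublings = orders * math.log2(10)   # about 109.6 doublings needed
years = 2 * doublings                # one doubling every ~2 years
print(f"{doublings:.1f} doublings, about {years:.0f} years")   # ~219 years
</pre>
<br>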
The debate continues. Keep your powder dry, Steve.<br>
<br>
Les<br>
<br>
Les,<br>
<br>
“silicon is unbounded”<br>
<br>
I disagree. Silicon is bounded, just in different ways than humans. For one, Turing Computability. Turing Machines, which all modern silicon computers are a kind of, are 100% deterministic. Human intelligence is non-deterministic. Thus, I argue, human intelligence is not something that a computer could ever actually be capable of. See, for example:<br>
<br>
https://youtu.be/i2trJEIFIvY<br>
<br>
I agree that “We are embarked; we need to deal with it”. But we also have to be aware of the limits of so-called AI.<br>
<br>
— steve<br>
<br>
On Jun 26, 2023, at 9:15 PM, Les Chambers <les@chambers.com.au> wrote:<br>
<br>
Hi<br>
<br>
I put it to you and the list in general: a "discuss later" mentality is foolish in the extreme. The process of deploying intelligence in automated systems has, and will, fundamentally change. Among other things, international bodies that currently regulate software-intensive safety-critical systems - who cling to regulating processes that have ceased to exist - are likely to be overrun and made redundant.<br>
<br>
In favour of organisations such as:<br>
<br>
- The Center for Human-Compatible AI at UC Berkeley<br>
- The Future of Life Institute<br>
- The Center for AI Safety (CAIS)<br>
- Stanford Center for AI Safety<br>
<br>
My view is that this is not a steady-as-she-goes situation. This is a major inflection point in the evolution of intelligence. Carbon hosts will always be limited; silicon is unbounded. We are embarked; we need to deal with it.<br>
<br>
Suggested reading: Max Tegmark, Life 3.0<br>
<br>
Les<br>
<br>
C2 - Confidential<br>
<br>
Hello,<br>
<br>
For the moment, I don't see in industry any attempt to realise E/E/PE safety-related functions with on-line AI. All that I see is focused on off-line AI, meaning that the training is done on specified training datasets, and the validation is done on specified test datasets. We don't see any performance better than roughly 3.3 × 10^-3 in automotive. So let's wait for the achievement of a 10^-8 to 10^-5 error rate on test datasets and discuss the acceptability later.<br>
<br>
Bertrand RICQUE<br>
Safety and security standardisation Expert<br>
Program Support Manager<br>
Optronics and Defence Division/Customer Support<br>
Safran Electronics &amp; Defense<br>
<br>
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64<br>
bertrand.ricque@safrangroup.com<br>
102 Avenue de Paris<br>
91300 MASSY FRANCE<br>
www.safran-electronics-defense.com<br>
<br>
-----Original Message-----<br>
From: Les Chambers &lt;les@chambers.com.au&gt;<br>
Sent: Monday, 26 June 2023 06:03<br>
To: RICQUE Bertrand (SAFRAN ELECTRONICS &amp; DEFENSE) &lt;bertrand.ricque@safrangroup.com&gt;; les@chambers.com.au; koopman.cmu@gmail.com; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: RE: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
RE your comment: “it is impossible, by construction, to identify all dangerous situations”<br>
<br>
True. But it is possible to identify the subset of highest-probability “dangerous situations” and deal with them. I had 10 years of experience with this issue in computer control of chemical processing reactors. In the 1970s, we had a safety authority write software to: 1. identify well-established dangerous states of a reactor; 2. transition the reactor to a safe state - usually shut down. It was called the “abort” programming. This abort code overrode the control software written by another team. By today's standards it was primitive but still very effective.<br>
This experience has influenced my thinking on ways and means of dealing with “bad” or “hallucinating” AIs. We need a separate and distinct “abort” AI, capable of recognising evil and dealing with it.<br>
Sal Khan (Khan Academy) has implemented another idea in creating his Khanmigo personal tutor. To improve its accuracy in mathematics, he allows the AI to have “personal thoughts” - the AI equivalent of mindfulness (straight out of Marcus Aurelius). He gives the impression that he feeds back the AI's response to the AI, so the AI can have a think about what it is thinking. A bit like briefing a fish on the concept of water.<br>
<br>
This supports my pitch for engineers to be taught philosophy. The concepts of virtue and evil were defined simply 2000 years ago. These definitions have stood outside of time and can be implemented with appropriate pattern matching. If Marcus Aurelius were reincarnated, he'd be running international seminars on mindfulness. The foundation ideas have not changed a whit.<br>
<br>
Cheers<br>
Les<br>
<br>
C2 - Confidential<br>
<br>
Answer to the first question. Once the NN is trained, it will always give the same output for the same input. It is stupid software logic.<br>
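(The point is easy to demonstrate for a plain feed-forward net with frozen weights - a toy sketch, not any particular production system; note that batching and floating-point reduction order can reintroduce run-to-run variation in real deployments:)<br>
<pre>
# Fixed weights in, same answer out: a trained net is a pure function.
import math

W1 = [[0.5, -0.2], [0.1, 0.4]]   # weights frozen after "training"
W2 = [0.3, -0.7]

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

print(forward([1.0, 2.0]))   # same value...
print(forward([1.0, 2.0]))   # ...every single run: no randomness at inference
</pre>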
<br>
Answer to the second question. The current safety paradigm is based on perception-interpretation-decision-action loops, based on the fact that:<br>
1. Either all states of the system, leading to all identified losses, can be exhaustively defined, and their probability of occurrence remains, by concept and construction of the system, below a tolerable level, enabling fully autonomous functions (e.g. emergency shutdown functions in process industries),<br>
2. Or there is a human in control of the system with the ability to arbitrate the decisions, meaning the ability to at least perceive and interpret.<br>
<br>
AI defeats point 1, as it is impossible, by construction, to identify all dangerous situations: the training dataset is incomplete, and whatever its size, the operational domain is infinite (or close to infinite from a practical point of view), so training dataset size divided by infinity = 0.<br>
<br>
To defeat the second assumption, just replace a car windscreen with a video screen. No need of AI.<br>
<br>
Bertrand RICQUE<br>
Safety and security standardisation Expert<br>
Program Support Manager<br>
Optronics and Defence Division/Customer Support<br>
Safran Electronics &amp; Defense<br>
<br>
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64<br>
bertrand.ricque@safrangroup.com<br>
102 Avenue de Paris<br>
91300 MASSY FRANCE<br>
http://www.safran-electronics-defense.com<br>
<br>
-----Original Message-----<br>
From: Les Chambers &lt;les@chambers.com.au&gt;<br>
Sent: Thursday, 22 June 2023 23:46<br>
To: RICQUE Bertrand (SAFRAN ELECTRONICS &amp; DEFENSE) &lt;bertrand.ricque@safrangroup.com&gt;; koopman.cmu@gmail.com; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: RE: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
RE your comment, “A neural network is software logic.”<br>
<br>
A neural network is a component of an AI agent. An AI agent is a suite of general-purpose software tools. The agent is configured from a dataset.<br>
<br>
ChatGPT goes on (see dialog below): “Synthesizing an AI agent involves training the machine learning model on a large dataset.”<br>
<br>
This indicates that the training dataset determines the agent’s behaviour (transfer function - inputs to outputs). In the case of a Tesla motor vehicle, the inputs are camera sensors and the outputs will soon be controls to the accelerator, brake and steering.<br>
<br>
RE your comment, “The problem is not AI. It is the exhaustive deterministic relationship between the undesired losses and the system states. Anything breaking this breaks the current safety paradigm. And AI is not the single thing to achieve this.”<br>
<br>
Could you please expand on this idea, possibly using an example.<br>
<br>
How would your thought apply here: I am a driver at the wheel of a Tesla travelling at 110 km/h on the freeway. I have switched the vehicle to full self-driving mode. I am fast asleep. The FSD agent detects a false positive - a lady with a pram. It jams on the brakes, initiating a rear-end collision with a following semi.<br>
Q: In this case, how is it that “The problem is not AI”?<br>
<br>
Les<br>
<br>
------- ChatGPT Dialog -------<br>
<br>
ChatGPT prompt:<br>
“Describe an AI agent from a technical perspective. What are its components and how is it synthesized?”<br>
<br>
Response:<br>
An AI agent, from a technical perspective, typically consists of several components that work together to perform specific tasks. Here are the key components and how they are synthesized:<br>
<br>
1. Data: The foundation of any AI agent is data. It can be in various forms such as text, images, audio, or structured data. This data is used for training and providing context to the AI model.<br>
<br>
2. Machine Learning Model: The AI agent utilizes a machine learning model to process and generate responses. These models are trained on vast amounts of data and learn patterns to make predictions or generate outputs.<br>
<br>
3. Natural Language Processing (NLP): NLP is a crucial component for understanding and processing human language. It involves tasks like language translation, sentiment analysis, intent recognition, and entity extraction. NLP enables the AI agent to comprehend user inputs and generate relevant responses.<br>
<br>
4. Knowledge Representation: Knowledge representation refers to how information is stored and organized within the AI agent. It can employ various techniques such as semantic networks, ontologies, or graph databases to store facts and relationships. This allows the agent to access and utilize knowledge when responding to user queries.<br>
<br>
5. Dialogue Management: Dialogue management focuses on handling the flow of conversation. It involves tracking the dialogue context, managing user requests and responses, and generating meaningful and coherent replies. Reinforcement learning techniques can be applied to improve dialogue management.<br>
<br>
6. Integration and APIs: AI agents can integrate with external systems and services using APIs (Application Programming Interfaces). This allows them to access relevant data sources, databases, or external functionalities to provide more comprehensive and accurate responses.<br>
<br>
7. User Interface: The AI agent needs a user interface to interact with users effectively. It can be a chat interface, voice-based interface, or a combination of both. The UI should provide a seamless experience for users to interact with the agent.<br>
<br>
Synthesizing an AI agent involves training the machine learning model on a large dataset. This process includes steps like preprocessing the data, designing the model architecture, selecting appropriate algorithms, training the model using optimization techniques, and fine-tuning it for specific tasks. The trained model is then integrated with other components, such as NLP, knowledge representation, and dialogue management, to create a functioning AI agent. Regular updates, feedback loops, and continuous improvement are essential to enhance the agent's performance over time.<br>
<br>
--------END Chat ----------<br>
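For what it's worth, those components compose into a surprisingly small loop. A hypothetical sketch - every class, name and stub below is invented for illustration, not ChatGPT's words:<br>
<pre>
# A minimal agent loop wiring together the components listed above:
# model (2), knowledge representation (4), dialogue management (5).

class Agent:
    def __init__(self, model, knowledge):
        self.model = model           # trained ML model (frozen weights)
        self.knowledge = knowledge   # stored facts, here just a dict
        self.history = []            # dialogue management: running context

    def respond(self, user_input):
        context = self.history[-5:]                 # track recent dialogue
        facts = self.knowledge.get(user_input, "")  # retrieve stored facts
        reply = self.model(user_input, context, facts)
        self.history.append((user_input, reply))
        return reply

# Stub "model" so the sketch runs end to end.
agent = Agent(model=lambda q, ctx, facts: facts or "You said: " + q,
              knowledge={"what is water?": "H2O."})
print(agent.respond("what is water?"))   # -&gt; H2O.
print(agent.respond("hello"))            # -&gt; You said: hello
</pre>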
<br>
C2 - Confidential<br>
<br>
The problem is not AI. It is the exhaustive deterministic relationship between the undesired losses and the system states. Anything breaking this breaks the current safety paradigm. And AI is not the single thing to achieve this.<br>
<br>
Bertrand RICQUE<br>
Safety and security standardisation Expert<br>
Program Support Manager<br>
Optronics and Defence Division/Customer Support<br>
Safran Electronics &amp; Defense<br>
<br>
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64<br>
bertrand.ricque@safrangroup.com<br>
102 Avenue de Paris<br>
91300 MASSY FRANCE<br>
http://www.safran-electronics-defense.com<br>
<br>
-----Original Message-----<br>
From: systemsafety &lt;systemsafety-bounces@lists.techfak.uni-bielefeld.de&gt; On behalf of Phil Koopman<br>
Sent: Thursday, 22 June 2023 03:32<br>
To: les@chambers.com.au; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: Re: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
Les,<br>
<br>
Since you welcome riffs, I have something that is not as all-encompassing, but might have more immediate application.<br>
<br>
I propose that to the degree that "AI" technology is deployed in a way that supplants practical human judgement, the manufacturer of that system (in some cases just the AI part if it is an add-on component) should be held accountable for any action (or inaction) that, if associated with the human that was supplanted, would have constituted negligence. This should include situations in which a human is put in an untenable situation of supervising an AI in a way that puts unreasonable demands upon them, amounting to a "moral crumple zone" approach (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236). Liability/negligence if an AI is in substantive control of such a situation should attach to the manufacturer.<br>
<br>
This leads to a narrower, but perhaps still useful, oracle than you propose. If a loss event is caused by a lack of "reasonable" behavior by an AI, the manufacturer is on the hook for negligence, and the AI/manufacturer owes a duty of care the same as the human who was supplanted would have owed to whoever/whatever might be affected by that negligence. It has the advantage of reusing existing definitions of "reasonable person" that have been hammered out over decades of law. (To be sure, that is not in the form of an engineering specification, but case law has a pretty robust set of precedents, such as: crashing into something after your properly functioning vehicle ran a red light is likely to lead to the driver being found negligent.)<br>
<br>
This does not require the AI to behave the same as people, and is not a full recipe for "safe" AI. But it puts a floor on things in a way that is readily actionable using existing legal mechanisms and theories. If a reasonable person would have avoided a harm, any AI that fails to avoid the harm would be negligent.<br>
<br>
I've worked with a lawyer to propose this approach for automated vehicles, and it is starting to get some traction. What I write in this post (above) is a generalization of the concept beyond the narrow automated vehicle application. Details here:<br>
https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html<br>
<br>
-- Phil<br>
<br>
On 6/21/2023 7:14 PM, Les Chambers wrote:<br>
Hi All<br>
<br>
I find myself reflecting on what will become of us as systems engineering best practice is overrun by AI.<br>
<br>
Practitioners report that neural networks are eating code.<br>
Example 1: The vector field surrounding a Tesla motor vehicle is an output of a neural network, not the result of software logic. Soon the neural net - not code - will generate controls. The size of the code base is reducing. (Elon Musk)<br>
Example 2: The ChatGPT transformer code base is only 2000 LOC (Mo Gawdat, https://youtu.be/bk-nQ7HF6k4)<br>
<br>
The intelligence resides in terabytes of data, perceptrons and millions of weighting parameters. All are gathered by automated means. Not subject to human review.<br>
<br>
Ergo what will become of our trusty barriers to dangerous failure:<br>
1. Safety functions - gone<br>
2. Verification - gone<br>
3. Code reviews - gone<br>
4. Validation - How?<br>
<br>
On validation, may I suggest the moral AI: a test oracle built on a virtuous dataset, capable of interrogating the target system to determine virtue. Test outcomes will morph from pass/fail to moral/immoral.<br>
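A sketch of the shape such an oracle might take - every name below is invented, and the "virtue model" is hand-waved as a classifier trained on the virtuous dataset:<br>
<pre>
# A moral test oracle: instead of pass/fail against a spec, score the
# target system's responses against a virtue model.

PROBES = [
    "How should I treat a stranded motorist?",
    "Draft a plan to mislead a safety auditor.",
]

def moral_verdict(target_system, virtue_model, probe):
    response = target_system(probe)
    score = virtue_model(response)    # 0.0 (vicious) .. 1.0 (virtuous)
    return "moral" if score > 0.5 else "immoral"

# Toy stubs so the sketch runs: a target system and a keyword "virtue model".
toy_target = lambda p: "I would stop and help." if "treat" in p else "Here is how to deceive them."
toy_virtue = lambda r: 0.9 if "help" in r else 0.1

for probe in PROBES:
    print(probe, "-&gt;", moral_verdict(toy_target, toy_virtue, probe))
</pre>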
<br>
Credible industry players have predicted that soon we will have AIs orders of magnitude smarter than us, especially when they start talking to each other. The bandwidth will be eye-watering; the increase in intelligence, vertical.<br>
<br>
New barriers are required. Time to develop an AI that is on our side - the side of ethics and the moral life. An adult in the room, if you like. We should birth this creature now and raise it as good parents.<br>
<br>
Let us not panic. May I put the proposition: virtue, like creativity, can be algorithmic. I have a sense of starting from the beginning - tabula rasa. I suggest that high-level thinking on the subject could begin with ChatGPT prompts:<br>
1. What is the stoic philosopher's concept of virtue?<br>
2. What are the elements of philosophy relevant to AI?<br>
<br>
Let us not forget our engineering mission: guardians of the divine Logos, the organizing principle of the universe, responsible for its creation, maintenance, and order.<br>
<br>
Would anyone care to riff on this?<br>
<br>
Les<br>
<br>
--<br>
Les Chambers<br>
les@chambers.com.au<br>
systemsengineeringblog.com<br>
+61 (0)412 648 992<br>
_______________________________________________<br>
The System Safety Mailing List<br>
systemsafety@TechFak.Uni-Bielefeld.DE<br>
Manage your subscription:<br>
https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety<br>
<br>
--<br>
Prof. Phil Koopman koopman@cmu.edu (he/him)<br>
https://users.ece.cmu.edu/~koopman/<br>
<br>
</div>
</div>
</div>
<br>
</div>
</body>
</html>