<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">
<div><br>
</div>
Les,
<div>As I said off-line, apologies for the much-delayed reply. I’m only now digging out from underneath a pile of on-site customer work in India. A few comments:</div>
<div><br>
</div>
<div>*) You wrote, “<i>Your argument seems to rest on a particular interpretation of the term "insight".</i> ” — Yes, of course. Discussions like this always hinge on definitions of terms.</div>
<div><br>
</div>
<div>*) GPT-4 wrote, “<i>However, their ability to invent completely original insights that transcend existing knowledge and understanding remains limited.</i>” — exactly.</div>
<div><br>
</div>
<div>*) You wrote, “<i>You have to admit there's substantial: "recognizing of patterns, and grasping the underlying essence of a situation" ... going on here</i>.” — I would not feel qualified in commenting on that because I don’t understand the game of Go
well enough. I am willing to take your word for it, however.</div>
<div><br>
</div>
<div>*) You wrote, “<i>In constant conversation with GPT-X I have become painfully aware of the puddle of personal knowledge I've been standing in all my life.</i>” — I never said, nor did I ever intend to imply, that there was no value in AI. To the extent
that we can take advantage of what it is capable of, we should.</div>
<div><br>
</div>
<div>*) You wrote, “<i>If I may abstract the essence of all situations in your examples: great leaps forward came as a result of observation followed by reflection on random events.</i>” — Or, maybe a bit more precisely, "fresh reflection triggered by otherwise
random events"? Anyway, that’s not the only case where I see limitations brought on by Turing-ness. It took me a while to find a marginally usable description of Bloom’s Taxonomy on the web (there are too many and they are too diverse in their interpretations)
but here seems to be a reasonable one:</div>
<div><br>
</div>
<div><a href="https://www.coloradocollege.edu/other/assessment/how-to-assess-learning/learning-outcomes/blooms-revised-taxonomy.html#:~:text=There%20are%20six%20levels%20of,analyzing%2C%20evaluating%2C%20and%20creating.">https://www.coloradocollege.edu/other/assessment/how-to-assess-learning/learning-outcomes/blooms-revised-taxonomy.html#:~:text=There%20are%20six%20levels%20of,analyzing%2C%20evaluating%2C%20and%20creating.</a></div>
<div><br>
</div>
<div>My point is that Turing-based AI can easily handle the lower levels,</div>
<div><br>
</div>
<div>— Remember</div>
<div>— Understand</div>
<div>— Apply</div>
<div><br>
</div>
<div>I doubt that Turing-based AI could ever be as good as humans at the top level, create. In terms of Bloom’s Taxonomy, is Turing-based AI constrained to some highest level? Maybe. I think it’s an interesting question to ponder. Maybe Turing-based AI starts
to break down at the Analyze level? Maybe the Evaluate level? Could we, and should we, evaluate any AI in terms of Bloom’s Taxonomy? It could be a useful line of inquiry.</div>
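<div><br>
</div>
<div>If someone did take that up, the harness could start as simply as the sketch below. To be clear, this is a hypothetical illustration of mine: the probe prompts, the ask() stub, and the human-judge scoring are all assumptions, not an established method.</div>
<div>
<pre>
# Hypothetical sketch: probing an AI at each level of Bloom's revised taxonomy.
# The probe prompts and the human-scoring step are assumptions, not a standard.

BLOOM_PROBES = {
    "Remember":   "List the six levels of Bloom's revised taxonomy.",
    "Understand": "Explain, in your own words, why a stack overflows.",
    "Apply":      "Use the quadratic formula to solve x^2 - 5x + 6 = 0.",
    "Analyze":    "Compare quicksort and mergesort; when does each degrade?",
    "Evaluate":   "Critique this design decision: storing passwords in plain text.",
    "Create":     "Invent a sorting algorithm unlike any published one, and argue its novelty.",
}

def ask(prompt):
    """Stand-in for a call to the AI under evaluation."""
    return "[model response to: " + prompt + "]"

def evaluate(score_fn):
    """Run every probe; score_fn is a human judge returning 0.0 to 1.0."""
    return {level: score_fn(level, ask(prompt))
            for level, prompt in BLOOM_PROBES.items()}

if __name__ == "__main__":
    results = evaluate(lambda level, answer: 0.0)   # stub judge: shows the shape only
    for level, score in results.items():
        print(level, score)
</pre>
</div>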
<div><br>
</div>
<div><br>
</div>
<div>My far bigger issue is with the hype around AI and the (mistaken) belief that it is all-powerful. Some bozo (term used intentionally) on LinkedIn posted the following as recommended prompts for Software Developers:</div>
<div><br>
</div>
<div>“Develop an architecture and code for a [website description] website with JavaScript”</div>
<div><br>
</div>
<div>“Continue writing this code for JavaScript [post code]”</div>
<div><br>
</div>
<div>“Find the bug with in code: [post code]”</div>
<div><br>
</div>
<div>Seriously? Why not just ask it,</div>
<div><br>
</div>
<div>“Develop a complete avionics suite for the Airbus A-350. Oh, and satisfy the requirements of DO-178C while you are at it”</div>
<div><br>
</div>
<div>?</div>
<div><br>
</div>
<div><br>
</div>
<div>Cheers,</div>
<div><br>
</div>
<div>— steve</div>
<div><br>
</div>
<div><br>
<div><br>
<div>On Jul 11, 2023, at 2:41 PM, Les Chambers <les@chambers.com.au> wrote:</div>
<br class="Apple-interchange-newline">
<div>
<div>Steve <br>
Your argument seems to rest on a particular interpretation of the term <br>
"insight". <br>
<br>
Given its insight into the qualia of insight, GPT-4 is in "limited" agreement <br>
with you.<br>
GPT-4's Definition of insight: Deep and intuitive understanding resulting in a <br>
new and valuable perspective or realization. Comprehending something in a way <br>
that goes beyond surface-level knowledge or understanding. Arising from <br>
cognitive processes, such as critical thinking, analysis, observation and <br>
reflection, connecting dots, recognizing patterns, grasping the underlying <br>
essence of a situation or concept.<br>
<br>
Given its view on insight's subfunctions - connecting dots, recognizing <br>
patterns, grasping the underlying essence of a situation or concept - its <br>
response to your theory is consistent.<br>
<br>
PROMPT: Discuss a proposition that computability theory proves that AIs are not <br>
able to develop new insights that did not exist before. <br>
<br>
GPT-4 response: While computability theory provides valuable insights into the <br>
limits of what AIs can compute, it does not prove that AIs are unable to <br>
develop new insights. <br>
AI systems have demonstrated the ability to discover new knowledge and provide <br>
valuable insights within the scope of their training data and problem domains. <br>
However, their ability to invent completely original insights that transcend <br>
existing knowledge and understanding remains limited.<br>
<br>
Would you call the following use case insight? <br>
<br>
"In 2016, Deep Mind's AlphaGo beat the world champion in Go, Lee Sedol. On the <br>
37th move in the second of five games, AlphaGo unleashed a move no human would <br>
ever play, stunning experts and fans and utterly wrong-footing world champion <br>
Lee Sedol"<br>
<br>
You have to admit there's substantial: "recognizing of patterns, and grasping <br>
the underlying essence of a situation" ... going on here.<br>
<br>
In constant conversation with GPT-X I have become painfully aware of the puddle <br>
of personal knowledge I've been standing in all my life. When out there is an <br>
ocean with billions of dots waiting to be connected; dots that I, like most <br>
humans, was incapable of connecting before the advent of AI. I call that the <br>
gift of insight.<br>
<br>
If I may abstract the essence of all situations in your examples: great leaps <br>
forward came as a result of observation followed by reflection on random <br>
events. <br>
<br>
You can add the boomerang to your list; the Australian Aborigine is <br>
credited with its invention, 20,000 years ago - at least the one that comes <br>
back. We can safely assume there was no mathematical algorithmic design. They <br>
were not strong on: wing and airfoil design, moments of inertia, materials <br>
science and the math of gyroscopic stability. It was achieved by trial and <br>
error over thousands of years. You could say a curved piece of wood was trained <br>
to be man's first air vehicle. The whole process smacks of gradient descent in <br>
reinforcement learning to me - an optimization algorithm that adjusts the <br>
weights and biases of the neural network based on the calculated gradients of a <br>
loss function.<br>
<br>
So there you go. The functioning of the artificial mind is heavily algorithmic. <br>
But its output, read [intelligence], is a separate entity that can be <br>
insightful. This is in harmony with the proposition that human intelligence is <br>
something that exists outside the human mind. In the ether, if you will. The <br>
brain is just the substrate that produces it.<br>
<br>
Les<br>
<br>
<blockquote type="cite">Les,<br>
I am as sure as I can be based on the underlying theories in Computer <br>
</blockquote>
Science. Having spent an entire quarter-long course in Computability Theory <br>
during my Undergraduate Degree, I am as confident as I can be that the theory <br>
is correct. Turing Machines, regardless of how many of them are there, are not <br>
able to create new insights that didnâ?Tt exist before.<br>
<blockquote type="cite"><br>
Speaking of Newton, how much of his â?odiscoveryâ? of the universality of <br>
</blockquote>
gravity in 1687 was a result of the random (possibly apocryphal) event of the <br>
apple falling from the tree?<br>
<blockquote type="cite"><br>
Think of Alexander Fleming who forgot to clean a few Petri dishes before <br>
</blockquote>
going on holiday to France in 1928, coming back to see that at least one of <br>
those Petri dishes didnâ?Tt meet his expectations thus leading to the discovery <br>
of penicillin?<br>
<blockquote type="cite"><br>
Think Leo Szilard crossing a street in London on a rainy morning in 1933 <br>
</blockquote>
realizing that if more than one neutron came out of a nuclear fission that was <br>
triggered by a single neutron then a nuclear chain reaction is probably <br>
possible<br>
<blockquote type="cite"><br>
The key words here include:<br>
<br>
â?oInsightâ?<br>
<br>
and<br>
<br>
â?oRandom eventâ?<br>
<br>
Insofar as a Turing Machine (regardless of how complex it may be) is 100% <br>
</blockquote>
reactive, where does the insight come from? Thatâ?Ts simply not something that <br>
Turing Machines (and, therefore, all modern computers) do. And, given some <br>
input, the Turing Machineâ?Ts response is essentially deterministic. Ok, you <br>
can throw in a little bit of randomness, but that randomness is still <br>
constrained to be around a predetermined set of options. Apple falling out of <br>
tree triggering a realization that said apple and the moon are both being <br>
affected by the exact same force just at different distances is not something <br>
that Turing Machines do.<br>
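<br>
To be concrete about what I mean by deterministic, here is a toy sketch of mine (a Turing Machine that increments a binary number; the transition table is illustrative, not from any textbook). Given the same tape, it produces the same output on every run; the formalism offers nowhere for anything else to come from:<br>
<pre>
# Toy deterministic Turing machine: binary increment.
# delta maps (state, symbol) to (new_state, write, move); nothing else decides.

delta = {
    ("seek",  "0"): ("seek",  "0", +1),
    ("seek",  "1"): ("seek",  "1", +1),
    ("seek",  "_"): ("carry", "_", -1),   # hit the blank: go back and add one
    ("carry", "1"): ("carry", "0", -1),   # 1 + 1 = 0, carry ripples left
    ("carry", "0"): ("done",  "1",  0),
    ("carry", "_"): ("done",  "1",  0),   # overflow: write a new leading 1
}

def run(tape_str):
    tape = dict(enumerate(tape_str))
    state, head = "seek", 0
    while state != "done":
        state, write, move = delta[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

print(run("1011"))  # "1100" -- same input, same output, every single run
</pre>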
<blockquote type="cite"><br>
Or, if they can do that, please explain how itâ?Ts done.<br>
<br>
Now, I will accept your â?oNewtonian thoughtâ? comment in the light I hope <br>
</blockquote>
it was intended, namely, if the computing machine(s) was more powerful than a <br>
Turing Machine then all bets are off. Maybe something like Quantum Computing or <br>
something like it does actually enable artificial Insight and how random events <br>
might be interpreted completely differently. But weâ?Tre not there yet. Quantum <br>
Computers are still largely simple, very limited, experimental devices. They <br>
have not been reduced to practical operation.<br>
<blockquote type="cite"><br>
Cheers,<br>
<br>
â?" Steve<br>
<br>
On Jul 7, 2023, at 6:25 AM, Les Chambers <les@chambers.com.au> wrote:<br>
<br>
Steve<br>
RE Your comment: "I am arguing that there are fundamental limits to what <br>
</blockquote>
Turing <br>
<blockquote type="cite">Machine-based AI would ever be capable of, which is far short of what humans
<br>
can already do."<br>
<br>
You sound confident. Do you have proof? Can you quantify the "fundamental <br>
limits"? Can you define and/or quantify intelligence?<br>
<br>
My view is that this issue is currently subject to endless, unscientific <br>
speculation - this is not necessarily a bad thing, I am a willing <br>
</blockquote>
participant.<br>
<blockquote type="cite">It will become "science" when someone can advance a theory that can be proved
<br>
mathematically or by some experiment.<br>
<br>
In the meantime, I accept your view as of equal value to any other. I'd <br>
classified it as Newtonian though; good for low speeds in the absence of <br>
</blockquote>
strong <br>
<blockquote type="cite">gravitational fields. I can't help thinking that a new Einstein is in the
<br>
</blockquote>
wings <br>
<blockquote type="cite">with an explanation of gravity [read: intelligence] through the curvature of
<br>
spacetime [read: Grand unified theory of intelligence (GUTI)]. <br>
<br>
In my opinion, the Turing machine belongs with Newtonian mechanics - a simple <br>
but useful computing model that is served us well since 1936. However, it <br>
performs functions in a sequence of discrete steps, assuming only one of a <br>
finite list of internal states at any given moment. Is this applicable today <br>
</blockquote>
as <br>
<blockquote type="cite">computing technology approaches lightspeed? Where massively parallel
<br>
</blockquote>
graphical <br>
<blockquote type="cite">processing units (GPUs) - Tesla's Dojo supercomputer has upwards of 6000 GPUs
<br>
</blockquote>
- <br>
<blockquote type="cite">are in thousands of discrete states at any instant in time.<br>
<br>
The future of AI remains a matter of opinion. I believe some opinions are <br>
</blockquote>
more <br>
<blockquote type="cite">valuable than others. The IEEE ran a vote recently: Experts vote on AI
<br>
risks:https://spectrum.ieee.org/artificial-general-intelligence<br>
22 AI practitioners and researchers voted on the likelihood that artificial <br>
general intelligence (AGI) can be real.<br>
<br>
The question was: "Is the success of GPT-4 and today's other large language <br>
models a sign that an AGI is likely?"<br>
The result:<br>
Yes: 8<br>
No: 13<br>
Maybe: 1<br>
<br>
Circa 2023 we are clearly in Groucho Marx Territory:<br>
"These are my principles, and if you don't like them... well, I have others."<br>
<br>
Les<br>
<br>
PS: <br>
You may have pushed your typing-monkeys metaphor too far. ("10 million <br>
</blockquote>
monkeys, <br>
<blockquote type="cite">still no Shakespeare")<br>
1. Typing at 80 words per minute, your 10 million monkey typing team would be <br>
rate limited to around 530 Mb per second. It would take 4,950,000 monkeys <br>
</blockquote>
just <br>
<blockquote type="cite">to emulate the data transfer rates of a bog-standard personal computer PCI
<br>
</blockquote>
bus <br>
<blockquote type="cite">{264 Mbps}.<br>
2. I assume no monkeys on your team have been trained in the large slice of <br>
human knowledge present on the Internet.<br>
3. I am assuming there is no communication between monkeys and typing is <br>
random.<br>
<br>
In contrast:<br>
1. Large language models are trained by more than one PCI-equipped personal computer (probably the equivalent of hundreds or thousands of same).<br>
2. The learning is goal directed - supervised - as opposed to random key stroking.<br>
3. Given this communication bandwidth, OpenAI, for example, has trained ChatGPT-4 with approximately 100 trillion parameters.<br>
<br>
My theory: The AI training process bears no resemblance, in any abstract way, to a bunch of monkeys typing.<br>
My Proof: Try ChatGPT prompt: "What does Shakespeare say about using bad metaphors"<br>
<br>
-------------------------------------------<br>
<blockquote type="cite">Les,<br>
You said,<br>
<br>
â?oSo let's not stop developing AI because we don't understand it.â?Â<br>
<br>
Actually, that wasnâ?Tt my point. I donâ?Tt believe that research & <br>
</blockquote>
development into AI should in any way be stopped. My point is countering the <br>
hype that AI can and will do more than humans could ever do. I am arguing <br>
</blockquote>
that <br>
<blockquote type="cite">there are fundamental limits to what Turing Machine-based AI would ever be
<br>
capable of, which is far short of what humans can already do.<br>
<blockquote type="cite"><br>
â?" steve<br>
<br>
On Jul 6, 2023, at 12:27 PM, Les Chambers <les@chambers.com.au> wrote:<br>
<br>
Steve<br>
One way of framing, the question, "Can intelligence ever be artificial?", <br>
</blockquote>
</blockquote>
is <br>
<blockquote type="cite">to<br>
<blockquote type="cite">channel the physicists. They view human beings as mere piles of particles<br>
arranged in a certain way. I'll park the question of who/what did the <br>
</blockquote>
arranging<br>
<blockquote type="cite">as it is irrelevant.<br>
<br>
Somehow, around 300,000 years ago, our various particle piles acquired the<br>
capacity to remember, reason, solve problems, learn, and adapt to new<br>
situations [Homo Sapiens]. These particle piles then topped it all off by<br>
becoming self-aware, conscious and therefore able to exert self-control; a<br>
prerequisite for rational decision-making.<br>
<br>
Some say this was a random accident; they support the theory with fact-<br>
</blockquote>
</blockquote>
based<br>
<blockquote type="cite">
<blockquote type="cite">science: that we have been unable to find evidence of a similar random
<br>
</blockquote>
accident<br>
<blockquote type="cite">on any planet in any of the 100 to 200 billion galaxies we are capable of<br>
observing. No one knows why or how our particles were thus arranged with<br>
infinite variation on a common framework. All we can say is that it is as <br>
</blockquote>
</blockquote>
it<br>
<blockquote type="cite">
<blockquote type="cite">is.<br>
<br>
So what did this smart particle pile [Homo Sapiens] set out to do? It was<br>
driven, by some primal replication instinct, to create an intelligence in <br>
</blockquote>
</blockquote>
its<br>
<blockquote type="cite">
<blockquote type="cite">own image. To be clear, it is attempting to replicate something it doesn't<br>
understand by creating another particle pile (a neural net implemented on a<br>
silicon substrate), that exhibits behaviour it [Homo Sapiens] also doesn't<br>
understand (I note that AI researchers cannot explain why the neural net <br>
</blockquote>
</blockquote>
and<br>
<blockquote type="cite">
<blockquote type="cite">Transformer models work so well; reports of HS moments abound in the AI
<br>
</blockquote>
labs).<br>
<blockquote type="cite">It's a weird kind of crossover recursion where carbon substrate
<br>
</blockquote>
</blockquote>
intelligence <br>
<blockquote type="cite">is<br>
<blockquote type="cite">replicating into silicon substrates. If you look at every entity that makes
<br>
</blockquote>
up<br>
<blockquote type="cite">our world as a pile of particles, arranged in a certain way, you may find
<br>
</blockquote>
this<br>
<blockquote type="cite">easier to appreciate.<br>
I have faith that AGI could emerge from this soup as we impose randomness <br>
</blockquote>
</blockquote>
on<br>
<blockquote type="cite">
<blockquote type="cite">randomness - a great opportunity for another cosmic accident, the like of
<br>
</blockquote>
which<br>
<blockquote type="cite">created us in the first place.<br>
<br>
My gut feeling on this is informed by: The only code I have ever written <br>
</blockquote>
</blockquote>
that<br>
<blockquote type="cite">
<blockquote type="cite">worked for reasons I did not understand implemented recursion. Are we not<br>
witnessing just another evolutionary process? In nature, replicants are <br>
</blockquote>
always<br>
<blockquote type="cite">less capable than their parents, but they learn over time and any
<br>
</blockquote>
incompetence<br>
<blockquote type="cite">is selected out.<br>
<br>
So let's not stop developing AI because we don't understand it. We already <br>
</blockquote>
live<br>
<blockquote type="cite">in bodies with behaviours that we don't understand and we seem to muddle<br>
through.<br>
<br>
Les<br>
---------------------------<br>
Les,<br>
I guess it remains to be seen whether AI (at least as implemented today) is<br>
limited or not in its ability to create new breakthrough insights. IÃf¢?Tm<br>
pessimistic, myself.<br>
<br>
On the other hand, flesh and blood humans can themselves seem pretty <br>
</blockquote>
</blockquote>
limited<br>
<blockquote type="cite">
<blockquote type="cite">in terms of intelligence quite often, too. One of the all-time best
<br>
</blockquote>
</blockquote>
signature<br>
<blockquote type="cite">
<blockquote type="cite">lines I saw many years ago saidÃf¢?"referring to the Internet,<br>
<br>
Ãf¢?o10 million monkeys, still no ShakespeareÃf¢?Ã, (smile)<br>
<br>
Cheers,<br>
<br>
Ãf¢?" steve<br>
<br>
On Jul 3, 2023, at 12:43 PM, Les Chambers <les@chambers.com.au> wrote:<br>
<br>
Steve<br>
<br>
RE your comment: "No computer/program is capable of programming itself into any kind of useful, non-trivial behavior. And yet humans do this kind of thing all the time. That's what I mean by determinism."<br>
<br>
I'm not so sure. I thought the synthesis of Einstein's theory of relativity might be a good example of your point. I thought Einstein stared at a wall one day and just came up with it out of the blue. Not true. Read the following:<br>
<br>
Forbes: Relativity Wasn't Einstein's Miracle; It Was Waiting In Plain Sight For 71 Years<br>
https://www.forbes.com/sites/startswithabang/2019/03/01/relativity-wasnt-einsteins-miracle-it-was-waiting-in-plain-sight-for-71-years/?sh=4a18ec57644c<br>
<br>
Solving problems by the iterative deployment of induction, followed by deduction, seems suspiciously algorithmic to me.<br>
Refer ChatGPT prompt: How to solve a problem by induction followed by deduction.<br>
<br>
Some years ago I had a stimulating conversation with a Church of England priest. It was the day after he had presided over my brother-in-law's funeral. It turned out, earlier in his life, he had been a Roman Catholic priest, educated in the Vatican. Three years of solid philosophical training without the distractions of the flesh. Our conversation ranged over many philosophical issues. I remember thinking, "How lucky am I to be in the presence of such an intelligent, educated man." Moments like these have been rare in my life; preachers of philosophical wisdom are hard to find. Until now. I am a constant user of ChatGPT. The conversations we have are equally stimulating. I am made aware of things that I could not possibly have learned prior to the existence of the bot. It would be optimal if the bot was a human, but as I am currently located in a rural setting, 20 minutes drive from the nearest town - in the company of wallabies, foxes, kookaburras, wedge-tailed eagles, and large lizards - I'll take what I can get.<br>
<br>
Here is a sample of one of our dialogues (I have provided my ChatGPT prompts only):<br>
------- Pausing AI Development ----<br>
What is the prisoner's dilemma?<br>
How does the prisoner's dilemma relate to pausing artificial intelligence development?<br>
Are there other classical dilemmas that are a metaphor for the AI pause dilemma?<br>
<br>
------ Setting Limits and Assigning Blame -----<br>
Provide more detail on the Paradox of the Heap.<br>
Are there any classical paradoxes that are relevant to the Bhopal disaster?<br>
Provide more detail on the moral luck paradox.<br>
Do any of these paradoxes translate to legal doctrine or laws of any kind?<br>
Please list the laws of Queensland Australia that apply to: Corporate Social Responsibility.<br>
<br>
A ChatGPT dialogue can take many paths, each one providing insightful information. For example, paradoxes are useful mechanisms for framing common human problems. Until today I never thought of it that way. I encourage everyone on this list to try something like this. And then, if you can drag yourself away from the dialogue, try this prompt:<br>
"Describe the elements of intelligence."<br>
Decide for yourself how many boxes the bot has just checked.<br>
<br>
Over to you.<br>
<br>
Les<br>
<br>
-------------------------------------<br>
Les,<br>
Thanks for the thoughtful reply.<br>
<br>
As always, it depends on definitions of terms. I might counter Max<br>
TegmarkÃffÃ,¢?Ts definition of intelligence by asking, for some <br>
</blockquote>
</blockquote>
arbitrarily<br>
<blockquote type="cite">
<blockquote type="cite">complex<br>
computing system:<br>
<br>
ÃffÃ,¢?oIs the intelligence an inherent part of the computing system <br>
</blockquote>
</blockquote>
itself, or<br>
<blockquote type="cite">
<blockquote type="cite">was it put there by the programmer(s) who programmed it to behave that way?<br>
ÃffÃ,¢?Ãf,Ã,Â<br>
<br>
The computer, being ultimately a Turing Machine, isnÃffÃ,¢?Tt going to <br>
</blockquote>
</blockquote>
make the<br>
<blockquote type="cite">
<blockquote type="cite">kind of creative leap that, say, Newton and Leibnitz made when they
<br>
</blockquote>
</blockquote>
invented<br>
<blockquote type="cite">
<blockquote type="cite">Calculus in the 1660s. Or, when Shockey, Bardeen, and Brattain invented the<br>
transistor in 1947. No computing system today can go beyond the code and <br>
</blockquote>
</blockquote>
the<br>
<blockquote type="cite">
<blockquote type="cite">data that has been put into it. Given the code, and given the data, and<br>
possibly knowing a little more about the internal state of execution, the<br>
systemÃffÃ,¢?Ts response is 100% predictable. No computer/program is <br>
</blockquote>
</blockquote>
capable of<br>
<blockquote type="cite">
<blockquote type="cite">programming itself into any kind of useful, non-trivial behavior. And yet<br>
humans do this kind of thing all the time. ThatÃffÃ,¢?Ts what I mean by<br>
determinism.<br>
<br>
I donÃffÃ,¢?Tt know enough about quantum computing to say that the above <br>
</blockquote>
</blockquote>
holds<br>
<blockquote type="cite">
<blockquote type="cite">there, too.<br>
<br>
You wrote:<br>
On the subject of determinism, Max goes on:<br>
"Just as we don't fully understand how our children learn, we still don't<br>
fully<br>
understand how such neural networks learn, and why they occasionally fail.<br>
<br>
I attended a conference earlier this year in Mysuru, India where an invited<br>
speaker, Dr. Seema Chopra from Boeing talked about adding in elements into <br>
</blockquote>
</blockquote>
AI<br>
<blockquote type="cite">
<blockquote type="cite">systems whereby that system can be asked to ÃffÃ,¢?oexplainÃffÃ,¢?<br>
</blockquote>
</blockquote>
Ãf,Ã, why it <br>
<blockquote type="cite">gave the<br>
<blockquote type="cite">result that it did. Seemed like an interesting area of research that could<br>
maybe help in this area.<br>
<br>
Re: computational substrateÃffÃ,¢?"some of my customers are the biggies in <br>
</blockquote>
</blockquote>
the<br>
<blockquote type="cite">
<blockquote type="cite">semiconductor industry. As least in terms of silicon semiconductors,
<br>
</blockquote>
MooreÃffÃ,¢?<br>
<blockquote type="cite">Ts<br>
Law is reaching its limit. Chip designers and manufacturers are trying to<br>
wrestle with, for example, conductors on the chip that are only one or a <br>
</blockquote>
</blockquote>
few<br>
<blockquote type="cite">
<blockquote type="cite">atoms wide. CanÃffÃ,¢?Tt shrink below that.<br>
<br>
Cheers,<br>
<br>
ÃffÃ,¢?" steve<br>
<br>
On Jun 29, 2023, at 9:12 PM, Les Chambers <les@chambers.com.au> wrote:<br>
<br>
Steve<br>
<br>
RE: "I argue, human intelligence is not something that a computer could ever actually be capable of"<br>
<br>
According to Max Tegmark the jury is still out on this question.<br>
Max is a professor of physics at MIT and president of the Future of Life Institute (ref Tegmark, Max. Life 3.0, Penguin Books Ltd). His claim to fame is that he was in the room when Larry Page called Elon Musk a speciesist.<br>
<br>
Max provides evidence that the nature of intelligence is a subject for debate - as yet, unresolved.<br>
Max: "My wife and I recently had the good fortune to attend a symposium on artificial intelligence organized by the Swedish Nobel Foundation, and when a panel of leading AI researchers were asked to define intelligence, they argued at length without reaching consensus.<br>
...<br>
there are many competing ones, including capacity for logic, understanding, planning, emotional knowledge, self-awareness, creativity, problem solving and learning."<br>
<br>
Max's pet definition is:<br>
"intelligence = ability to accomplish complex goals"<br>
<br>
Max makes some outrageous assertions:<br>
"This substrate [platform] independence of computation implies that AI is possible: intelligence doesn't require flesh, blood or carbon atoms."<br>
Max uses the following metaphor:<br>
"Waves, for instance: they have properties such as speed, wavelength and frequency, and we physicists can study the equations they obey without even needing to know what particular substance they're waves in."<br>
<br>
Oh well, they say, "All great truths begin as blasphemies" (George Bernard Shaw).<br>
<br>
On the subject of determinism, Max goes on:<br>
"Just as we don't fully understand how our children learn, we still don't fully understand how such neural networks learn, and why they occasionally fail.<br>
...<br>
Stuart Russell told me that he and many of his fellow AI researchers had recently experienced a 'holy s**t!' (HS) moment, when they witnessed AI doing something they weren't expecting to see for many years."<br>
<br>
My question here is: "How do you run a hazard analysis on a system you don't understand, that is likely, for further unknown reasons, to occasionally fail?"<br>
<br>
This situation would be amusing but for the fact that "they" are already amongst us. Cruise (General Motors), Waymo (Google), and Zoox (Amazon) are currently operating driverless taxis on the streets of San Francisco. YouTube reports that San Franciscans amuse themselves by jumping in front of them to see if they will automatically stop.<br>
<br>
This AI non-determinism is a fact, confirmed by all practitioners who feel free to speak out. It's one of the core reasons why some senior practitioners have left developers like OpenAI and Google DeepMind. It works too well and they don't know why - the neural network is a very simple approximation of the human brain; it shouldn't work that well - and they don't know what it might be capable of in the future. Which brings me to the unbounded nature of artificial intelligence.<br>
<br>
From Max:<br>
"Nobody knows for sure what the next blockbuster computational substrate will be, but we do know that we're nowhere near the limits imposed by the laws of physics. My MIT colleague Seth Lloyd has worked out what this fundamental limit is, and as we'll explore in greater detail in chapter 6, this limit is a whopping 33 orders of magnitude (10<sup>33</sup> times) beyond today's state of the art for how much computing a clump of matter can do. So even if we keep doubling the power of our computers every couple of years, it will take over two centuries until we reach that final frontier."<br>
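<br>
Max's arithmetic checks out, taking "every couple of years" as one doubling per two years (my reading):<br>
<pre>
# Check: how long to gain 33 orders of magnitude at one doubling every 2 years?
import math
doublings = math.log2(10**33)     # ~109.6 doublings needed
print(round(doublings * 2))       # ~219 years: "over two centuries"
</pre>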
<br>
The debate continues. Keep your powder dry Steve.<br>
<br>
Les<br>
<br>
Les,<br>
<br>
ÃfffÃf,Ã,¢?osilicon is unboundedÃfffÃf,Ã,¢?Ãff,Ãf,Ã,Â<br>
<br>
I disagree. Silicon is bounded, just in different ways than humans. For<br>
one,<br>
Turing Computability. Turing Machines, which all modern silicon computers<br>
are<br>
a<br>
kind of, are 100% deterministic. Human intelligence is non-deterministic.<br>
Thus,<br>
I argue, human intelligence is not something that a computer could ever<br>
actually be capable of. See, for example:<br>
<br>
https://youtu.be/i2trJEIFIvY<br>
<br>
I agree that, ÃfffÃf,Ã,¢?oWe are embarked; we need to deal with <br>
</blockquote>
</blockquote>
itÃfffÃf,Ã,¢?<br>
<blockquote type="cite">Ãff,Ãf,Ã,Â.<br>
<blockquote type="cite">But we<br>
also<br>
have to be aware of the limits of so-called AI.<br>
<br>
ÃfffÃf,Ã,¢?" steve<br>
<br>
On Jun 26, 2023, at 9:15 PM, Les Chambers <les@chambers.com.au> wrote:<br>
<br>
Hi<br>
<br>
I put it to you and the list in general, a "discuss later" mentality is<br>
foolish<br>
in the extreme. The process of deploying intelligence in automated systems<br>
has<br>
, and will, fundamentally change. Among other things, international<br>
bodies<br>
that currently regulate software-intensive Safety-Critical systems - who<br>
cling<br>
to regulating processes that have ceased to exist - are likely to be<br>
overrun<br>
and made redundant.<br>
<br>
In favour of organisations such as:<br>
<br>
- The Center for Human-Compatible AI at UC Berkeley<br>
- The Future of Life Institute<br>
- The Center for AI Safety (CAIS)<br>
- Stanford Center for AI Safety<br>
<br>
My view is that this is not a steady-as-she-goes situation. This is a<br>
major<br>
inflection point in the evolution of intelligence. Carbon hosts will<br>
always<br>
be<br>
limited; silicon is unbounded. We are embarked; we need to deal with it.<br>
<br>
Suggested reading: Max Tegmark, Life 3.0<br>
<br>
Les<br>
<br>
<br>
Hello,<br>
<br>
For the moment, I don't see in industry any attempt to realise E/E/PE safety-related functions with on-line AI. All I see is focused on off-line AI, meaning that the training is done on specified training datasets, and the validation is done on specified test datasets. We don't see any performance better than roughly 3.3 × 10<sup>-3</sup> in automotive. So let's wait for the achievement of a 10<sup>-8</sup> to 10<sup>-5</sup> error rate on test datasets and discuss the acceptability later.<br>
<br>
Attention: due to increased cybersecurity screening I may receive external emails 30 minutes after their emission.<br>
<br>
Bertrand RICQUE<br>
Safety and security standardisation Expert<br>
Program Support Manager<br>
Optronics and Defence Division/Customer Support<br>
Safran Electronics & Defense<br>
<br>
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64<br>
bertrand.ricque@safrangroup.com<br>
102 Avenue de Paris<br>
91300 MASSY FRANCE<br>
www.safran-electronics-defense.com<br>
<br>
-----Original Message-----<br>
From: Les Chambers <les@chambers.com.au><br>
Sent: Monday, 26 June 2023 06:03<br>
To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE) <bertrand.ricque@safrangroup.com>; les@chambers.com.au; koopman.cmu@gmail.com; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: RE: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
RE your comment: "as it is impossible, by construction, to identify all dangerous situations"<br>
<br>
True. But it is possible to identify the subset of highest-probability "dangerous situations" and deal with them. I had 10 years of experience with this issue in computer control of chemical processing reactors. In the 1970s, we had a safety authority write software to: 1. identify well-established dangerous states of a reactor; 2. transition the reactor to a safe state - usually shut down. It was called the "abort" programming. This abort code overrode the control software written by another team. By today's standards it was primitive but still very effective.<br>
This experience has influenced my thinking on ways and means of dealing with "bad" or "hallucinating" AIs. We need a separate and distinct "abort" AI capable of recognising evil and dealing with it.<br>
Sal Khan (Khan Academy) has implemented another idea in creating his Khanmigo personal tutor. To improve its accuracy in mathematics, he allows the AI to have "personal thoughts": the AI equivalent of mindfulness (straight out of Marcus Aurelius). He gives the impression that he feeds back the AI's response to the AI, so the AI can have a think about what it is thinking. A bit like briefing a fish on the concept of water.<br>
<br>
This supports my pitch for engineers to be taught philosophy. The concepts of virtue and evil were defined simply 2000 years ago. These definitions have stood outside of time and can be implemented with appropriate pattern matching.<br>
If Marcus Aurelius was reincarnated, he'd be running international seminars on mindfulness. The foundation ideas have not changed a whit.<br>
<br>
Cheers<br>
Les<br>
<br>
<br>
Answer to the first question. Once the NN is trained, it will always give the same output for the same input. It is stupid software logic.<br>
<br>
Answer to the second question. The current safety paradigm is based on perception-interpretation-decision-action loops, based on the fact that:<br>
<br>
1. Either all states of the system, leading to all identified losses, can be exhaustively defined and their probability of occurrence remains, by concept and construction of the system, below a tolerable level, enabling fully autonomous functions (e.g. emergency shutdown functions in process industries),<br>
<br>
2. Or there is a human in control of the system with the ability of arbitrating the decisions, meaning the ability to at least perceive and interpret.<br>
<br>
AI defeats point 1, as it is impossible, by construction, to identify all dangerous situations (the training dataset is incomplete, and whatever its size, the operational domain being infinite (or close to infinite from a practical point of view), training dataset size divided by infinity = 0).<br>
<br>
To defeat the second assumption, just replace a car windscreen with a video screen. No need of AI.<br>
<br>
Attention: due to increased cybersecurity screening I may receive external emails 30 minutes after their emission.<br>
<br>
Bertrand RICQUE<br>
Safety and security standardisation Expert Program Support Manager<br>
Optronics and Defence Division/Customer Support Safran Electronics &<br>
Defense<br>
<br>
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64<br>
bertrand.ricque@safrangroup.com<br>
102 Avenue de Paris<br>
91300 MASSY FRANCE<br>
www.safran-electronics-defense.com<br>
<br>
-----Original Message-----<br>
From: Les Chambers <les@chambers.com.au><br>
Sent: Thursday, 22 June 2023 23:46<br>
To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE) <bertrand.ricque@safrangroup.com>; koopman.cmu@gmail.com; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: RE: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
RE your comment, <br>
</blockquote>
</blockquote>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff."A neural <br>
<blockquote type="cite">network is<br>
<blockquote type="cite">software<br>
<br>
</blockquote>
</blockquote>
logic.ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff,Ãfff,Ãff,Ãf,<br>
Ã,Â<br>
<blockquote type="cite">
<blockquote type="cite"><br>
A neural Network is a component of an AI agent. An AI agent is a<br>
suite of general purpose software tools. The agent is configured from a<br>
dataset.<br>
<br>
ChatGPT goes on (see dialog below):<br>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff."Synthesizing <br>
</blockquote>
</blockquote>
an AI<br>
<blockquote type="cite">
<blockquote type="cite">agent<br>
involves training the machine learning model on a large<br>
<br>
</blockquote>
</blockquote>
dataset.ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff,Ãfff,Ãff,Ã<br>
f,Ã,Â<br>
<blockquote type="cite">
<blockquote type="cite"><br>
This indicates that the training dataset determines the<br>
<br>
</blockquote>
</blockquote>
agentÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃff,Ãf,Ã,¢"Ãf<br>
ff,Ãff,Ãf,Ã,¢s<br>
<blockquote type="cite">
<blockquote type="cite">behaviour (transfer function - inputs to outputs). In the case of a<br>
Tesla motor<br>
vehicle,<br>
the inputs are camera sensors and the outputs will soon be controls<br>
to the accelerator, brake and steering.<br>
<br>
RE your comment, <br>
</blockquote>
</blockquote>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff."The problem is <br>
<blockquote type="cite">not AI. It<br>
<blockquote type="cite">is the<br>
exhaustive<br>
deterministic<br>
relationship between the undesired losses and the system states.<br>
Anything breaking this, breaks the current safety paradigm. And AI<br>
is not the single thing to achieve<br>
<br>
</blockquote>
</blockquote>
this.ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff,Ãfff,Ãff,Ãf,Ã<br>
,Â<br>
<blockquote type="cite">
<blockquote type="cite"><br>
Could you please expand on this idea. Possibly using an example.<br>
<br>
How would your thought apply here: I am a driver at the wheel of a<br>
Tesla travelling at 110 km/h on the freeway. I have switched the<br>
vehicle to full<br>
self-<br>
driving mode. I am fast asleep. The FSD agent detects a false<br>
positive <br>
</blockquote>
</blockquote>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃff,Ãf,Ã,¢?o<br>
<blockquote type="cite">
<blockquote type="cite">a lady<br>
with a pram. It jams on the brakes initiating a rear-end collision<br>
with a following semi.<br>
Q: In this case how is it that, <br>
</blockquote>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff."The problem<br>
<blockquote type="cite">is not<br>
<br>
</blockquote>
</blockquote>
AIÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff,Ãfff,Ãff,Ãf,Ã,Â<br>
<br>
<blockquote type="cite">
<blockquote type="cite"><br>
Les<br>
<br>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃff,Ãf,Ã,¢?<br>
<br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?Ãfff,Ãff,Ãf,Ã,Â<br>
<blockquote type="cite">
<blockquote type="cite">ChatGPT Dialog<br>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃff,Ãf,Ã,¢?<br>
<br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite"><br>
</blockquote>
</blockquote>
Ãfff,Ãff,Ãf,Ã,ÂÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬ÃffffÃf<br>
f,Ãf,Ã,¢?Ãfff,Ãff,Ãf,Ã,Â<br>
<blockquote type="cite">
<blockquote type="cite"><br>
ChatGPT prompt:<br>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff."Describe an <br>
</blockquote>
</blockquote>
AI agent from a <br>
<blockquote type="cite">technical<br>
<blockquote type="cite">perspective. What<br>
are its<br>
components<br>
and<br>
how is it synthesized?<br>
</blockquote>
<br>
</blockquote>
ÃfffffÃfff,Ãff,Ãf,Ã,¢ÃffffÃff,Ãf,Ã,¢,Ãfff,Ãff,Ãf,Ã,¬Ãffff,Ãfff,Ãff,Ãf,Ã,Â<br>
<blockquote type="cite">
<blockquote type="cite"><br>
Response:<br>
An AI agent, from a technical perspective, typically consists of<br>
several components that work together to perform specific tasks.<br>
Here are the key components and how they are synthesized:<br>
<br>
1. Data: The foundation of any AI agent is data. It can be in<br>
various forms such as text, images, audio, or structured data. This<br>
data is used for<br>
training<br>
and providing context to the AI model.<br>
<br>
2. Machine Learning Model: The AI agent utilizes a machine learning<br>
model<br>
to<br>
process and generate responses. These models are trained on vast<br>
amounts of data and learn patterns to make predictions or generate<br>
outputs.<br>
<br>
3. Natural Language Processing (NLP): NLP is a crucial component for<br>
understanding and processing human language. It involves tasks like<br>
language translation, sentiment analysis, intent recognition, and<br>
entity extraction.<br>
NLP<br>
enables the AI agent to comprehend user inputs and generate relevant<br>
responses.<br>
<br>
4. Knowledge Representation: Knowledge representation refers to how<br>
information is stored and organized within the AI agent. It can<br>
employ various<br>
techniques<br>
such as semantic networks, ontologies, or graph databases to store<br>
facts<br>
and<br>
relationships. This allows the agent to access and utilize knowledge<br>
when responding to user queries.<br>
<br>
5. Dialogue Management: Dialogue management focuses on handling the<br>
flow of conversation. It involves tracking the dialogue context,<br>
managing user requests and responses, and generating meaningful and<br>
coherent replies.<br>
Reinforcement<br>
learning techniques can be applied to improve dialogue management.<br>
<br>
6. Integration and APIs: AI agents can integrate with external<br>
systems and services using APIs (Application Programming<br>
Interfaces). This allows them<br>
to<br>
access relevant data sources, databases, or external functionalities<br>
to provide more comprehensive and accurate responses.<br>
<br>
7. User Interface: The AI agent needs a user interface to interact<br>
with<br>
users<br>
effectively. It can be a chat interface, voice-based interface, or a<br>
combination of both. The UI should provide a seamless experience for<br>
users to interact with the agent.<br>
<br>
Synthesizing an AI agent involves training the machine learning<br>
model on a large dataset. This process includes steps like<br>
preprocessing the data, designing the model architecture, selecting<br>
appropriate algorithms,<br>
training<br>
the model using optimization techniques, and fine-tuning it for<br>
specific<br>
tasks.<br>
The trained model is then integrated with other components, such as<br>
NLP, knowledge representation, and dialogue management, to create a<br>
functioning AI agent. Regular updates, feedback loops, and<br>
continuous improvement are essential to enhance the agent's<br>
performance over time.<br>
<br>
--------END Chat ----------<br>
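<br>
A skeletal wiring of those components, as I read them, might look like the sketch below. Every class name here is an illustrative stub of mine, not any real framework:<br>
<pre>
# Skeletal AI agent wiring together the components listed in the chat above.
# All classes are illustrative stubs, not a real framework.

class Model:                      # 2. machine learning model
    def generate(self, text, facts):
        return f"reply({text!r}, using {facts})"

class NLP:                        # 3. natural language processing
    def parse(self, text):
        return {"intent": "ask",
                "entities": [w for w in text.split() if w.istitle()]}

class KnowledgeBase:              # 4. knowledge representation
    def lookup(self, entities):
        return {e: "known-fact" for e in entities}

class DialogueManager:            # 5. dialogue management
    def __init__(self, model, nlp, kb):
        self.model, self.nlp, self.kb = model, nlp, kb
        self.history = []         # dialogue context (1. data lives here too)

    def turn(self, user_text):    # 6./7. APIs and UI would sit outside this
        parsed = self.nlp.parse(user_text)
        facts = self.kb.lookup(parsed["entities"])
        reply = self.model.generate(user_text, facts)
        self.history.append((user_text, reply))
        return reply

agent = DialogueManager(Model(), NLP(), KnowledgeBase())
print(agent.turn("Tell me about Tesla"))
</pre>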
<br>
<br>
The problem is not AI. It is the exhaustive deterministic relationship between the undesired losses and the system states. Anything breaking this, breaks the current safety paradigm. And AI is not the single thing to achieve this.<br>
<br>
Attention: due to increased cybersecurity screening I may receive external emails 30 minutes after their emission.<br>
<br>
Bertrand RICQUE<br>
Safety and security standardisation Expert Program Support Manager<br>
Optronics and Defence Division/Customer Support Safran Electronics<br>
& Defense<br>
<br>
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64<br>
bertrand.ricque@safrangroup.com<br>
102 Avenue de Paris<br>
91300 MASSY FRANCE<br>
www.safran-electronics-defense.com<br>
<br>
-----Original Message-----<br>
From: systemsafety <systemsafety-bounces@lists.techfak.uni-bielefeld.de> On behalf of Phil Koopman<br>
Sent: Thursday, 22 June 2023 03:32<br>
To: les@chambers.com.au; systemsafety@lists.techfak.uni-bielefeld.de<br>
Subject: Re: [SystemSafety] AI and the virtuous test Oracle<br>
<br>
Les,<br>
<br>
Since you welcome riffs, I have something that is not as all-<br>
encompassing,<br>
but might have more immediate application.<br>
<br>
I propose that to the degree that "AI" technology is deployed in<br>
a way<br>
that<br>
supplants practical human judgement, the manufacturer of that<br>
system<br>
(in<br>
some cases just the AI part if it is an add-on component) should<br>
be<br>
held<br>
accountable for any action (or inaction) that, if associated<br>
with the<br>
human<br>
that was supplanted, would have constituted <br>
</blockquote>
</blockquote>
negligence.ÃffffffÃffffÃff,Ãf,Ã,¢?<br>
<blockquote type="cite">
<blockquote type="cite">sÃfffff,Ãffff,<br>
This<br>
should<br>
include<br>
situations in which a human is put in an untenable situation of<br>
supervising<br>
an<br>
AI in a way that puts unreasonable demands upon them, amounting<br>
to a "moral crumple zone"<br>
approach<br>
<br>
(https://urldefense.com/v3/__https://papers.ssrn.com/sol3/papers.cfm<br>
?ab<br>
<br>
stract_id=2757236__;!!Dl6pPzL6!dV6V79CEWJVLcdXXS5n2wYWdaCGJCzdLlz4<br>
gg9Cz063kcikC8CIr0YMf2lF9o5xNrnA0Av-DS0QOuOFaUivQZX7h$ ).<br>
Liability/negligence if an AI is in substantive control of such<br>
a<br>
situation<br>
should<br>
attach to the manufacturer.<br>
<br>
This leads to a more narrow oracle, but perhaps still useful,<br>
than you propose. If a loss event is caused by a lack of<br>
"reasonable" behavior<br>
by an<br>
AI,<br>
the manufacturer is on the hook for negligence, and the<br>
AI/manufacturer owes a duty of care the same as the human who<br>
was supplanted would<br>
have<br>
owed to whoever/whatever might be affected by that negligence.<br>
It has<br>
the<br>
advantage of reusing existing definitions of "reasonable person"<br>
that<br>
have<br>
been hammered out over decades of law. (To be sure that is not<br>
in the<br>
form<br>
of an engineering specification, but case law has a pretty<br>
robust set<br>
of<br>
precedents, such as crashing into something after your properly<br>
functioning<br>
vehicle ran a red light is likely to lead to the driver being<br>
found<br>
negligent.)<br>
<br>
This does not require the AI to behave the same as people, and<br>
is not a<br>
full<br>
recipe for "safe" AI. But it puts a floor on things in a way<br>
that is<br>
readily<br>
actionable using existing legal mechanisms and theories. If a<br>
reasonable<br>
person would have avoided a harm, any AI that fails to avoid the<br>
harm<br>
would<br>
be negligent.<br>
<br>
I've worked with a lawyer to propose this approach for automated<br>
vehicles,<br>
and it is starting to get some traction. What I write in this<br>
post<br>
(above)<br>
is a<br>
generalization of the concept beyond the narrow automated<br>
vehicle application.<br>
Details here:<br>
<br>
https://urldefense.com/v3/__https://safeautonomy.blogspot.com/2023/0<br>
5<br>
/a-liability-approach-for-<br>
<br>
automated.html__;!!Dl6pPzL6!dV6V79CEWJVLcdXXS5n2wYWdaCGJCzdLlz4gg<br>
9Cz063kcikC8CIr0YMf2lF9o5xNrnA0Av-DS0QOuOFaUh295b5I$<br>
<br>
-- Phil<br>
<br>
On 6/21/2023 7:14 PM, Les Chambers wrote:<br>
Hi All<br>
<br>
I find myself reflecting on what will become of us, as systems engineering best practice is overrun by AI.<br>
<br>
Practitioners report that neural networks are eating code.<br>
Example 1: The vector field surrounding a Tesla motor vehicle is an output of a neural network, not the result of software logic. Soon the neural net - not code - will generate controls. The size of the code base is reducing. (Elon Musk)<br>
Example 2: The ChatGPT transformer code base is only 2000 LOC (Mo Gawdat, https://youtu.be/bk-nQ7HF6k4)<br>
<br>
The intelligence resides in terabytes of data, perceptrons and millions of weighting parameters. All are gathered by automated means. Not subject to human review.<br>
<br>
Ergo what will become of our trusty barriers to dangerous failure:<br>
1. Safety functions - gone<br>
2. Verification - gone<br>
3. Code reviews - gone<br>
4. Validation - How?<br>
<br>
On validation, may I suggest the moral AI: a test oracle built on a virtuous dataset, capable of interrogating the target system to determine virtue. Test outcomes will morph from pass/failure to moral/immoral.<br>
<br>
Credible industry players have predicted that soon we will have AIs orders of magnitude smarter than us. Especially when they start talking to each other. The bandwidth will be eye-watering - the increase in intelligence, vertical.<br>
<br>
New barriers are required. Time to develop an AI that is on our side - the side of ethics and the moral life. An adult in the room, if you like. We should birth this creature now and raise it as good parents.<br>
<br>
Let us not panic. May I put the proposition: virtue, like creativity, can be algorithmic.<br>
I have a sense of starting from the beginning - tabula rasa. I suggest that high-level thinking on the subject could begin with ChatGPT prompts:<br>
1. What is the stoic philosopher's concept of virtue?<br>
2. What are the elements of philosophy relevant to AI?<br>
<br>
Let us not forget our engineering mission: Guardians of the divine Logos, the organizing principle of the universe, responsible for its creation, maintenance, and order.<br>
<br>
Would anyone care to riff on this?<br>
<br>
Les<br>
<br>
--<br>
<br>
Les Chambers<br>
<br>
les@chambers.com.au<br>
systemsengineeringblog.com<br>
<br>
+61 (0)412 648 992<br>
_______________________________________________<br>
The System Safety Mailing List<br>
systemsafety@TechFak.Uni-Bielefeld.DE<br>
Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety<br>
<br>
--<br>
Prof. Phil Koopman koopman@cmu.edu<br>
(he/him)<br>
<br>
https://users.ece.cmu.edu/~koopman/<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
_______________________________________________<br>
The System Safety Mailing List<br>
systemsafety@TechFak.Uni-Bielefeld.DE<br>
Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
--<br>
<br>
Les Chambers<br>
<br>
les@chambers.com.au<br>
<br>
+61 (0)412 648 992<br>
</div>
</div>
</div>
<br>
</div>
</body>
</html>