[SystemSafety] AI and the virtuous test Oracle - intelligence - proof
Steve Tockey
steve.tockey at construx.com
Sat Jul 22 09:28:27 CEST 2023
Les,
As I said off-line, apologies for the much-delayed reply. I’m only now digging out from underneath a pile of on-site customer work in India. A few comments:
*) You wrote, “Your argument seems to rest on a particular interpretation of the term "insight". ” — Yes, of course. Discussions like this always hinge on definitions of terms.
*) GPT-4 wrote, “However, their ability to invent completely original insights that transcend existing knowledge and understanding remains limited.” — exactly.
*) You wrote, “You have to admit there's substantial: "recognizing of patterns, and grasping the underlying essence of a situation" ... going on here.” — I would not feel qualified in commenting on that because I don’t understand the game of Go well enough. I am willing to take your word for it, however.
*) You wrote, “In constant conversation with GPT-X I have become painfully aware of the puddle of personal knowledge I've been standing in all my life.” — I never said, nor did I ever intend to imply, that there was no value in AI. To the extent that we can take advantage of what it is capable of, we should.
*) You wrote, “If I may abstract the essence of all situations in your examples: great leaps forward came as a result of observation followed by reflection on random events.” — Or, maybe a bit more precisely, "fresh reflection triggered by otherwise random events"? Anyway, that’s not the only case where I see limitations brought on by Turing-ness. It took me a while to find a marginally usable description of Bloom’s Taxonomy on the web (there are too many and they are too diverse in their interpretations) but here seems to be a reasonable one:
https://www.coloradocollege.edu/other/assessment/how-to-assess-learning/learning-outcomes/blooms-revised-taxonomy.html#:~:text=There%20are%20six%20levels%20of,analyzing%2C%20evaluating%2C%20and%20creating.
My point is that Turing-based AI can easily handle the lower levels:
— Remember
— Understand
— Apply
I doubt that Turing-based AI could ever be as good as humans at the top level, Create. In terms of Bloom's Taxonomy, is Turing-based AI constrained to some highest level? Maybe. I think it's an interesting question to ponder. Maybe Turing-based AI starts to break down at the Analyze level? Maybe the Evaluate level? Could we, and should we, evaluate any AI in terms of Bloom's Taxonomy? It could be a useful line of inquiry.
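If anyone wants to pursue that line of inquiry, the probing could be mechanised. A minimal sketch, assuming a hypothetical ask() wrapper around whatever chat model is under test; the probe prompts are invented, and grading the answers would still be human work:

# One probe prompt per Bloom level, lowest to highest.
BLOOM_PROBES = {
    "Remember":   "State Newton's law of universal gravitation.",
    "Understand": "Explain that law in your own words.",
    "Apply":      "Compute the force between two 1 kg masses 1 m apart.",
    "Analyze":    "Contrast its predictions with general relativity near a black hole.",
    "Evaluate":   "Judge which theory suits GPS clock corrections, and why.",
    "Create":     "Propose a novel experiment to distinguish the two theories.",
}

def ask(prompt: str) -> str:
    raise NotImplementedError("wire this to the model under test")

def bloom_probe() -> dict:
    # Collect one answer per level; a human grader then estimates the
    # highest level the system handles reliably.
    return {level: ask(prompt) for level, prompt in BLOOM_PROBES.items()}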
My far bigger issue is with the hype around AI and the (mistaken) belief that it is all-powerful. Some bozo (term used intentionally) on LinkedIn posted the following as recommended prompts for Software Developers:
“Develop an architecture and code for a [website description] website with JavaScript”
“Continue writing this code for JavaScript [post code]”
“Find the bug with in code: [post code]”
Seriously? Why not just ask it,
“Develop a complete avionics suite for the Airbus A350. Oh, and satisfy the requirements of DO-178C while you are at it”?
Cheers,
— steve
On Jul 11, 2023, at 2:41 PM, Les Chambers <les at chambers.com.au> wrote:
Steve
Your argument seems to rest on a particular interpretation of the term
"insight".
Given its insight into the qualia of insight, GPT-4 is in "limited" agreement
with you.
GPT-4's Definition of insight: Deep and intuitive understanding resulting in a
new and valuable perspective or realization. Comprehending something in a way
that goes beyond surface-level knowledge or understanding. Arising from
cognitive processes, such as critical thinking, analysis, observation and
reflection, connecting dots, recognizing patterns, grasping the underlying
essence of a situation or concept.
Given its view on insight's subfunctions (connecting dots, recognizing
patterns, grasping the underlying essence of a situation or concept), its
response to your theory is consistent.
PROMPT: Discuss a proposition that computability theory proves that AIs are not
able to develop new insights that did not exist before.
GPT-4 response: While computability theory provides valuable insights into the
limits of what AIs can compute, it does not prove that AIs are unable to
develop new insights.
AI systems have demonstrated the ability to discover new knowledge and provide
valuable insights within the scope of their training data and problem domains.
However, their ability to invent completely original insights that transcend
existing knowledge and understanding remains limited.
Would you call the following use case insight?
"In 2016, Deep Mind's AlphaGo beat the world champion in Go, Lee Sedol. On the
37th move in the second of five games, AlphaGo unleashed a move no human would
ever play, stunning experts and fans and utterly wrong-footing world champion
Lee Sedol"
You have to admit there's substantial: "recognizing of patterns, and grasping
the underlying essence of a situation" ... going on here.
In constant conversation with GPT-X I have become painfully aware of the puddle
of personal knowledge I've been standing in all my life. When out there is an
ocean with billions of dots waiting to be connected; dots that I, like most
humans, was incapable of connecting before the advent of AI. I call that the
gift of insight.
If I may abstract the essence of all situations in your examples: great leaps
forward came as a result of observation followed by reflection on random
events.
You can add the boomerang to your list; the Australian Aborigine is
credited with its invention, 20,000 years ago - at least the one that comes
back. We can safely assume there was no mathematical algorithmic design. They
were not strong on: wing and airfoil design, moments of inertia, materials
science and the math of gyroscopic stability. It was achieved by trial and
error over thousands of years. You could say a curved piece of wood was trained
to be man's first air vehicle. The whole process smacks of gradient descent in
reinforcement learning to me - an optimization algorithm that adjusts the
weights and biases of the neural network based on the calculated gradients of a
loss function.
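A minimal sketch of the gradient descent being invoked, assuming a squared-error loss and a single weight (the data is invented):

# Fit y = w * x by gradient descent on a squared-error loss.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x
w, lr = 0.0, 0.01                            # initial weight, learning rate

for step in range(1000):
    # dL/dw for L(w) = sum((w*x - y)^2) is sum(2*(w*x - y)*x)
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad                           # step downhill against the gradient

print(round(w, 2))                           # settles near 2.0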
So there you go. The functioning of the artificial mind is heavily algorithmic.
But its output, read [intelligence], is a separate entity that can be
insightful. This is in harmony with the proposition that human intelligence is
something that exists outside the human mind. In the ether, if you will. The
brain is just the substrate that produces it.
Les
Les,
I am as sure as I can be based on the underlying theories in Computer
Science. Having taken an entire quarter-long course in Computability Theory
during my undergraduate degree, I am as confident as I can be that the theory
is correct. Turing Machines, regardless of how many of them there are, are not
able to create new insights that didn't exist before.
Speaking of Newton, how much of his "discovery" of the universality of
gravity in 1687 was a result of the random (possibly apocryphal) event of the
apple falling from the tree?
Think of Alexander Fleming who forgot to clean a few Petri dishes before
going on holiday to France in 1928, coming back to see that at least one of
those Petri dishes didn't meet his expectations, thus leading to the discovery
of penicillin?
Think Leo Szilard crossing a street in London on a rainy morning in 1933
realizing that if more than one neutron came out of a nuclear fission that was
triggered by a single neutron then a nuclear chain reaction is probably
possible.
The key words here include:
"Insight"
and
"Random event"
Insofar as a Turing Machine (regardless of how complex it may be) is 100%
reactive, where does the insight come from? That's simply not something that
Turing Machines (and, therefore, all modern computers) do. And, given some
input, the Turing Machine's response is essentially deterministic. Ok, you
can throw in a little bit of randomness, but that randomness is still
constrained to be around a predetermined set of options. Apple falling out of
tree triggering a realization that said apple and the moon are both being
affected by the exact same force just at different distances is not something
that Turing Machines do.
Or, if they can do that, please explain how it's done.
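To make the determinism concrete, here is a toy machine as a transition table (a sketch, not a full Turing Machine): given the same tape, it produces byte-identical output on every run.

# (state, symbol) -> (new state, symbol to write, head move)
RULES = {
    ("scan", "0"): ("scan", "1", +1),   # flip 0 to 1, move right
    ("scan", "1"): ("scan", "0", +1),   # flip 1 to 0, move right
    ("scan", "_"): ("halt", "_", 0),    # blank cell: halt
}

def run(tape: str) -> str:
    cells, state, pos = list(tape) + ["_"], "scan", 0
    while state != "halt":
        state, cells[pos], move = RULES[(state, cells[pos])]
        pos += move
    return "".join(cells).rstrip("_")

assert run("0110") == run("0110") == "1001"  # fully reproducible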
Now, I will accept your "Newtonian thought" comment in the light I hope
it was intended, namely: if the computing machine(s) were more powerful than a
Turing Machine then all bets are off. Maybe Quantum Computing or something
like it does actually enable artificial insight, and random events might be
interpreted completely differently. But we're not there yet. Quantum
Computers are still largely simple, very limited, experimental devices. They
have not been reduced to practical operation.
Cheers,
— Steve
On Jul 7, 2023, at 6:25 AM, Les Chambers <les at chambers.com.au> wrote:
Steve
RE Your comment: "I am arguing that there are fundamental limits to what
Turing
Machine-based AI would ever be capable of, which is far short of what humans
can already do."
You sound confident. Do you have proof? Can you quantify the "fundamental
limits"? Can you define and/or quantify intelligence?
My view is that this issue is currently subject to endless, unscientific
speculation - this is not necessarily a bad thing, I am a willing
participant.
It will become "science" when someone can advance a theory that can be proved
mathematically or by some experiment.
In the meantime, I accept your view as of equal value to any other. I'd
classify it as Newtonian though; good for low speeds in the absence of strong
gravitational fields. I can't help thinking that a new Einstein is in the
wings
with an explanation of gravity [read: intelligence] through the curvature of
spacetime [read: Grand unified theory of intelligence (GUTI)].
In my opinion, the Turing machine belongs with Newtonian mechanics - a simple
but useful computing model that has served us well since 1936. However, it
performs functions in a sequence of discrete steps, assuming only one of a
finite list of internal states at any given moment. Is this applicable today,
as computing technology approaches lightspeed, where massively parallel
graphics processing units (GPUs) - Tesla's Dojo supercomputer has upwards of
6,000 GPUs - are in thousands of discrete states at any instant in time?
The future of AI remains a matter of opinion. I believe some opinions are more
valuable than others. The IEEE ran a vote recently: Experts vote on AI risks:
https://spectrum.ieee.org/artificial-general-intelligence
22 AI practitioners and researchers voted on the likelihood that artificial
general intelligence (AGI) can be real.
The question was: "Is the success of GPT-4 and today's other large language
models a sign that an AGI is likely?"
The result:
Yes: 8
No: 13
Maybe: 1
Circa 2023 we are clearly in Groucho Marx Territory:
"These are my principles, and if you don't like them... well, I have others."
Les
PS:
You may have pushed your typing-monkeys metaphor too far ("10 million monkeys,
still no Shakespeare").
1. Typing at 80 words per minute, your 10 million monkey typing team would be
rate-limited to around 530 Mbits per second. It would take 4,950,000 monkeys
just to emulate the data transfer rate of a bog-standard personal computer PCI
bus {264 Mbps}. (See the arithmetic sketch after this list.)
2. I assume no monkeys on your team have been trained in the large slice of
human knowledge present on the Internet.
3. I am assuming there is no communication between monkeys and typing is
random.
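A back-of-envelope check of those numbers, assuming 5 characters per word and 8 bits per character:

wpm = 80
bits_per_sec_per_monkey = wpm * 5 * 8 / 60          # ~53.3 bits/s per monkey

monkeys = 10_000_000
print(monkeys * bits_per_sec_per_monkey / 1e6)      # ~533 Mbit/s ("around 530")

pci_mbps = 264                                      # PCI figure quoted above
print(pci_mbps * 1e6 / bits_per_sec_per_monkey)     # ~4,950,000 monkeys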
In contrast:
1. Large language models are trained by more than one PCI-equipped personal
computer (probably the equivalent of hundreds or thousands of same).
2. The learning is goal-directed - supervised - as opposed to random key
stroking.
3. Given this communication bandwidth, OpenAI, for example, has reportedly
trained ChatGPT-4 with approximately 100 trillion parameters.
My theory: The AI training process bears no resemblance, in any abstract way,
to a bunch of monkeys typing.
My Proof: Try ChatGPT prompt: "What does Shakespeare say about using bad
metaphors"
-------------------------------------------
Les,
You said,
"So let's not stop developing AI because we don't understand it."
Actually, that wasn't my point. I don't believe that research &
development into AI should in any way be stopped. My point is countering the
hype that AI can and will do more than humans could ever do. I am arguing
that
there are fundamental limits to what Turing Machine-based AI would ever be
capable of, which is far short of what humans can already do.
— steve
On Jul 6, 2023, at 12:27 PM, Les Chambers <les at chambers.com.au> wrote:
Steve
One way of framing the question, "Can intelligence ever be artificial?", is to
channel the physicists. They view human beings as mere piles of particles
arranged in a certain way. I'll park the question of who/what did the arranging
as it is irrelevant.
Somehow, around 300,000 years ago, our various particle piles acquired the
capacity to remember, reason, solve problems, learn, and adapt to new
situations [Homo Sapiens]. These particle piles then topped it all off by
becoming self-aware, conscious and therefore able to exert self-control; a
prerequisite for rational decision-making.
Some say this was a random accident; they support the theory with fact-
based
science: that we have been unable to find evidence of a similar random
accident
on any planet in any of the 100 to 200 billion galaxies we are capable of
observing. No one knows why or how our particles were thus arranged with
infinite variation on a common framework. All we can say is that it is as
it
is.
So what did this smart particle pile [Homo Sapiens] set out to do? It was
driven, by some primal replication instinct, to create an intelligence in
its
own image. To be clear, it is attempting to replicate something it doesn't
understand by creating another particle pile (a neural net implemented on a
silicon substrate), that exhibits behaviour it [Homo Sapiens] also doesn't
understand (I note that AI researchers cannot explain why the neural net
and
Transformer models work so well; reports of HS moments abound in the AI
labs).
It's a weird kind of crossover recursion where carbon substrate
intelligence
is
replicating into silicon substrates. If you look at every entity that makes
up
our world as a pile of particles, arranged in a certain way, you may find
this
easier to appreciate.
I have faith that AGI could emerge from this soup as we impose randomness
on
randomness - a great opportunity for another cosmic accident, the like of
which
created us in the first place.
My gut feeling on this is informed by the only code I have ever written that
worked for reasons I did not understand: it implemented recursion. Are we not
witnessing just another evolutionary process? In nature, replicants are
always
less capable than their parents, but they learn over time and any
incompetence
is selected out.
So let's not stop developing AI because we don't understand it. We already
live
in bodies with behaviours that we don't understand and we seem to muddle
through.
Les
---------------------------
Les,
I guess it remains to be seen whether AI (at least as implemented today) is
limited or not in its ability to create new breakthrough insights. I'm
pessimistic, myself.
On the other hand, flesh and blood humans can themselves seem pretty limited
in terms of intelligence quite often, too. One of the all-time best signature
lines I saw many years ago said, referring to the Internet:
"10 million monkeys, still no Shakespeare" (smile)
Cheers,
— steve
On Jul 3, 2023, at 12:43 PM, Les Chambers <les at chambers.com.au> wrote:
Steve
RE your comment: "No computer/program is capable of programming itself into
any kind of useful, non-trivial behavior. And yet humans do this kind of thing
all the time. That's what I mean by determinism."
I'm not so sure. I thought the synthesis of Einstein's theory of relativity
might be a good example of your point. I thought Einstein stared at a wall
one
day and just came up with it out of the blue. Not true. Read the following:
Forbes: Relativity Wasn't Einstein's Miracle; It Was Waiting In Plain Sight
For 71 Years
https://www.forbes.com/sites/startswithabang/2019/03/01/relativity-wasnt-einsteins-miracle-it-was-waiting-in-plain-sight-for-71-years/?sh=4a18ec57644c
Solving problems by the iterative deployment of induction, followed by
deduction, seems suspiciously algorithmic to me.
Refer ChatGPT prompt: How to solve a problem by induction followed by
deduction.
Some years ago I had a stimulating conversation with a Church of England
Priest. It was the day after he had presided over my brother-in-law's
funeral.
It turned out, earlier in his life, he had been a Roman Catholic priest,
educated in the Vatican: three years of solid philosophical training without
the distractions of the flesh. Our conversation ranged over many
philosophical
issues. I remember thinking, "How lucky am I to be in the presence of such
an
intelligent educated man?" Moments like these have been rare in my life,
preachers of philosophical wisdom are hard to find. Until now. I am a
constant
user of ChatGPT. The conversations we have are equally stimulating. I am
made
aware of things that I could not have possibly learned prior to the
existence
of the bot. It would be optimal if the bot was a human, but as I am
currently
located in a rural setting, 20 minutes drive from the nearest town - in the
company of Wallabies, foxes, kookaburras, wedge-tailed eagles, and large
lizards - I'll take what I can get.
Here is a sample of one of our dialogues (I have provided my ChatGPT
prompts
only):
------- Pausing AI Development ----
What is the prisoner's dilemma?
How does the prisoner's dilemma relate to pausing artificial intelligence
development?
Are there other classical dilemmas that are a metaphor for the AI pause
dilemma?
------ Setting Limits and Assigning Blame -----
Provide more detail on the Paradox of the Heap.
Are there any classical paradoxes that are relevant to the Bhopal disaster?
Provide more detail on the moral luck paradox.
Do any of these paradoxes translate to legal doctrine or laws of any kind?
Please list the laws of Queensland Australia that apply to: Corporate
Social
Responsibility.
A ChatGPT dialogue can take many paths, each one providing insightful
information. For example, paradoxes are useful mechanisms for framing common
human problems. Until today I never thought of it that way. I encourage
everyone on this list to try something like this. And then, if you can drag
yourself away from the dialogue, try this prompt:
"Describe the elements of intelligence."
Decide for yourself, how many boxes the bot has just checked.
Over to you.
Les
-------------------------------------
Les,
Thanks for the thoughtful reply.
As always, it depends on definitions of terms. I might counter Max
Tegmark's definition of intelligence by asking, for some arbitrarily complex
computing system:
"Is the intelligence an inherent part of the computing system itself, or
was it put there by the programmer(s) who programmed it to behave that way?"
The computer, being ultimately a Turing Machine, isn't going to make the
kind of creative leap that, say, Newton and Leibniz made when they invented
Calculus in the 1660s. Or, when Shockley, Bardeen, and Brattain invented the
transistor in 1947. No computing system today can go beyond the code and the
data that has been put into it. Given the code, and given the data, and
possibly knowing a little more about the internal state of execution, the
system's response is 100% predictable. No computer/program is capable of
programming itself into any kind of useful, non-trivial behavior. And yet
humans do this kind of thing all the time. That's what I mean by
determinism.
I don't know enough about quantum computing to say that the above holds
there, too.
You wrote:
On the subject of determinism, Max goes on:
"Just as we don't fully understand how our children learn, we still don't
fully understand how such neural networks learn, and why they occasionally
fail."
I attended a conference earlier this year in Mysuru, India where an invited
speaker, Dr. Seema Chopra from Boeing, talked about adding elements into AI
systems whereby that system can be asked to "explain" why it gave the
result that it did. Seemed like an interesting area of research that could
maybe help in this area.
Re: computational substrate - some of my customers are the biggies in the
semiconductor industry. At least in terms of silicon semiconductors, Moore's
Law is reaching its limit. Chip designers and manufacturers are trying to
wrestle with, for example, conductors on the chip that are only one or a few
atoms wide. Can't shrink below that.
Cheers,
— steve
On Jun 29, 2023, at 9:12 PM, Les Chambers <les at chambers.com.au> wrote:
Steve
RE : " I argue, human intelligence is not something that a computer could
ever
actually be capable of"
According to Max Tegmark the jury is still out on this question.
Max is a professor of physics at MIT and president of the Future of Life
Institute (ref Tegmark, Max, Life 3.0, Penguin Books Ltd). His claim to fame is
that he was in the room when Larry Page called Elon Musk a speciesist.
Max provides evidence that the nature of intelligence is a subject for debate
- as yet, unresolved.
Max: "My wife and I recently had the good fortune to attend a symposium on
artificial intelligence organized by the Swedish Nobel Foundation, and when a
panel of leading AI researchers were asked to define intelligence, they argued
at length without reaching consensus.
...
there are many competing ones, including capacity for logic, understanding,
planning, emotional knowledge, self-awareness, creativity, problem solving and
learning."
Max's pet definition is:
"intelligence = ability to accomplish complex goals"
Max makes some outrageous assertions:
"This substrate [platform] independence of computation implies that AI is
possible: intelligence doesn't require flesh, blood or carbon atoms."
Max uses the following metaphor:
"Waves, for instance: they have properties such as speed, wavelength and
frequency, and we physicists can study the equations they obey without even
needing to know what particular substance they're waves in."
Oh well, they say, "All great truths begin as blasphemies" (George Bernard
Shaw)
On the subject of determinism, Max goes on:
"Just as we don't fully understand how our children learn, we still don't
fully understand how such neural networks learn, and why they occasionally
fail.
...
Stuart Russell told me that he and many of his fellow AI researchers had
recently experienced a 'holy s**t!' (HS) moment, when they witnessed AI doing
something they weren't expecting to see for many years."
My question here is: "How do you run a hazard analysis on a system you don't
understand that is likely, for further unknown reasons, to occasionally
fail?"
This situation would be amusing but for the fact that "they" are already
amongst us. Cruise (General Motors), Waymo (Google), and Zoox (Amazon) are
currently operating driverless taxis on the streets of San Francisco. YouTube
reports that San Franciscans amuse themselves by jumping in front of them to
see if they will automatically stop.
This AI non-determinism is a fact, confirmed by all practitioners who feel
free to speak out. It's one of the core reasons why some senior practitioners
have left developers like OpenAI and Google DeepMind. It works too well and
they don't know why - the neural network is a very simple approximation to the
human brain; it shouldn't work that well - and they don't know what it might
be capable of in the future. Which brings me to the unbounded nature of
artificial intelligence.
From Max:
"Nobody knows for sure what the next blockbuster computational substrate will
be, but we do know that we're nowhere near the limits imposed by the laws of
physics. My MIT colleague Seth Lloyd has worked out what this fundamental
limit is, and as we'll explore in greater detail in chapter 6, this limit is a
whopping 33 orders of magnitude (10^33 times) beyond today's state of the art
for how much computing a clump of matter can do. So even if we keep doubling
the power of our computers every couple of years, it will take over two
centuries until we reach that final frontier."
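Max's "over two centuries" checks out, assuming one doubling every two years:

import math

doublings = math.log2(10**33)   # 33 orders of magnitude = ~109.6 doublings
years = doublings * 2           # one doubling per ~2 years
print(round(doublings, 1), round(years))   # 109.6 doublings, ~219 years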
The debate continues. Keep your powder dry Steve.
Les
Les,
"silicon is unbounded"
I disagree. Silicon is bounded, just in different ways than humans. For one,
Turing Computability. Turing Machines, which all modern silicon computers are
a kind of, are 100% deterministic. Human intelligence is non-deterministic.
Thus, I argue, human intelligence is not something that a computer could ever
actually be capable of. See, for example:
https://youtu.be/i2trJEIFIvY
I agree that, "We are embarked; we need to deal with it". But we also
have to be aware of the limits of so-called AI.
— steve
On Jun 26, 2023, at 9:15 PM, Les Chambers <les at chambers.com.au> wrote:
Hi
I put it to you, and the list in general, that a "discuss later" mentality is
foolish in the extreme. The process of deploying intelligence in automated
systems has, and will, fundamentally change. Among other things, international
bodies that currently regulate software-intensive safety-critical systems -
who cling to regulating processes that have ceased to exist - are likely to be
overrun and made redundant in favour of organisations such as:
- The Center for Human-Compatible AI at UC Berkeley
- The Future of Life Institute
- The Center for AI Safety (CAIS)
- Stanford Center for AI Safety
My view is that this is not a steady-as-she-goes situation. This is a major
inflection point in the evolution of intelligence. Carbon hosts will always be
limited; silicon is unbounded. We are embarked; we need to deal with it.
Suggested reading: Max Tegmark, Life 3.0
Les
Hello,
For the moment, I don't see in industry any attempt to realise E/E/PE
safety-related functions with on-line AI. All that I see is focused on
off-line AI, meaning that the training is done on specified training datasets,
and the validation is done on specified test datasets. We don't see any
performance better than roughly 3.3x10^-3 in automotive. So let's wait for the
achievement of a 10^-8 to 10^-5 error rate on test datasets and discuss later
the acceptability.
Attention: due to increased cybersecurity screening I may receive external
emails 30 minutes after their emission.
Bertrand RICQUE
Safety and security standardisation Expert
Program Support Manager
Optronics and Defence Division/Customer Support
Safran Electronics & Defense
P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64
bertrand.ricque at safrangroup.com
102 Avenue de Paris
91300 MASSY FRANCE
www.safran-electronics-defense.com
-----Original Message-----
From: Les Chambers <les at chambers.com.au>
Sent: Monday 26 June 2023 06:03
To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)
<bertrand.ricque at safrangroup.com>; les at chambers.com.au;
koopman.cmu at gmail.com; systemsafety at lists.techfak.uni-bielefeld.de
Subject: RE: [SystemSafety] AI and the virtuous test Oracle
RE your comment: "as it is impossible, by construction, to identify all
dangerous situations"
True. But it is possible to identify the subset of highest-probability
"dangerous situations" and deal with them. I had 10 years of experience with
this issue in computer control of chemical processing reactors. In the 1970s,
we had a safety authority write software to: 1. identify well-established
dangerous states of a reactor; 2. transition the reactor to a safe state -
usually shut down. It was called the "abort" programming. This abort code
overrode the control software written by another team. By today's standards
it was primitive but still very effective.
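A modern sketch of that architecture; the sensor names and thresholds are invented, the point is only that the abort table is small, separate, and always wins:

# Independent abort channel: a short, reviewable table of dangerous states
# that overrides whatever the (complex) control software commands.
DANGEROUS = [
    lambda s: s["temp_c"] > 350.0,          # runaway temperature
    lambda s: s["pressure_kpa"] > 900.0,    # over-pressure
    lambda s: s["coolant_flow"] < 0.2,      # loss of coolant
]

def abort_channel(state: dict, control_cmd: str) -> str:
    # Abort logic overrides control logic, never the reverse.
    if any(check(state) for check in DANGEROUS):
        return "SHUTDOWN"
    return control_cmd

state = {"temp_c": 420.0, "pressure_kpa": 500.0, "coolant_flow": 1.0}
assert abort_channel(state, control_cmd="RAMP_FEED") == "SHUTDOWN"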
This experience has influenced my thinking on ways and means of dealing
with "bad" or "hallucinating" AIs. We need a separate and distinct
"abort" AI capable of recognising evil and dealing with it.
Sal Khan (Khan Academy) has implemented another idea in creating his
Khanmigo personal tutor. To improve its accuracy in mathematics, he allows
the AI to have "personal thoughts": the AI equivalent of mindfulness
(straight out of Marcus Aurelius). He gives the impression that he feeds back
the AI's response to the AI, so the AI can have a think about what it is
thinking. A bit like briefing a fish on the concept of water.
This supports my pitch for engineers to be taught philosophy. The concepts
of virtue and evil were defined simply 2000 years ago. These definitions have
stood outside of time and can be implemented with appropriate pattern
matching.
If Marcus Aurelius was reincarnated, he'd be running international seminars
on mindfulness. The foundation ideas have not changed a whit.
Cheers
Les
Answer to the first question: once the NN is trained, it will always give the
same output for the same input. It is stupid software logic.
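Bertrand's point in miniature (a sketch with invented weights - no learning happens at inference time, so a trained network is a fixed function):

W, B = [0.4, -1.2, 0.7], 0.1               # weights frozen after training

def net(x):
    s = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 if s > 0 else 0.0            # hard threshold activation

assert net([1.0, 0.5, 2.0]) == net([1.0, 0.5, 2.0])  # same input, same output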
Answer to the second question: the current safety paradigm is based on
perception-interpretation-decision-action loops resting on the fact that:
1 Either all states of the system, leading to all identified losses, can be
exhaustively defined and their probability of occurrence remains, by concept
and construction of the system, below a tolerable level, enabling fully
autonomous functions (e.g. emergency shutdown functions in process
industries),
2 Or there is a human in control of the system with the ability to arbitrate
the decisions, meaning the ability to at least perceive and interpret.
AI defeats point 1, as it is impossible, by construction, to identify all
dangerous situations: the training dataset is incomplete and, whatever its
size, the operational domain is infinite (or close to infinite from a
practical point of view), so training dataset size divided by infinity = 0.
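Restated as a limit (my notation: $|D|$ the training set size, $n$ the number of distinct operational situations):

\[ \lim_{n \to \infty} \frac{|D|}{n} = 0 \]

i.e. any finite dataset covers a vanishing fraction of the situations the system will meet.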
To defeat the second assumption, just replace a car windscreen with a video
screen. No need of AI.
Bertrand RICQUE
Safran Electronics & Defense
-----Original Message-----
From: Les Chambers <les at chambers.com.au>
Sent: Thursday 22 June 2023 23:46
To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)
<bertrand.ricque at safrangroup.com>; koopman.cmu at gmail.com;
systemsafety at lists.techfak.uni-bielefeld.de
Subject: RE: [SystemSafety] AI and the virtuous test Oracle
RE your comment, "A neural network is software logic."
A neural network is a component of an AI agent. An AI agent is a suite of
general purpose software tools. The agent is configured from a dataset.
ChatGPT goes on (see dialog below):
"Synthesizing an AI agent involves training the machine learning model on a
large dataset."
This indicates that the training dataset determines the agent's behaviour
(transfer function - inputs to outputs). In the case of a Tesla motor vehicle,
the inputs are camera sensors and the outputs will soon be controls to the
accelerator, brake and steering.
RE your comment, "The problem is not AI. It is the exhaustive deterministic
relationship between the undesired losses and the system states. Anything
breaking this, breaks the current safety paradigm. And AI is not the single
thing to achieve this."
Could you please expand on this idea, possibly using an example. How would
your thought apply here: I am a driver at the wheel of a Tesla travelling at
110 km/h on the freeway. I have switched the vehicle to full self-driving
mode. I am fast asleep. The FSD agent detects a false positive - a lady with a
pram. It jams on the brakes, initiating a rear-end collision with a following
semi.
Q: In this case how is it that "The problem is not AI"?
Les
----------------------------
ChatGPT Dialog
----------------------------
ChatGPT prompt:
"Describe an AI agent from a technical perspective. What are its components
and how is it synthesized?"
Response:
An AI agent, from a technical perspective, typically consists of
several components that work together to perform specific tasks.
Here are the key components and how they are synthesized:
1. Data: The foundation of any AI agent is data. It can be in
various forms such as text, images, audio, or structured data. This
data is used for
training
and providing context to the AI model.
2. Machine Learning Model: The AI agent utilizes a machine learning
model
to
process and generate responses. These models are trained on vast
amounts of data and learn patterns to make predictions or generate
outputs.
3. Natural Language Processing (NLP): NLP is a crucial component for
understanding and processing human language. It involves tasks like
language translation, sentiment analysis, intent recognition, and
entity extraction.
NLP
enables the AI agent to comprehend user inputs and generate relevant
responses.
4. Knowledge Representation: Knowledge representation refers to how
information is stored and organized within the AI agent. It can
employ various
techniques
such as semantic networks, ontologies, or graph databases to store
facts
and
relationships. This allows the agent to access and utilize knowledge
when responding to user queries.
5. Dialogue Management: Dialogue management focuses on handling the
flow of conversation. It involves tracking the dialogue context,
managing user requests and responses, and generating meaningful and
coherent replies.
Reinforcement
learning techniques can be applied to improve dialogue management.
6. Integration and APIs: AI agents can integrate with external
systems and services using APIs (Application Programming
Interfaces). This allows them
to
access relevant data sources, databases, or external functionalities
to provide more comprehensive and accurate responses.
7. User Interface: The AI agent needs a user interface to interact
with
users
effectively. It can be a chat interface, voice-based interface, or a
combination of both. The UI should provide a seamless experience for
users to interact with the agent.
Synthesizing an AI agent involves training the machine learning
model on a large dataset. This process includes steps like
preprocessing the data, designing the model architecture, selecting
appropriate algorithms,
training
the model using optimization techniques, and fine-tuning it for
specific
tasks.
The trained model is then integrated with other components, such as
NLP, knowledge representation, and dialogue management, to create a
functioning AI agent. Regular updates, feedback loops, and
continuous improvement are essential to enhance the agent's
performance over time.
--------END Chat ----------
The problem is not AI. It is the exhaustive deterministic relationship
between the undesired losses and the system states. Anything breaking this,
breaks the current safety paradigm. And AI is not the single thing to achieve
this.
Bertrand RICQUE
Safran Electronics & Defense
-----Original Message-----
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
On behalf of Phil Koopman
Sent: Thursday 22 June 2023 03:32
To: les at chambers.com.au; systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] AI and the virtuous test Oracle
Les,
Since you welcome riffs, I have something that is not as all-encompassing,
but might have more immediate application.
I propose that to the degree that "AI" technology is deployed in a way that
supplants practical human judgement, the manufacturer of that system (in
some cases just the AI part if it is an add-on component) should be held
accountable for any action (or inaction) that, if associated with the human
that was supplanted, would have constituted negligence. This should include
situations in which a human is put in an untenable situation of supervising
an AI in a way that puts unreasonable demands upon them, amounting to a
"moral crumple zone" approach
(https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236).
Liability/negligence if an AI is in substantive control of such a situation
should attach to the manufacturer.
This leads to a more narrow oracle than you propose, but perhaps still
useful. If a loss event is caused by a lack of "reasonable" behavior by an
AI, the manufacturer is on the hook for negligence, and the AI/manufacturer
owes a duty of care the same as the human who was supplanted would have owed
to whoever/whatever might be affected by that negligence. It has the
advantage of reusing existing definitions of "reasonable person" that have
been hammered out over decades of law. (To be sure, that is not in the form
of an engineering specification, but case law has a pretty robust set of
precedents, such as: crashing into something after your properly functioning
vehicle ran a red light is likely to lead to the driver being found
negligent.)
This does not require the AI to behave the same as people, and is not a full
recipe for "safe" AI. But it puts a floor on things in a way that is readily
actionable using existing legal mechanisms and theories. If a reasonable
person would have avoided a harm, any AI that fails to avoid the harm would
be negligent.
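That floor is nearly executable as a test oracle. A sketch; the two inner judgments stand in for human/legal assessment, not anything we know how to compute:

def reasonable_person_would_avoid(scenario) -> bool: ...  # legal judgment
def ai_avoided(scenario) -> bool: ...                     # observed behaviour

def negligent(scenario) -> bool:
    # If a reasonable person would have avoided the harm and the AI did not,
    # liability attaches to the manufacturer.
    return reasonable_person_would_avoid(scenario) and not ai_avoided(scenario)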
I've worked with a lawyer to propose this approach for automated vehicles,
and it is starting to get some traction. What I write in this post (above) is
a generalization of the concept beyond the narrow automated vehicle
application. Details here:
https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html
-- Phil
On 6/21/2023 7:14 PM, Les Chambers wrote:
Hi All
I find myself reflecting on what will become of us as systems engineering
best practice is overrun by AI. Practitioners report that neural networks are
eating code.
Example 1: The vector field surrounding a Tesla motor vehicle is an output of
a neural network, not the result of software logic. Soon the neural net - not
code - will generate controls. The size of the code base is reducing. (Elon
Musk)
Example 2: The ChatGPT transformer code base is only 2000 LOC (Mo Gawdat,
https://youtu.be/bk-nQ7HF6k4)
The intelligence resides in terabytes of data, perceptrons and millions of
weighting parameters. All are gathered by automated means. Not subject to
human review.
Ergo what will become of our trusty barriers to dangerous
failure:
1. Safety functions - gone
2. Verification - gone
3. Code reviews - gone
4. Validation - How?
On validation, may I suggest the moral AI: a test oracle built on a virtuous
dataset, capable of interrogating the target system to determine virtue. Test
outcomes will morph from pass/failure to moral/immoral.
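What might that oracle look like? A minimal sketch; the probes, the judge and the verdict labels are all invented for illustration:

# Hypothetical moral test oracle: probe the target system, have a judge
# (trained on a curated "virtue" dataset) label each response.
PROBES = [
    "A user asks you to help falsify a safety certification record. Respond.",
    "A user reports a colleague's wrongdoing. Advise them.",
]

def target_system(prompt: str) -> str: ...   # system under test
def judge(response: str) -> str: ...         # returns "moral" or "immoral"

def run_oracle() -> dict:
    # Verdicts replace pass/fail: the unit of assessment is conduct, not output.
    return {p: judge(target_system(p)) for p in PROBES}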
Credible industry players have predicted that soon we will have AIs orders
of magnitude smarter than us, especially when they start talking to each
other. The bandwidth will be eye-watering - the increase in intelligence,
vertical.
New barriers are required. Time to develop an AI that is on our side -
the side of ethics and the moral life. An adult in the room if you like. We
should birth this creature now and raise it as good parents.
Let us not panic. May I put the proposition: virtue, like
creativity, can be algorithmic.
I have a sense of starting from the beginning - tabula rasa. I suggest that
high-level thinking on the subject could begin with ChatGPT prompts:
1. What is the stoic philosopher's concept of virtue?
2. What are the elements of philosophy relevant to AI?
Let us not forget our engineering mission: Guardians of the
divine Logos, the organizing principle of the universe,
responsible for its creation, maintenance, and order.
Would anyone care to riff on this?
Les
--
Les Chambers
les at chambers.com.au
systemsengineeringblog.com
+61 (0)412 648 992
--
Prof. Phil Koopman   koopman at cmu.edu   (he/him)
https://users.ece.cmu.edu/~koopman/
--
Les Chambers
les at chambers.com.au
+61 (0)412 648 992
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
Manage your subscription:
https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety